Columns: url (string, length 14 to 2.42k) · text (string, length 100 to 1.02M) · date (string, length 19) · metadata (string, length 1.06k to 1.1k)
http://www.proba.jussieu.fr/mathdoc/preprints/hu.Tue_Jun_15_11_01_12_CEST_1999.html
Université Paris 6 - Pierre et Marie Curie, Université Paris 7 - Denis Diderot, CNRS U.M.R. 7599 "Probabilités et Modèles Aléatoires"

### Ray-Knight theorems related to a stochastic flow

Author(s): MSC Classification Code(s): • 60J55 Local time and additive functionals

Abstract: We study a stochastic flow of ${\cal C}^1$-homeomorphisms of $\mathbb{R}$. At certain stopping times, the spatial derivative of the flow is a diffusion in the space variable and its generator is given. This answers several questions posed in a previous study by Bass and Burdzy [3].
2017-10-23 17:01:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3322330117225647, "perplexity": 3019.0461005981388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826210.56/warc/CC-MAIN-20171023164236-20171023184236-00095.warc.gz"}
https://brilliant.org/discussions/thread/another-calculus-challenge/
# Another Calculus Challenge! Prove the following identity: $\displaystyle \prod_{r=1}^{n}{\Gamma\left(\frac{r}{n+1}\right)} = \sqrt{\frac{{(2\pi)}^{n}}{n+1}}$ Please be original. I hope you will not copy from sites like MSE (where it might be discussed). Note by Kartik Sharma 5 years, 2 months ago

Lemma: $\prod_{r=1}^{n-1}\sin\left(\dfrac{r\pi}{n}\right)=\dfrac{n}{2^{n-1}}$

Proof: $\because\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ $\displaystyle\implies\prod_{k=1}^{n-1} \sin\left(\dfrac{k\pi}{n}\right) = \left(\dfrac{1}{2i}\right)^{n-1}\prod_{k=1}^{n-1} \left(e^{\frac{k\pi i}{n}} - e^{\frac{-k\pi i}{n}}\right)$ $\displaystyle= \left(\dfrac{1}{2i}\right)^{n-1} \ \left(\prod_{k=1}^{n-1} e^{\frac{k\pi i}{n}} \right) \prod_{k=1}^{n-1} \left(1-e^{\frac{-2k\pi i}{n}} \right)$ $\displaystyle= \left(\dfrac{1}{2i}\right)^{n-1} \times i^{n-1} \prod_{k=1}^{n-1} \left(1-e^{\frac{-2k\pi i}{n}} \right)$ $\displaystyle= \dfrac{1}{2^{n-1}} \prod_{k=1}^{n-1} \left(1-e^{\frac{-2k\pi i}{n}} \right)$ Consider the factorisation $x^{n-1}+x^{n-2}+ \ldots + x + 1 = \prod_{k=1}^{n-1}\left(x-e^{\frac{-2k\pi i}{n}}\right)$. Putting $x=1$ in the above identity gives $\displaystyle \prod_{k=1}^{n-1} \left(1-e^{\frac{-2k\pi i}{n}} \right)=n$ $\displaystyle \therefore \prod_{r=1}^{n-1}\sin\left(\dfrac{r\pi}{n}\right)=\dfrac{n}{2^{n-1}}$

Now, consider $\text{S}=\sum_{r=1}^{n}\log \left(\Gamma{\left(\dfrac{r}{n+1}\right)}\right)$ $\displaystyle = \sum_{r=1}^{n}\log \left(\Gamma{\left(\dfrac{n+1-r}{n+1}\right)}\right) \ \left(\because \sum_{r=a}^{b}f(r)=\sum_{r=a}^{b}f(a+b-r)\right)$ $\displaystyle = \sum_{r=1}^{n}\log \left(\Gamma{\left(1-\dfrac{r}{n+1}\right)}\right)$ By Euler's Reflection Formula, $\displaystyle \text{S} = \sum_{r=1}^{n}\log \left(\dfrac{\pi}{\Gamma{\left(\dfrac{r}{n+1}\right)}\sin \left(\dfrac{r\pi}{n+1}\right)}\right)$ $\displaystyle =\log ({\pi}^n) - \sum_{r=1}^{n}\log\left(\Gamma{\left(\dfrac{r}{n+1}\right)}\right) - \sum_{r=1}^{n} \log \left(\sin \left(\dfrac{r\pi}{n+1}\right)\right)$ $\displaystyle \implies 2\text{S} = \log({\pi}^n) - \sum_{r=1}^{n} \log \left(\sin \left(\dfrac{r\pi}{n+1}\right)\right)$ Using the Lemma (with $n+1$ in place of $n$), we have $2\text{S}=\log({\pi}^n) - \log\left(\dfrac{n+1}{2^n}\right)$ $\displaystyle \implies \text{S} = \log \left(\sqrt{\dfrac{(2\pi)^n}{n+1}}\right)$ $\displaystyle \implies \sum_{r=1}^{n}\log \left(\Gamma{\left(\dfrac{r}{n+1}\right)}\right) = \log \left(\sqrt{\dfrac{(2\pi)^n}{n+1}}\right)$ $\displaystyle \implies \log\left(\prod_{r=1}^{n}\Gamma{\left(\dfrac{r}{n+1}\right)}\right) = \log \left(\sqrt{\dfrac{(2\pi)^n}{n+1}}\right)$ $\boxed {\therefore \displaystyle \prod_{r=1}^{n}\Gamma{\left(\dfrac{r}{n+1}\right)}=\sqrt{\dfrac{(2\pi)^n}{n+1}}}$ - 5 years, 1 month ago

Yep. That's correct! That is also the way I did it. I am curious to know if there is any other approach. Maybe using the convolution theorem? - 5 years, 1 month ago

Another way is to use the Multiplication Theorem. To prove the multiplication theorem, the limit definition of the gamma function can be used. - 5 years, 1 month ago

Oh! That's nice! I didn't know about it. "To prove the multiplication theorem, the limit definition of the gamma function can be used." I don't understand, what's the limit definition? - 5 years, 1 month ago

$\displaystyle\Gamma(z)=\lim_{n\to \infty}\dfrac{n^z n!}{z(z+1)(z+2)\ldots(z+n)}$ - 5 years, 1 month ago

Oh! Okay fine, I get it. - 5 years, 1 month ago
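Not part of the original thread, but the identity and the lemma used above are easy to sanity-check numerically with Python's standard library:

```python
import math

def gamma_product(n):
    """Left-hand side: product of Gamma(r/(n+1)) for r = 1..n."""
    prod = 1.0
    for r in range(1, n + 1):
        prod *= math.gamma(r / (n + 1))
    return prod

def closed_form(n):
    """Right-hand side: sqrt((2*pi)^n / (n+1))."""
    return math.sqrt((2 * math.pi) ** n / (n + 1))

def sine_product(m):
    """Lemma: product of sin(r*pi/m) for r = 1..m-1, expected to equal m / 2^(m-1)."""
    prod = 1.0
    for r in range(1, m):
        prod *= math.sin(r * math.pi / m)
    return prod

for n in (2, 3, 5, 10):
    print(n, gamma_product(n), closed_form(n), sine_product(n + 1), (n + 1) / 2 ** n)
```

The last two columns check the lemma in the form actually used in the proof, i.e. with $n+1$ in place of $n$.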
2020-10-20 03:58:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 31, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.991717517375946, "perplexity": 3781.8689299250436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869785.9/warc/CC-MAIN-20201020021700-20201020051700-00166.warc.gz"}
http://www.purplemath.com/learning/viewtopic.php?t=2138
## finding the area of parallelogram / combination/ permutation Sequences, counting (including probability), logic and truth tables, algorithms, number theory, set theory, etc.

jigoku_snow ### finding the area of parallelogram / combination/ permutation

1) ABCD is a parallelogram and the coordinates of A, B, C are (-8, -11), (1, 2) and (4, 3) respectively. Find the area of the parallelogram. i manage to find that D = (-5, -10). area of parallelogram = AD x height. i'm stuck here. how to find the height? i tried it by finding the angle between AB and AD first (37.32 degree) then use it to find the vertical line from B. but i dont get the right answer.

2) Points A and C have coordinates (-1, 2) and (9, 7) respectively. A rectangle ABCD has AC as a diagonal. Calculate the possible coordinates of B and D if the length of AB is 10 units. from the question, i know that AB=CD=10. the equation of line AB = x^2 +y^2+ 2x -4y-97=0. the equation of line CD= a^2 +b^2-18 a- 14b +30=0. then what should i do next?

3) If the five letters a, b, c, d, e are put into a hat, in how many ways could you draw two letters? must we use permutation or combination? if permutation, why? why can't i use combination? 5C2?

4) 3 balls are to be placed in 3 different boxes, not necessarily with 1 ball in each box. any box can hold up to 3 balls. find the number of ways the balls can be placed if i) they are all of the same colour ii) they are all of different colours. i) what is wrong with my workings? 3C3 + (3C2 x 1C1) + 3C1? ii) i have no clue at all for this. isnt it suppose to be the same as the first one?

5) GOLD MEDAL - if the 2 D letters come first and the 2 L letters come last, find how many different arrangements there are. D D _ _ _ _ _ L L 5P5 x 2P2 x 2P2. why i dont need to permute the letter D and the L?

6) 3 letters are selected at random from the word "BIOLOGY". Find the number of ways that the selection contains one letter O. 2C1 x 5C2 = 20, the answer given is 5C2 = 10, why i dont have to write 2C1?

7) The sum of the first 5 terms of a G.P. is twice the sum of the terms from the 6th to the 15th inclusive. Show that r^5 = 1/2 (SQRT3 -1). S5 = 2 (S15 - S5) a(1-r^5)/ 1-r = 2a/ 1-r [(1-r^15)-(1-r^5)] 1 - r^5 = 2(r^5 -r^15) 2r^15-3r^5+1=0 let u=r^5 2u^3 - 3u +1=0 i'm stuck here. what to do next?

nona.m.nona ### Re: finding the area of parallelogram / combination/ permuta

jigoku_snow wrote:1) ABCD is a parallelogram and the coordinates of A, B, C are (-8, -11), (1, 2) and (4, 3) respectively. Find the area of the parallelogram. Use the Distance Formula to find the length of each base. Find the slope of the line containing either base; use this to find the perpendicular slope. Find the line equation for the perpendicular through one of the corners; find the point where this line intersects the other base (or the other base, extended). Find the distance between the two bases, and thus the height. Plug the base lengths and the height into the area formula.

jigoku_snow wrote:2) Points A and C have coordinates (-1, 2) and (9, 7) respectively. A rectangle ABCD has AC as a diagonal. Calculate the possible coordinates of B and D if the length of AB is 10 units. from the question, i know that AB=CD=10. the equation of line AB = x^2 +y^2+ 2x -4y-97=0. the equation of line CD= a^2 +b^2-18 a- 14b +30=0. then what should i do next? The line AB will be linear; what you have posted is a quadratic. How did you derive this? 
(One of the links in the previous response explains how to find linear equations from two points.) As for finding the coordinates, try drawing the two points you have now, along with the line containing them. The rectangle is either to the one side of that line, or else to the other. So where must the other two points be?

jigoku_snow wrote:3) If the five letters a, b, c, d, e are put into a hat, in how many ways could you draw two letters? must we use permutation or combination? if permutation, why? why can't i use combination? 5C2? Since nothing in the exercise says anything about ordering, combinations should be sufficient. However, you should check your book and your class notes; if your book, by default, defines "drawing" as "one at a time, with order taken into account", then you should use permutations.

jigoku_snow wrote:4) 3 balls are to be placed in 3 different boxes, not necessarily with 1 ball in each box. any box can hold up to 3 balls. find the number of ways the balls can be placed if i) they are all of the same colour ii) they are all of different colours. i) what is wrong with my workings? 3C3 + (3C2 x 1C1) + 3C1? You seem to be counting the ways to choose the balls, but not the ways in which you could then distribute them (in the chosen groupings) amongst the boxes.

jigoku_snow wrote:ii) i have no clue at all for this. isnt it suppose to be the same as the first one? If you have boxes A, B, and C and if you are placing three red balls in them, then you have the first situation. However, if you have boxes A, B, and C and if you are placing a red, a green, and a yellow ball in them, then you have the second situation. In the first case, "Box A has a red ball" is the same as "Box A has a red ball", since the balls are indistinguishable. In the second case, "Box A has a red ball" is different from "Box A has a green ball".

jigoku_snow wrote:5) GOLD MEDAL - if the 2 D letters come first and the 2 L letters come last, find how many different arrangements there are. D D _ _ _ _ _ L L 5P5 x 2P2 x 2P2. why i dont need to permute the letter D and the L? How does "D" differ from "D"? How are they distinguishable?

jigoku_snow wrote:6) 3 letters are selected at random from the word "BIOLOGY". Find the number of ways that the selection contains one letter O. 2C1 x 5C2 = 20, the answer given is 5C2 = 10, why i dont have to write 2C1? What was your logic for including "2C1"?

jigoku_snow wrote:7) The sum of the first 5 terms of a G.P. is twice the sum of the terms from the 6th to the 15th inclusive. Show that r^5 = 1/2 (SQRT3 -1). S5 = 2 (S15 - S5) a(1-r^5)/ 1-r = 2a/ 1-r [(1-r^15)-(1-r^5)] How did you arrive at this equation? Using the formulation explained here: $a\left(\frac{1\, -\, r^5}{1\, -\, r}\right)\, =\, 2a\left(\frac{1\, -\, r^{15}}{1\, -\, r}\, -\, \frac{1\, -\, r^5}{1\, -\, r}\right)$ Multiply by (1 - r)/a to get: $1\, -\, r^5\, =\, 2\left(1\, -\, r^{15}\right)\, -\, 2\left(1\, -\, r^5\right)$ $1\, -\, r^5\, =\, 2\, -\, 2r^{15}\, -\, 2\, +\, 2r^5$ $2r^{15}\, -\, 3r^5\, +\, 1\, =\, 0$ Then solve the polynomial for its rational root (which clearly does not apply, as a value for the common ratio, to this series). Solve the resulting quadratic for the other root, using the definition of "geometric progression" to pick the correct value.

jigoku_snow ### Re: finding the area of parallelogram / combination/ permuta

The line AB will be linear; what you have posted is a quadratic. How did you derive this? 
since i cant find the gradient so i use the distance formula. i let B=(x,y) and D=(a, b)

What was your logic for including "2C1"? well, is because there are 2 'O' s. but i think i know why i dont have to write that, is because they are indistinguishable.

How did you arrive at this equation? from a(1-r^5)/ 1-r = 2a [ (1-r^15/ 1-r) - ( 1-r^5 / 1-r )] i simplify it.

Then solve the polynomial for its rational root i think this question is way too far for me as i not yet learn polynomial.

nona.m.nona ### Re: finding the area of parallelogram / combination/ permuta

The line AB will be linear; what you have posted is a quadratic. How did you derive this? jigoku_snow wrote:since i cant find the gradient so i use the distance formula. i let B=(x,y) and D=(a, b) You have two points on a straight line. You can definitely find the slope ("gradient"). Please study the lesson in the link, provided earlier, to learn how.

Then solve the polynomial for its rational root jigoku_snow wrote:i think this question is way too far for me as i not yet learn polynomial. You are working with polynomials, such as the quadratic you created for the first exercise and "1 - r^15" here, so I am not sure what you mean when you say that you haven't yet learned about polynomials. Please study the lesson provided in the link to learn how to solve polynomials.
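A quick numerical check of two of these exercises (not from the thread itself; the cross-product formula for the parallelogram is an alternative to the base-times-height route described above):

```python
import math

# Problem 7: with u = r^5, the equation 2u^3 - 3u + 1 = 0 factors as
# (u - 1)(2u^2 + 2u - 1) = 0, so u = 1 or u = (-1 +/- sqrt(3))/2.
u = (math.sqrt(3) - 1) / 2            # the root in (0, 1) required by the problem
print(2 * u**3 - 3 * u + 1)           # ~0, confirming u is a root

# Check the original condition S5 = 2*(S15 - S5) for a GP with a = 1, r = u^(1/5).
r = u ** (1 / 5)
S = lambda m: (1 - r**m) / (1 - r)    # sum of the first m terms
print(S(5), 2 * (S(15) - S(5)))       # the two values agree

# Problem 1: area of parallelogram ABCD from the cross product of two edge vectors.
A, B, C = (-8, -11), (1, 2), (4, 3)
AB = (B[0] - A[0], B[1] - A[1])
BC = (C[0] - B[0], C[1] - B[1])
print(abs(AB[0] * BC[1] - AB[1] * BC[0]))   # 30
```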
2016-07-25 02:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8357856869697571, "perplexity": 650.2362229740813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824201.56/warc/CC-MAIN-20160723071024-00086-ip-10-185-27-174.ec2.internal.warc.gz"}
https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/9533
## Note on Energy bands in semiconductors

### Energy bands in semiconductors

1. In semiconductors the valence band is filled at 0 K. No more electrons can be added to the valence band at 0 K due to Pauli's exclusion principle. 2. When an electric field or a temperature gradient is applied, electrons from the valence band jump to the empty conduction band; holes are created in the valence band and conduction electrons are present in the conduction band. 3. In semiconductors the band gap is somewhat larger than that of a conductor and less than that of an insulator. The approximate band gap of a narrow-band-gap semiconductor is greater than 0 eV and less than 2 eV. 4. For example, the band gap of silicon is 1.1 eV at room temperature, while that of germanium is 0.67 eV.

1. The probability that an electron reaches the conduction band is proportional to $$e^{-E_g/(k_B T)}$$; with increasing temperature the probability of finding an electron in the conduction band increases. 2. Both electrons in the conduction band and holes in the valence band participate in electrical conduction. In metals only electrons participate in conduction. 3. There are different ways to excite an electron to the conduction band: i. temperature, ii. electric field, iii. absorption of radiation. 4. In a semiconductor the bond between two atoms is a covalent bond, whereas in metals the bond between two atoms is a metallic bond.

#### Energy band in insulator

1. In an insulator, the valence band is completely filled whereas the conduction band is completely empty at 0 K. 2. A large amount of energy is required for an electron to jump from the filled valence band to the empty conduction band. 3. The band gap is greater than 2 eV. 4. At room temperature, electrons cannot reach the conduction band from the valence band, so insulators are bad conductors of heat and electricity. 5. Holes exist in metals and semiconductors but are absent in insulators. 6. In an insulator the valence electrons are tightly bound or shared with other atoms due to ionic and covalent bonding.

#### What do you mean by mobility? Derive the relationship between electrical conductivity and mobility.

The motion of electrons in a metal is random thermal motion, so the net current in the absence of an electric field is always zero. When an external field is applied, an electron experiences a force opposite to the direction of the applied electric field, i.e. $$F=eE\dotsm(1)$$ Due to this force a constant acceleration is produced, $$a=\frac{F}{m}=\frac{-eE}{m}\dotsm(2)$$ While accelerating, the electron collides with ions, other electrons and phonons, so its speed cannot increase linearly. These obstacles to the motion of the electron behave like a frictional force, known as electrical resistance. The average velocity of the electron between successive collisions remains constant and is known as the drift velocity. The magnitude of the drift velocity per unit applied electric field is known as the mobility. It is denoted by $$\mu_e$$, i.e. $$\mu_e=\frac{|V_d|}{E}\dotsm(3)$$ The current density J in terms of the carrier concentration 'n' and the drift velocity is given by $$J=neV_d$$ $$\therefore |V_d|=\frac{J}{ne}\dotsm(4)$$ From (3) and (4), $$\mu_e=\frac{J}{neE}\dotsm(5)$$ According to Ohm's law $$J=\sigma E\dotsm(6)$$ where $$\sigma$$ = electrical conductivity. $$\therefore \mu_e=\frac{\sigma}{ne}$$ $$\sigma=ne\mu_e\dotsm(7)$$ Equation (7) gives the relation between the electrical conductivity ($$\sigma$$), the number density of electrons (n) and the mobility ($$\mu_e$$). 
Mobility and carrier density for semiconductors and metals: $$n_{metal}\gg n_{semiconductor}$$ $$\mu_{metal}<\mu_{semiconductor}$$ $$\Rightarrow \sigma_{metal}>\sigma_{semiconductor}$$ The conductivity of a metal is greater than the conductivity of a semiconductor due to its much higher carrier density.

#### What is the source of resistivity in a conducting material? State Matthiessen's rule and explain the different contributions.

The resistivity of a material is the property that opposes the motion of charge carriers. The charge carriers experience resistance to their motion for several reasons. According to Matthiessen's rule, the total resistivity of a metal as a conducting material is $$\rho_{total}=\rho_{thermal}+\rho_{impurity}+\rho_{deformation}$$ where $$\rho_{thermal}$$ = resistivity due to lattice vibrations. Elastic waves in the lattice are quantized; these quanta, known as phonons, scatter electrons and oppose their motion in the metal.

#### Things to remember

$$F=eE$$ $$a=\frac{F}{m}=\frac{-eE}{m}$$ $$\mu_e=\frac{|V_d|}{E}$$ $$J=neV_d$$ $$\sigma=ne\mu_e$$ $$\rho_{total}=\rho_{thermal}+\rho_{impurity}+\rho_{deformation}$$ $$n_{metal}\gg n_{semiconductor}$$ $$\mu_{metal}<\mu_{semiconductor}$$ $$\Rightarrow \sigma_{metal}>\sigma_{semiconductor}$$
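As a rough numerical illustration of the relations above (not part of the note; the carrier density and mobility below are assumed, order-of-magnitude values only):

```python
import math

k_B = 8.617e-5              # Boltzmann constant in eV/K
T = 300.0                   # room temperature in K

# Boltzmann factor exp(-Eg / (k_B * T)) for the band gaps quoted in the note
# (Si: 1.1 eV, Ge: 0.67 eV); the smaller gap gives a vastly larger factor.
for name, E_g in (("Si", 1.1), ("Ge", 0.67)):
    print(name, math.exp(-E_g / (k_B * T)))

# Conductivity from equation (7), sigma = n * e * mu.
e = 1.602e-19               # elementary charge in C
n = 8.5e28                  # assumed metal-like carrier density in m^-3
mu = 4.3e-3                 # assumed mobility in m^2/(V s)
print(n * e * mu)           # on the order of 1e7 S/m, metal-like
```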
2017-07-28 18:52:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6098673343658447, "perplexity": 1470.1389989504391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500550977093.96/warc/CC-MAIN-20170728183650-20170728203650-00348.warc.gz"}
https://learnche.org/3E4/Assignment_and_tutorial_grading
Each question in an assignment or tutorial will be graded with either an: • $$\mathbf{\alpha}$$: you understood all or most of the question and answered all or most of it correctly. In an exam or test you would have scored close to full marks or 100%. • $$\mathbf{\beta}$$: you understood the basic concept of the question, but there was either a calculation error, or you didn't have enough details in your answer to score full marks. In a test or exam you would have scored around 60 to 75% of the marks. • $$\mathbf{\gamma}$$: you attempted the question, perhaps got one or two things right, but there was a serious deficiency in your understanding of the problem, or a serious calculation error. In an exam or test you would have got less than 50% of the marks for that question.

To calculate the final grade, an $$\mathbf{\alpha}$$ will be weighted as 1, a $$\mathbf{\beta}$$ as 0.65 and a $$\mathbf{\gamma}$$ as 0.4. Each question will be assigned a point number, shown in brackets next to the question, for example [2]. Most questions will be worth 1 or 2 points, some will be worth 3 points; occasionally they will be worth more. In the example below, there were 6 questions and one bonus question. Given the Greek letter grade and the point number, a weighted sum is created to determine the final grade. You will get extra credit for the bonus questions. Grades from those bonus questions are cumulative. This way it is possible to score above 100% for the assignment-portion of your overall grade.

Example

| Question | 1 | 2 | 3 | 4 | 5 | 6 | 7 (bonus) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Point number | [1] | [1] | [2] | [1] | [1] | [2] | [1] |
| Mark | $$\alpha$$ | $$\beta$$ | $$\beta$$ | $$\gamma$$ | $$\alpha$$ | $$\alpha$$ | $$\beta$$ |
| Weighting (convert Greek letter to a number) | 1.00 | 0.65 | 0.65 | 0.40 | 1.00 | 1.00 | 0.65 |
| Point number × weighting | 1.00 | 0.65 | 1.30 | 0.40 | 1.00 | 2.00 | 0.65 |

Grade for a student not attempting the bonus question: (1.00 + 0.65 + 1.30 + 0.40 + 1.00 + 2.00)/8 = 6.35/8 ≈ 79.4%

Grade for a student attempting the bonus question: (1.00 + 0.65 + 1.30 + 0.40 + 1.00 + 2.00 + 0.65)/8 = 7/8 = 87.5%

• Unanswered questions are weighted with a zero; as are questions that are attempted, but where really nothing was answered (e.g. you just repeat the question, or are doing the totally wrong thing). • A student scoring $$\alpha$$ on every question, including the bonus question, would achieve a grade of (1+1+2+1+1+2+1)/8 = 9/8 = 112.5%.
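A short script reproducing the example calculation (a sketch only; the letter weights and point values are exactly those in the table above):

```python
# Numeric weight for each letter grade, as defined above.
WEIGHTS = {"alpha": 1.00, "beta": 0.65, "gamma": 0.40}

def assignment_grade(questions, bonus=()):
    """questions and bonus are iterables of (points, letter) pairs.
    Bonus points are excluded from the denominator, so grades can exceed 100%."""
    total = sum(p for p, _ in questions)
    earned = sum(p * WEIGHTS[g] for p, g in list(questions) + list(bonus))
    return 100.0 * earned / total

regular = [(1, "alpha"), (1, "beta"), (2, "beta"), (1, "gamma"), (1, "alpha"), (2, "alpha")]
bonus = [(1, "beta")]

print(assignment_grade(regular))         # 79.375 (the ~79.4% example)
print(assignment_grade(regular, bonus))  # 87.5
```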
2018-10-16 14:06:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6011491417884827, "perplexity": 492.2320308517149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00147.warc.gz"}
http://bth.diva-portal.org/smash/resultList.jsf?p=51&fs=false&language=en&searchType=SIMPLE&query=&af=%5B%5D&aq=%5B%5B%7B%22organisationId%22%3A%2216833%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all
Search results 51 - 100 of 105

• 51. Jena, Ajit K. Traffic Control in ATM Networks: Engineering Impacts of Realistic Traffic Processes1996Conference paper (Refereed) This paper reviews the current state of the art in the rapidly developing areas of ATM traffic controls and traffic modeling, and identifies future research areas to facilitate the implementation of control methods that can support a desired quality of service without sacrificing network utilization. Two sets of issues are identified, one on the impacts of realistic traffic on the efficacy of traffic controls in supporting specific traffic management objectives, and the other dealing with the extent to which controls modify traffic characteristics. These issues are illustrated using the example of traffic shaping of individual ON-OFF sources that have infinite variance sojourn times. • 52. Karlsson, Pär Modelling of traffic with high variability over long time scales with MMPPs1996Conference paper (Refereed) We describe the first steps in the evaluation of an idea to match the high variability found in measurements of traffic by Markov-modulated Poisson processes (MMPPs). It has been shown that one can arrange the parameters of a complex MMPP in a way that at least makes it visually self-similar over a limited time scale. The big benefit that arises from having an MMPP as a model of the traffic is that it is much easier to analyse mathematically than competing models, such as chaotic maps and fractional Brownian motion. We suggest starting with an MMPP with two states and matching its four parameters to a certain time scale. By splitting each of the two states into two new states, and adjusting the parameters associated with the new states to another (finer) time scale, variability over larger time scales is introduced. The resulting states can then be split again, until the required accuracy is obtained. In the splitting of states, one must in each stage conserve the mean of the stage above when defining the new states. The main purpose of our models is to model the queue filling behaviour of a real-life traffic process. To determine the suitability of our models this is the most important qualification and it is used to evaluate the models. • 53. 
Karlsson, Pär On the Characteristics of WWW Traffic and the Relevance to ATM1997Conference paper (Refereed) • 54. Karlsson, Pär TCP/IP User Level Modeling for ATM1998Conference paper (Refereed) We propose a method for performance modeling of TCP/IP over ATM. The modeling is focused on user level behavior and demands. The basic components of our model are the arrivals of new TCP connections according to a Poisson process, and file sizes following heavy-tailed distributions. Using simulations we investigate the impacts of the behavior of such a source on the traffic at lower layers in the network. The benefits of considering the whole system in this way are several. Compared to commonly suggested models operating solely on the link level, a more complete and thorough view of the system is attained. The model also lends itself easily to studies of improvements and modifications of the involved protocols, as well as new ways of handling the traffic. The verification of our model demonstrates that it captures relevant features shown to be present in traffic measurements, such as high variability over long time-scales, self-similarity, long-range dependence, and buffering characteristics. • 55. Karlsson, Pär The Characteristics of WWW Traffic and the Relevance to ATM1997Conference paper (Refereed) This document describes a study of the characteristics of recorded WWWtraffic. Several parameters of the traffic are investigated, The results are used to investigate a scenario where ATM is used as the underlying transport mechanism. Problems with the deployment of ATM in the approach taken are considered and suggestions for improvements are made. The high variability of the traffic implies that a fixed allocation of bandwidth between the mean and peak rate is an infeasible way to achieve a reasonable utilization of the system, since this results in tremendous buffering demands. This calls for a different view on the way to study the system under consideration. Different properties of the traffic must be taken care of by different methods. Variations over longer time-scales are dealt with by means of capacity allocation and fluctuations with shorter duration are buffered. This realistic way of looking at the system might also put different tasks, such as traffic modelling, in a new light. For instance, does a model that is to be used for buffer dimensioning have to capture traffic behavior on time-scales that are longer than reasonably can be buffered anyway? • 56. Larsson, Sven-Olof A Local Approach for VPC Capacity Management1998Conference paper (Refereed) By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will simplify routing at transit nodes, connection admission control, and QoS management by traffic segregation. As telecommunications traffics experience variations in the number of calls per time unit due to office hours, inaccurate forecasting, quick changes in traffic loads (New Year's Eve), and changes in the types of traffic (as in introduction of new services), there is a need to cope for this by adaptive capacity reallocation between different VPCs. We have developed a type of VPC capacity management policy that uses an allocation function to determine the needed capacity for the coming updating interval, based on the current number of active connections but independent of the offered traffic. 
In this work we propose and evaluate a method to get an optimal parameter setting of the allocation function based only on average values for a network link. We also discuss the influence of different factors on the allocation function. • 57. Larsson, Sven-Olof An Evaluation of a Local Approach for VPC Capacity Management1998Conference paper (Refereed) By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will simplify routing at transit nodes, connection admission control, and QoS management by traffic segregation. As telecommunications traffics experience variations in the number of calls per time unit due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in introduction of new services), there is a need to cope for this by adaptive capacity reallocation between different VPCs. We have developed a type of local VPC capacity management policy that uses an allocation function to determine the needed capacity for the coming updating interval, based on the current number of active connections but independently of the offered traffic. We determine its optimal parameters, and the optimal updating interval for different overhead costs. Our policy is shown to be able to combine benefits from both VP and VC routing by fast capacity reallocations. The method of signaling is easy to implement and our evaluations indicate that the method is robust. This paper is based on our earlier work, described in [19]. The calculations are simplified and the methodology is changed. • 58. Larsson, Sven-Olof VPC Management in ATM Networks1998Report (Other academic) The goals of VPC management functions are to reduce the call blocking probability, increase responsiveness, stability, and fairness. As telecommunications traffics experience variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads (e.g. New Year's Eve), and changes in the types of traffic (as in introduction of new services), this can be met by adaptive capacity reallocation and topology reconfiguration. Brief explanations of the closely related concepts effective bandwidth and routing are given together with an overview of ATM. Fundamentally different approaches for VPC capacity reallocations are compared and their pros and cons are discussed. Finally, a further development of one of the approaches is described. • 59. Larsson, Sven-Olof VPC Management in ATM Networks1997Licentiate thesis, comprehensive summary (Other academic) • 60. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. A Comparison between Different Approaches for VPC Bandwidth Management1997Conference paper (Refereed) By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will enable segregation of traffics with different QoS, simplify routing at transit nodes, and simplify connection admission control. 
As telecommunications traffics experience variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in introduction of new services), there is a need to cope for this by adaptive capacity reallocation between different VPCs. The focus of this paper is to introduce a distributed approach for VPC management and compare it to a local and a centralised one. Our results show pros and cons of the different approaches. • 61. Larsson, Sven-Olof A Comparison between Different Approaches for VPC Bandwidth Management1997Conference paper (Refereed) By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs will enable segregation of traffics with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunications traffics experience variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in introduction of new services), there is a need to cope for this by adaptive capacity reallocation between different VPCs. The focus of this paper is to introduce a distributed approach for VPC management and compare it to a local and a centralised one. Our results show pros and cons of the different approaches. • 62. Larsson, Sven-Olof A Study of a Distributed Approach for VPC Network Management1997Conference paper (Refereed) By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are made. VPCs will enable segregation of traffics with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunications traffics experience variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the type of traffic (as in introduction of new services), there is a need to cope for this by adaptive capacity reallocation between different VPCs. The focus of this paper is to introduce a distributed approach for VPC management and compare it to a central one. Our results show that the distributed approach is an interesting alternative. • 63. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Performance Evaluation of a Distributed Approach for VPC Network Management1997Conference paper (Refereed) By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are made. VPCs will enable segregation of traffics with different QoS, simplify routing at transit nodes, and simplify connection admission control. As telecommunications traffics experience variations in the number of calls per time unit, due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as in introduction of new services), there is a need to cope for this by adaptive capacity reallocation. By using VPC capacity reallocation, the responsiveness for traffic fluctuations will increase. The focus of this paper is to propose and evaluate a distributed approach for VPC management with multiple routes. 
The evaluation is done in networks with one type of traffic and having Poissonian call arrivals. The size of the network is moderate and can be seen as a core ATM network. • 64. Larsson, Sven-Olof Performance Evaluation of a Local Approach for VPC Capacity Management1998In: IEICE transactions on communications, ISSN 0916-8516, E-ISSN 1745-1345, Vol. 81, no 5, p. 870-876Article in journal (Refereed) • 65. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Performance Evaluation of Different Local and Distributed Approaches for VP Network Management1996Conference paper (Refereed) As telecommunications traffics experience variations in the intensity due to office hours, inaccurate forecasting, quick changes in traffic load, and changes in the types of traffic (as in introduction of new services), there is a need to cope for this by adaptive change of the capacity in the network. This can be done by reserving transmission capacity on a series of links from one node to another making a virtual path (VP) between these nodes. The control of the VPs can be centralised, distributed or local. The idea of local and distributed approaches is to increase the robustness and performance compared to a central approach, which is depending on a central computer for the virtual path network (VPN) management. The main focus in this paper is to simulate different strategies and compare them to each other. The measured parameters are blocked traffic, the amount of signalling, unused capacity and maximal VP blocking. The simulation is done in a non-hierarchical network with one type of traffic i.e. a VP-subnetwork using statistical multiplexing and having Poissonian traffic streams. The size of the network is moderate. The VPN management will handle slow to medium fast variations, typically on the order of minutes up to hours. The traffic variations are met by reshaping the VPN in order to match the current demands. • 66. Lazraq, Tawfiq Modelling a 10 Gbits/Port Shared Memory ATM Switch1997Conference paper (Refereed) The speed of optical transmission links is growing at a rate which is difficult for the microelectronic technology of ATM switches to follow. In order to cover the transmission rate gap between optical transmission links and ATM switches, ATM switches operating at multi Gbit/s rate have to be developed. A 10 Gbit/s/port shared memory ATM switch is under development at Linkoping Institute of Technology (LiTH) and Lund Institute of Technology (LTH) in Sweden. It has 8 inputs and 8 outputs. The switch will be implemented on a single chip in 0.8 μm BiCMOS. We report on a performance analysis of the switch under a specific traffic model. This traffic model emulates the LAN type of traffic. Performance analysis is crucial for evaluating and dimensioning the very high speed ATM switch • 67. Lennerstad, Håkan Logical graphs: how to map mathematics1996In: ZDM - Zentralblatt für Didaktik der Mathematik, ISSN 0044-4103, Vol. 27, no 3, p. 87-92Article in journal (Refereed) A logical graph is a certain directed graph with which any mathematical theory or proof can be presented - its logic is formulated in graph form. Compared to the usual narrative description, the presentation usually gains in survey, clarity and precision. A logical graph formulation can be thought of as a detailed and complete map over the mathematical landscape. 
The main goal in the design of logical graphs is didactical: to improve the orientation in a mathematical proof or theory for a reader, and thus to improve the access of mathematics. • 68. Lennerstad, Håkan The directional display1997Conference paper (Refereed) The directional display contains and shows several images-which particular image is visible depends on the viewing direction. This is achieved by packing information at high density on a surface, by a certain back illumination technique, and by explicit mathematical formulas which eliminate projection deformations and make it possible to automate the production of directional displays. The display is illuminated but involves no electronic components. Patent is pending for the directional display. Directional dependency of an image can be used in several ways. One is to achieve three-dimensional effects. In contrast to that of holograms, large size and full color involve no problems. Another application of the technique is to show moving sequences. Yet another is to make a display more directionally independent than conventional displays. It is also possible and useful in several contexts to show different text in different directions with the same display. The features can be combined. • 69. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. The Geometry of the Directional Display1996Report (Refereed) The directional display is a new kind of display which can contain and show several images -which particular image is visible depends on the viewing direction. This is achieved by packing information at high density on a surface, by a certain back illumination technique, and by explicit mathematical formulas which make it possible to automatize the printing of a display to obtain desired effects. The directional dependency of the display can be used in several different ways. One is to achieve three-dimensional effects. In contrast to that of holograms, large size and full color here involve no problems. Another application of the basic technique is to show moving sequences. Yet another is to make a display more directionally independent than today’s displays. Patent is pending for the invention in Sweden. • 70. Lennerstad, Håkan An Optimal Execution Time Estimate of Static versus Dynamic Allocation in Multiprocessor Systems1992Report (Other academic) Consider a multiprocessor with $k$ identical processors, executing parallel programs consisting of $n$ processes. Let $T_s(P)$ and $T_d(P)$ denote the execution times for the program $P$ with optimal static and dynamic allocations respectively, i. e. allocations giving minimal execution time. We derive a general and explicit formula for the maximal execution time ratio $g(n,k)=\max T_s(P)/T_d(P)$, where the maximum is taken over all programs $P$ consisting of $n$ processes. Any interprocess dependency structure for the programs $P$ is allowed, only avoiding deadlock. Overhead for synchronization and reallocation is neglected. Basic properties of the function $g(n,k)$ are established, from which we obtain a global description of the function. Plots of $g(n,k)$ are included. The results are obtained by investigating a mathematical formulation. The mathematical tools involved are essentially tools of elementary combinatorics. The formula is a combinatorial function applied on certain extremal matrices corresponding to extremal programs. 
It is mathematically complicated but rapidly computed for reasonable $n$ and $k$, in contrast to the NP-completeness of the problems of finding optimal allocations. • 71. Lennerstad, Håkan Combinatorics for multiprocessor scheduling optimization and other contexts in computer architecture1996Conference paper (Refereed) The method described consists of two steps. First, unnecessary programs are eliminated through a sequence of program transformations. Second, within the remaining set of programs, sometimes regarded as matrices, those where all possible combinations of synchronizations occur equally frequently are proven to be extremal. At this stage we obtain a formulation which is simple enough to allow explicit formulas to be derived. It turns out that the same method can be used for obtaining worst-case bounds on other NP-hard problems within computer architecture. • 72. Lennerstad, Håkan Optimal combinatorial functions comparing multiprocess allocation performance in multiprocessor systems2000In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, p. 1816-1838Article in journal (Refereed) For the execution of an arbitrary parallel program P, consisting of a set of processes with any executable interprocess dependency structure, we consider two alternative multiprocessors. The first multiprocessor has q processors and allocates parallel programs dynamically; i.e., processes may be reallocated from one processor to another. The second employs cluster allocation with k clusters and u processors in each cluster: here processes may be reallocated within a cluster only. Let $T_d(P, q)$ and $T_c(P, k, u)$ be execution times for the parallel program P with optimal allocations. We derive a formula for the program independent performance function $G(k, u, q) = \sup_P T_c(P, k, u)/T_d(P, q)$. Hence, with optimal allocations, the execution of P can never take more than a factor G(k, u, q) longer time with the second multiprocessor than with the first, and there exist programs showing that the bound is sharp. The supremum is taken over all parallel programs consisting of any number of processes. Overhead for synchronization and reallocation is neglected only. We further present a tight bound which exploits a priori knowledge of the class of parallel programs intended for the multiprocessors, thus resulting in a sharper bound. The function g(n, k, u, q) is the above maximum taken over all parallel programs consisting of n processes. The functions G and g can be used in various ways to obtain tight performance bounds, aiding in multiprocessor architecture decisions. • 73. Lennerstad, Håkan Optimal Combinatorial Functions Comparing Multiprocess Allocation Performance in Multiprocessor Systems1993Report (Other academic) For the execution of an arbitrary parallel program P, consisting of a set of processes, we consider two alternative multiprocessors. The first multiprocessor has q processors and allocates parallel programs dynamically, i.e. processes may be reallocated from one processor to another. The second employs cluster allocation with k clusters and u processors in each cluster - here processes may be reallocated within a cluster only. Let $T_d(P,q)$ and $T_c(P,k,u)$ be execution times for the parallel program P with optimal allocations. We derive a formula for the program independent performance function $$G(k,u,q)=\sup_{\text{all parallel programs } P} \frac{T_c(P,k,u)}{T_d(P,q)}.$$ 
Hence, with optimal allocations, the execution of $P$ can never take more than a factor $G(k,u,q)$ longer time with the second multiprocessor than with the first, and there exist programs showing that the bound is sharp. The supremum is taken over all parallel programs consisting of any number of processes. Any interprocess dependency structure is allowed for the parallel programs, except deadlock. Overhead for synchronization and reallocation is neglected only. We further present optimal formulas which exploit a priori knowledge of the class of parallel programs intended for the multiprocessor, thus resulting in sharper optimal bounds. The function g(n,k,u,q) is the above maximum taken over all parallel programs consisting of n processes. The function s(n,v,k,u) is the same maximum, with q=n, taken over all parallel programs of $n$ processes which have a degree of parallelism characterized by a certain parallel profile vector v=(v_1,...,v_n). The functions can be used in various ways to obtain optimal performance bounds, aiding in multiprocessor architecture decisions. An immediate application is the evaluation of heuristic allocation algorithms. It is well known that the problems of finding the corresponding optimal allocations are NP-complete. We thus in effect present a methodology to obtain optimal control of NP-complete scheduling problems. • 74. Lennerstad, Håkan Optimal Scheduling Results for Parallel Computing1996In: Applications on advanced architecture computers / [ed] Astfalk, Greg, Philadelphia, USA: SIAM , 1996, p. 155-164Chapter in book (Refereed) Load balancing is one of many possible causes of poor performance on parallel machines. If good load balancing of the decomposed algorithm or data is not achieved, much of the potential gain of the parallel algorithm is lost to idle processors. Each of the two extremes of load balancing - static allocation and dynamic allocation - has advantages and disadvantages. This chapter illustrates the relationship between static and dynamic allocation of tasks. • 75. Lennerstad, Håkan Optimal Worst Case Formulas Comparing Cache Memory Associativity1995Report (Other academic) Consider an arbitrary program $P$ which is to be executed on a computer with two alternative cache memories. The first cache is set associative or direct mapped. It has $k$ sets and $u$ blocks in each set; this is called a $(k,u)$-cache. The other is a fully associative cache with $q$ blocks - a $(1,q)$-cache. We present formulas optimally comparing the performance of a $(k,u)$-cache compared to a $(1,q)$-cache for worst case programs. Optimal mappings of the program variables to the cache blocks are assumed. Let $h(P,k,u)$ denote the number of cache hits for the program $P$, when using a $(k,u)$-cache and an optimal mapping of the program variables of $P$ to the cache blocks. We establish an explicit formula for the quantity $$\inf_P \frac{h(P,k,u)}{h(P,1,q)},$$ where the infimum is taken over all programs $P$ which contain $n$ variables. The formula is a function of the parameters $n,k,u$ and $q$ only. We also deduce a formula for the infimum taken over all programs of any number of variables; this formula is a function of $k,u$ and $q$. We further prove that programs which are extremal for this minimum may have any hit ratio, i.e. any ratio $h(P,1,q)/m(P)$. Here $m(P)$ is the total number of memory references for the program P. We assume the commonly used LRU replacement policy, that each variable can be stored in one memory block, and is free to be stored in any block. 
Since the problems of finding optimal mappings are NP-hard, the results provide optimal bounds for NP-hard quantities. The results on cache hits can easily be transformed to results on access times for different cache architectures. • 76. Lennerstad, Håkan Optimal worst case formulas comparing cache memory associativity2000In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, p. 872-905Article in journal (Refereed) In this paper we derive a worst case formula comparing the number of cache hits for two different cache memories. From this various other bounds for cache memory performance may be derived. Consider an arbitrary program P which is to be executed on a computer with two alternative cache memories. The first cache is set-associative or direct-mapped. It has k sets and u blocks in each set; this is called a (k, u)-cache. The other is a fully associative cache with q blocks - a (1, q)-cache. We derive an explicit formula for the ratio of the number of cache hits h(P, k, u) for a (k, u)-cache compared to a (1, q)-cache for a worst case program P. We assume that the mappings of the program variables to the cache blocks are optimal. The formula quantifies the ratio $$\inf_P \frac{h(P,k,u)}{h(P,1,q)},$$ where the infimum is taken over all programs P with n variables. The formula is a function of the parameters n, k, u, and q only. Note that the quantity h(P, k, u) is NP-hard. We assume the commonly used LRU (least recently used) replacement policy, that each variable can be stored in one memory block, and that each variable is free to be mapped to any set. Since the bound is decreasing in the parameter n, it is an optimal bound for all programs with at most n variables. The formula for cache hits allows us to derive optimal bounds comparing the access times for cache memories. The formula also gives bounds (these are not optimal, however) for any other replacement policy, for direct-mapped versus set-associative caches, and for programs with variables larger than the cache memory blocks. • 77. Lindström, Fredric Delayed Filter Update: An Acoustic Echo Canceler Structure for Improved Doubletalk Detection2003Conference paper (Refereed) • 78. Lundberg, Lars Optimal bounds on the gain of permitting dynamic allocation of communication channels in distributed computing1999In: Acta Informatica, ISSN 0001-5903, E-ISSN 1432-0525, p. 425-446Article in journal (Refereed) Consider a distributed system consisting of n computers connected by a number of identical broadcast channels. All computers may receive messages from all channels. We distinguish between two kinds of systems: systems in which the computers may send on any channel (dynamic allocation) and systems where the send port of each computer is statically allocated to a particular channel. A distributed task (application) is executed on the distributed system. A task performs execution as well as communication between its subtasks. We compare the completion time of the communication for such a task using dynamic allocation and k(d) channels with the completion time using static allocation and k(s) channels. Some distributed tasks will benefit very much from allowing dynamic allocation, whereas others will work fine with static allocation. In this paper we define optimal upper and lower bounds on the gain (or loss) of using dynamic allocation and k(d) channels compared to static allocation and k(s) channels. Our results show that, for some tasks, the gain of permitting dynamic allocation is substantial, e.g. 
when k(s) = k(d) = 3, there are tasks which will complete 1.89 times faster using dynamic allocation compared to using the best possible static allocation, but there are no tasks with a higher such ratio. • 79. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. A Comparative Study of Three New Object-Oriented Methods1995Report (Refereed) In this paper we will compare and contrast some of the newer methods with some of the established methods in the field of object-oriented software engineering. The methods reviewed are Solution-Based Modelling, Business Object Notation and Object Behaviour Analysis. The new methods offer new solutions and ideas to issues such as object identification from scenarios, traceability supporting techniques, criteria for phase completion and method support for reliability. Despite all these contributions, we identified some issues, in particular design for dynamic binding, that still have to be taken into account in an object-oriented method. • 80. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Applying the Object-Oriented Framework Technique to a Family of Embedded Systems1996Report (Refereed) This paper discusses some experiences from a project developing an object-oriented framework for a family of fire alarm system products. TeleLarm AB, a Swedish security company, initiated the project. One application has so far been generated from the framework with successful results. The released application has shown zero defects and has proved to be highly flexible. Fire alarm systems have a long lifetime and have high reliability and flexibility requirements. The most important observations presented in this paper are that the programming language C++ can be used successfully for small embedded systems; and that object-orientation and framework techniques offer flexibility and reusability in such systems. It has also been noted that design for verifiability and testability is very important, affecting as it does both maintainability and reliability. • 81. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Verifying Framework-Based Applications by Establishing Conformance1996Report (Refereed) The use of object-oriented frameworks is one way to increase productivity by reusing both design and code. In this paper, a framework-based application is viewed as composed of a framework part and an increment. It is difficult to relate the intended behaviour of the final application to specific increment requirements; it is therefore difficult to test the increment using traditional testing methods. Instead, the notion of increment conformance is proposed, meaning that the increment is designed conformant to the intentions of the framework designers. This intention is specified as a set of composability constraints defined as an essential part of the framework documentation. Increment conformance is established by verifying the composability constraints by means of code and design inspection. Conformance of the increment is a necessary but not sufficient condition for correct behaviour of the final application. • 82. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Blekinge Institute of Technology, Department of Telecommunications and Mathematics.
Tight Bounds on the Minimum Euclidean Distance for Block Coded Phase Shift Keying1996Report (Refereed) We present upper and lower bounds on the minimum Euclidean distance $d_{Emin}(C)$ for block coded PSK. The upper bound is an analytic expression depending on the alphabet size $q$, the block length $n$ and the number of codewords $|C|$ of the code $C$. This bound is valid for all block codes with $q\geq4$ and with medium or high rate - codes where $|C|>(q/3)^n$. The lower bound is valid for Gray coded binary codes only. This bound is a function of $q$ and of the minimum Hamming distance $d_{Hmin}(B)$ of the corresponding binary code $B$. We apply the results to two main classes of block codes for PSK: Gray coded binary codes and multilevel codes. There are several known codes in both classes which satisfy the upper bound on $d_{Emin}(C)$ with equality. These codes are therefore best possible, given $q,n$ and $|C|$. We can deduce that the upper bound for many parameters $q,n$ and $|C|$ is optimal or near optimal. In the case of Gray coded binary codes, both bounds can be applied. It follows for many binary codes that the upper and the lower bounds on $d_{Emin}(C)$ coincide. Hence, for these codes $d_{Emin}(C)$ is maximal. (A small brute-force illustration of the quantity $d_{Emin}(C)$ is given in a sketch after this listing.) • 83. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Some Results on Optimal Decisions in Network Oriented Load Control in Signaling Networks1996Report (Refereed) Congestion control in the signaling system number 7 is a necessity to fulfil the requirements of a telecommunication network that satisfies customers’ requirements on quality of service. Heavy network load is an important source of customer dissatisfaction as congested networks result in deteriorated quality of service. With the introduction of a Congestion Control Mechanism (CCM), that annihilates service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically. Annihilation of already delayed sessions lets other sessions benefit and increases the useful overall network throughput. This report discusses the importance of customer satisfaction and the relation between congestion in signaling networks and customer dissatisfaction. The advantage of using network profit as a network performance metric is also addressed in this report. Network profit and network costs are given a stringent definition with respect to customer satisfaction. An expression of the marginal cost for accepting or annihilating sessions is also given. Finally, the CCM is refined using a decision theoretic approach that bases the decision of annihilation on the average profit attached to each of the two possible actions, i.e. annihilate the session or not. The decision theoretic approach uses a load dependent probability distribution for the completion time. The results in this report indicate that the decision theoretic approach to the CCM (DCCM) is robust and can handle very high overloads, both transient and focused, keeping the network profit on a high level. • 84. Pettersson, Stefan A Decision Theoretic Approach to Congestion Control in Signalling Networks1996Conference paper (Refereed) Congestion control in the signaling system number 7 is a necessity to fulfil the requirements of a telecommunication network that satisfies customers’ requirements on quality of service. Heavy network load is an important source of customer dissatisfaction as congested networks result in deteriorated quality of service.
With the introduction of a Congestion Control Mechanism (CCM), that annihilates service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically. Annihilation of already delayed sessions lets other sessions benefit and increases the useful overall network throughput. This paper uses a decision theoretic approach that bases the decision of annihilation on the average profit attached to each of the two possible actions, i.e. annihilate or not. We describe the load dependent probability distribution for the completion time, and discuss the use of attributes attached to each session describing the outcome of any performed CCM action, e.g. the bad will costs associated with annihilation. These attributes are also used to calculate the network profit for a given network load. The results in this paper indicate that the decision theoretic approach to the CCM (DCCM) can handle very high overloads, keeping the network profit on a reasonable level. • 85. Pettersson, Stefan A Profit Optimizing Strategy for Congestion Control in Signaling Networks1995Conference paper (Refereed) Congestion control in the signaling system number 7 (SS7) is a necessity to fulfil the requirements of a telecommunication network that satisfies customers’ requirements on quality of service. Heavy network load is an important source of customer dissatisfaction as congested networks result in deteriorated quality of service. With the introduction of a Congestion Control Mechanism (CCM), that annihilates service sessions with a predicted completion time greater than the maximum allowed completion time for the session, network performance improves dramatically. Annihilation of already delayed sessions lets other sessions benefit and increases the overall network throughput. This paper investigates the possibilities of using a decision theoretic approach that bases the decision of annihilation on the average loss attached to each of the two possible actions, i.e. annihilate or not. Attributes are attached to each session describing the outcome of any performed CCM action, e.g. the economic loss connected with the annihilation of a session. The attributes are also used to calculate the network loss for a given network load. The results in this paper indicate that the decision theoretic approach can decrease the network loss by up to 40% for the improved CCM (ICCM) compared to an ordinary CCM. • 86. Pettersson, Stefan Economical Aspects of a Congestion Control Mechanism in a Signaling Network1995Conference paper (Refereed) Congestion control in signaling system 7 (SS7) is a necessity for fulfilling the requirements of a telecommunication network that provides customer satisfaction. Heavy network load is a source of customer dissatisfaction as congested networks result in unsuccessful calls. With the introduction of network profit as a metric, it is possible to study the efficiency of an annihilation congestion control algorithm (ACCM) from the operator's point of view. Several strategies for applying the ACCM are investigated. A model describing the income and cost for a call is also introduced. • 87. Pettersson, Stefan Network Oriented Load Control in Intelligent Networks1997Conference paper (Refereed) Heavy network load in the signaling system number 7 is an important source of customer dissatisfaction as congested networks result in deteriorated quality of service.
With the introduction of a Congestion Control Mechanism (CCM), that rejects service sessions with a predicted completion time greater than a maximum allowed completion time for the session, network performance, and thus customer satisfaction, improves dramatically. Rejection of already delayed sessions lets other sessions benefit and increases the useful overall network throughput. The decision of rejection is based upon Bayesian decision theory that takes into account the cost or revenue attached to each action, i.e. whether to reject the session or not. More valuable sessions are then given priority through the network, at the expense of less valuable sessions. To clearly display the benefit from this approach we propose to use network profit as a performance metric. This paper summarises the ongoing research and discusses the future direction of this project. Of special interest is deployment of new services in the IN and the implications this has for network load and the profit made by the operator. • 88. Popescu, Adrian Modeling and Analysis of Network Applications and Services1998Other (Other academic) Recent traffic measurement studies from a wide range of working packet networks have convincingly shown the presence of self-similar (long-range dependence, LRD) properties in both local and wide area traffic traces. LRD processes are characterized (in the case of finite variance) by self-similarity of aggregated summands, slowly decaying covariances, heavy-tailed distributions and a spectral density that tends to infinity for frequencies approaching zero. This discovery calls into question some of the basic assumptions made by most of the research in control, engineering and operations of broadband integrated systems. At present, there is mounting evidence that self-similarity is of fundamental importance for a number of teletraffic engineering problems, such as traffic measurements and modeling, queueing behavior and buffer sizing, admission control, congestion control, etc. These impacts have highlighted the need for precise and computationally feasible methods to estimate diverse LRD parameters. Especially real-time estimation of measured data traces and off-line analysis of enormous collected data sets call for accurate and effective estimation techniques. A wavelet-based tool for the analysis of LRD is presented in this paper together with a semi-parametric estimator of the Hurst parameter. The estimator has been proved to be unbiased under fractional Brownian motion (fBm) and Gaussian assumptions. Analysis of the Bellcore Ethernet traces using the wavelet-based estimator is also reported. • 89. Popescu, Adrian NEMESIS: A Multigigabit Optical Local Area Network1994Conference paper (Refereed) A new architecture is developed for an integrated 20 Gbps fiber optic Local Area Network (LAN) that supports data rates up to 9.6 Gbps. The architecture does not follow the standard, vertically-oriented Open System Interconnection (OSI) layering approach of other LANs. Instead, a horizontally-oriented model is introduced for the communication process to open up the three fundamental bottlenecks, i.e., opto-electronic, service and processing bottlenecks, that occur in a multi-Gbps integrated communication over multiwavelength optical networks. Furthermore, the design also follows a new concept called Wavelength-Dedicated-to-Application (WDA) in opening up the opto-electronic and service bottlenecks.
Separate, simplified, and application-oriented protocols supporting both circuit- and packet-switching are used to open up the processing bottleneck. • 90. Popescu, Adrian Dynamic Time Sharing: A New Approach For Congestion Management1996Conference paper (Refereed) A new approach for bandwidth allocation and congestion control is reported in this paper, which is of the Rate Controlled admission with Priority Scheduling service type. It is called Dynamic Time Sharing (DTS), because of the dynamic nature of the procedure for resource partitioning to allocate and guarantee a required bandwidth for every traffic class. This approach is based on guaranteeing specific traffic parameters (bandwidth requirements) through a policing unit, and then optimizing the bandwidth assignment within the network for specific parameters of interest (like delay or jitter, and loss). The optimization process is based on the parameters guaranteed by the policing unit. A batch admission policy is used at the edges of the network according to a specific framing strategy to follow the traffic characteristics (e.g., the traffic constraint function) of different traffic classes. On the other hand, another framing (congestion control) strategy is used within the network, which is based on different (delay/loss) requirements of the traffic classes. Proper management of bandwidth and buffer resources is provided in every (switch) node of the network, such as to guarantee the diverse performance of interest. • 91. Popescu, Adrian Dynamic Time Sharing: A New Approach For Congestion Management1997In: ATM Networks: Performance Modelling and Analysis / [ed] Kouvatsos, Demetres, London: Chapman & Hall , 1997Chapter in book (Other academic) A new approach for bandwidth allocation and congestion control is reported in this paper, which is of the Rate Controlled admission with Priority Scheduling service type. It is called Dynamic Time Sharing (DTS), because of the dynamic nature of the procedure for resource partitioning to allocate and guarantee a requested bandwidth for every traffic class. This approach is based on guaranteeing specific traffic parameters (bandwidth requirements) through a policing unit, and then optimizing the bandwidth assignment within the network for specific parameters of interest (like delay or jitter, and loss). The optimization process is based on the parameters guaranteed by the policing unit. The policing unit also functions to enforce the "fair" sharing of resources. A batch admission policy is used at the edges of the network according to a specific framing strategy to follow the traffic characteristics (e.g., the traffic constraint function) of different traffic classes. On the other hand, the DTS mechanism allows for another framing (congestion control) strategy to be used within the network, which is based on different (delay/loss) requirements of the traffic classes. Proper management of bandwidth and buffer resources is provided in every (switch) node of the network, such as to guarantee the diverse performance of interest. • 92. Pruthi, Parag HTTP Interactions with TCP1998Conference paper (Refereed) In this paper we describe our simulation models for evaluating end-to-end performance of HTTP transactions. We first analyze several gigabytes of collected traffic traces from a production Frame Relay network. Using these traces we extract web traffic and analyze the server web pages accessed by actual users. 
We analyze over 25000 web pages and develop a web client/server interaction model based upon our analysis of many server contents. We make specific contributions by analyzing the popularity of web servers and the number of bytes transferred from them during a busy hour. We also compute the distribution of the number of embedded items within a web document. We then use these models to drive a network simulation and show the interactions between the TCP/IP flow control and retransmission mechanism and the source parameters. One of our important contributions has been to show that the Hurst parameter is robust with regard to TCP/IP flow and error control. Using the simulation studies we show that the distribution of end-to-end application message delay has a heavy-tail distribution and we discuss how these distributions arise in the network context. • 93. Pruthi, Parag Effect of Controls on Self-Similar Traffic1997Conference paper (Refereed) Tremendous advances in technology have made possible Giga- and Terabit networks today. Similar advances need to be made in the management and control of these networks if these technologies are to be successfully accepted in the marketplace. Although years of research have been expended on designing control mechanisms necessary for fair resource allocation as well as guaranteeing Quality of Service, the discovery of the self-similar nature of traffic flows in all packet networks and services, irrespective of topology, technology or protocols, leads one to wonder whether these control mechanisms are applicable in the real world. In an attempt to answer this question we have designed network simulators consisting of realistic client/server interactions over various protocol stacks and network topologies. Using this testbed we present some preliminary results which show that simple flow control mechanisms and bounded resources cannot alter the heavy-tail nature of the offered traffic. We also discuss methods by which application level models can be designed and their impacts on network performance can be studied. • 94. Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, School of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, Faculty of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Science. Blekinge Institute of Technology, School of Engineering, Department of Mechanical Engineering. Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. A new equation and exact solutions describing focal fields in media with modular nonlinearity2017In: Nonlinear dynamics, ISSN 0924-090X, E-ISSN 1573-269X, Vol. 89, no 3, p. 1905-1913Article in journal (Refereed) Brand-new equations which can be regarded as modifications of Khokhlov–Zabolotskaya–Kuznetsov or Ostrovsky–Vakhnenko equations are suggested. These equations are quite general in that they describe the nonlinear wave dynamics in media with modular nonlinearity. Such media exist among composites, meta-materials, inhomogeneous and multiphase systems. These new models are interesting for two reasons: (1) the equations admit exact analytic solutions and (2) the solutions describe real physical phenomena. The equations model nonlinear focusing of wave beams.
It is shown that inside the focal zone a stationary waveform exists. Steady-state profiles are constructed by the matching of functions describing the positive and negative branches of exact solutions of an equation of Klein–Gordon type. Such profiles have been observed many times during experiments and numerical studies. The non-stationary waves can contain singularities of two types: discontinuity of the wave and of its derivative. These singularities are eliminated by introducing dissipative terms into the equations—thereby increasing their order. © 2017 The Author(s) • 95. Rönngren, Robert Parallel Simulation of a High Speed LAN1994Conference paper (Refereed) In this paper we discuss modeling and simulation of a multigigabit/s LAN. We use parallel simulation techniques to reduce the simulation time. Optimistic and conservative parallel simulators have been used. Our results on a shared memory multiprocessor indicate that the conservative method is superior to the optimistic one for the specific application. Further, the parallel simulator based on the conservative scheme shows a linear speedup for large networks. • 96. Svensson, Anders Dynamic Alternation between Load Sharing Algorithms1992Conference paper (Refereed) Load sharing algorithms can use sender-initiated, receiver-initiated, or symmetrically-initiated schemes to improve performance in distributed systems. The relative performance of these schemes has been shown to depend on the system load. The author proposes an adaptive symmetrically-initiated scheme where all nodes alternate between a sender-initiated and a receiver-initiated algorithm depending on the current system load. Simulations show that the mean job response times for the proposed scheme are superior to the best attained by its two algorithms used separately and simultaneously. The alternating scheme performs best at intermediate and high loads, when the job arrival process is bursty, and when it is costly to find complementary nodes. • 97. Svensson, Anders History, an Intelligent Load Sharing Filter1990Conference paper (Refereed) The author proposes a filter component to be included in a load-sharing algorithm to detect short-lived jobs not worth considering for remote execution. Three filters are presented. One filter, called History, detects short-lived jobs by using job names and statistics based on previous executions. Job traces are allocated from diskless work stations connected by a local area network and supported by a distributed file system. Trace-driven simulation is then used to evaluate History with respect to the other filters. Two load-sharing algorithms show significant improvement of the mean job response ratio when the History filter is added. • 98. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. Optimization of Circuit Switched Networks1996Conference paper (Refereed) • 99. Blekinge Institute of Technology, Department of Telecommunications and Mathematics. The Adaptive Cross Validation Method-applied to the Control of Circuit Switched Networks1996Conference paper (Refereed) The adaptive cross validation (AVC) method is a very general method for system performance optimization. It can be used in the design phase to determine how a system should be designed, or it could be used during the operational phase to dynamically determine the control of the system. 
A novel formalism, suitable for presentation of mathematical models in technical publications, is used to formally describe the studied system, which is a circuit switched network. The generality of the method implies that it is valid for general distributions describing inter-arrival and holding times, as well as for complex routing methods which make an analytical approach infeasible. The method is used for real-time control of two routing algorithm parameters. The results are very satisfactory. • 100. Svensson, Anders The Adaptive Cross Validation Method: Design and Control of Dynamical Systems1996Conference paper (Refereed) A new approach to find optimal settings of model parameters as well as optimal models of dynamical systems is presented. The Adaptive Cross Validation (ACV) method is based on a number of well known ideas which are combined into a general optimization tool. It could be used for both design (off-line) and control (on-line) of a wide range of applications, which can be both continuous and discrete. The method is specially suited for modular optimization problems. A new mathematical model formalism for describing systems is also introduced. A controlled system application, a circuit-switched network, is used as an example to clarify the method.
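As flagged under entry 82 above, here is a small, purely illustrative brute-force computation of the minimum Euclidean distance $d_{Emin}(C)$ for a toy Gray-coded binary code mapped onto 4-PSK. It only illustrates the quantity that the cited bounds constrain, not the bounds themselves; the binary code $B$ chosen here is an arbitrary example, not one from the report.

```python
import itertools
import numpy as np

q = 4                                                  # 4-PSK alphabet
gray = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}    # Gray map: bit pair -> phase index

def psk_word(bits):
    """Map a binary word (even length) to a sequence of unit-energy 4-PSK symbols."""
    pairs = [tuple(bits[i:i + 2]) for i in range(0, len(bits), 2)]
    return np.array([np.exp(2j * np.pi * gray[p] / q) for p in pairs])

# Toy binary code B: the [4,2] code spanned by 1100 and 0011 (an arbitrary example).
basis = [(1, 1, 0, 0), (0, 0, 1, 1)]
B = {tuple((a * basis[0][i] + b * basis[1][i]) % 2 for i in range(4))
     for a in (0, 1) for b in (0, 1)}

C = {bits: psk_word(bits) for bits in B}               # the block coded PSK code C

d2_min = min(np.sum(np.abs(C[u] - C[v]) ** 2)          # squared Euclidean distance
             for u, v in itertools.combinations(C, 2))
print("d_Emin(C) =", np.sqrt(d2_min))
```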
2019-11-17 20:24:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4057669937610626, "perplexity": 1707.1659125736066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669276.41/warc/CC-MAIN-20191117192728-20191117220728-00057.warc.gz"}
https://dmoj.ca/problem/bf3
## Next Prime View as PDF Points:5 Time limit:2.0s Memory limit:64M Problem type ##### Brute Force Practice 3 You love prime numbers. You own a number, but you suspect it might not be prime. You want a prime number, but it must be at least as large as the number you currently own. Find the smallest number that satisfies those conditions. #### Input The first line will have the integer () #### Output Print the number you want. #### Sample Input 4 #### Sample Output 5 • nishux commented on Nov. 14, 2017 There seems to be a problem with test case #2 and #6 I have verified my results and also check that they are prime. In test case #2 the answer is 3 for which the number can either be 1 or 2, I have also verified that my code does not return 3 for any other value. Can someone please check the test cases. • Injust commented on Nov. 15, 2017 Instead of suggesting that correct solutions from 400+ other users are flawed, you may want to take another look at the problem statement (emphasis mine): You want a prime number, but it must be at least as large as the number you currently own. • Anix55 commented on Oct. 15, 2015 TLE what does TLE mean? • Xyene commented on Oct. 15, 2015 It means your program has exceeded the per-testcase time limit (in this case, 2 seconds). You can hover over status codes, as their alt text provides more details on what they are. • bobhob314 commented on Jan. 6, 2015 Sieve? Are there any pointers you could give me to solve this question? Should I use the Sieve or something? I'm stuck :( • FatalEagle commented on Jan. 6, 2015 The name of the problem might give a hint. Specifically, "Brute Force Practice 3". • FatalEagle commented on Nov. 14, 2014 Hint Even though this is Brute Force Practice 3, you still need a little optimization -- for example, can you determine if a number is prime just by checking divisors up to and including the square root of a number? • BMP commented on Nov. 14, 2014 dafuq Keep getting 90%, first test fails. What... • FatalEagle commented on Nov. 14, 2014 Partial output has been enabled. You can see what output your program produced (up to about 32 bytes for now) and try to debug your code. • PaulOlteanu commented on Nov. 24, 2014 Has the output been disabled? • PaulOlteanu commented on Nov. 24, 2014 Nevermind. It seems the output only displays if you have less than a certain number of mistakes. • FatalEagle commented on Nov. 24, 2014 Actually, there will not be an arrow if your program does not produce any output. That was what was really happening. • Yuting9 commented on Oct. 27, 2014 Last One Lucky I can't seem to get the last test to work for my code... • FatalEagle commented on Oct. 27, 2014 Your code is incorrect. There is a corner case you missed. • Yuting9 commented on Oct. 27, 2014 what do you mean, "corner case?" • FatalEagle commented on Oct. 27, 2014 If I wrote it here, it wouldn't be a corner case anymore. Make sure your solution is valid for all possible inputs within the range specified. • JannyWang commented on Oct. 16, 2014 Janny Does it have to work for decimals? • quantum commented on Oct. 16, 2014 Inputs are integer only. • quantum commented on Sept. 27, 2014 Not fair Not fair how /^1?$|^(11+?)\1+$/ fails. • FatalEagle commented on Sept. 27, 2014 If only the memory limit was a few GB more and you had a few more days to run your program! • MateiG commented on June 19, 2017 Lol
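Following the hint in the comments (trial division up to and including the square root), a straightforward brute-force Python solution sketch for this problem might look like the following. The exact input bound is not visible in the scraped statement above, so treat the adequacy of plain trial division as conditional on a modest bound.

```python
import math

def is_prime(n):
    """Trial division up to and including sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

n = int(input())
while not is_prime(n):   # smallest prime that is at least n
    n += 1
print(n)
```

For the sample input 4 this prints 5, and for inputs 1 or 2 it prints 2, consistent with the "at least as large" wording discussed in the comments.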
2017-11-23 09:32:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3542464077472687, "perplexity": 1851.7681955333746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806768.19/warc/CC-MAIN-20171123085338-20171123105338-00488.warc.gz"}
https://docs.astropy.org/en/latest/api/astropy.units.equivalencies.thermodynamic_temperature.html
# thermodynamic_temperature

astropy.units.equivalencies.thermodynamic_temperature(frequency, T_cmb=None)[source] Defines the conversion between Jy/sr and “thermodynamic temperature”, $$T_{CMB}$$, in Kelvins. The thermodynamic temperature is a unit very commonly used in cosmology. See eqn 8 in [1]: $$K_{CMB} \equiv I_\nu / \left(2 k \nu^2 / c^2 f(\nu) \right)$$ with $$f(\nu) = \frac{ x^2 e^x}{(e^x - 1 )^2}$$ where $$x = h \nu / k T$$

Parameters: frequency : Quantity with spectral units. The observed spectral equivalent Unit (e.g., frequency or wavelength). T_cmb : Quantity with temperature units or None. The CMB temperature at z=0. If None, the default cosmology will be used to get this temperature.

Notes: For broad band receivers, this conversion does not hold as it depends strongly on the frequency.

References: 1 Planck 2013 results. IX. HFI spectral response https://arxiv.org/abs/1303.5070

Examples: Planck HFI 143 GHz: >>> from astropy import units as u >>> from astropy.cosmology import Planck15 >>> freq = 143 * u.GHz >>> equiv = u.thermodynamic_temperature(freq, Planck15.Tcmb0) >>> (1. * u.mK).to(u.MJy / u.sr, equivalencies=equiv) <Quantity 0.37993172 MJy / sr>
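As a sanity check of the quoted example, the conversion factor can be reproduced by hand from the formula above using scipy's physical constants. The 2.7255 K value is assumed here to approximate Planck15.Tcmb0.

```python
import numpy as np
from scipy import constants as const

nu = 143e9          # Hz
T_cmb = 2.7255      # K, approximately the Planck15 CMB temperature at z=0

x = const.h * nu / (const.k * T_cmb)
f = x**2 * np.exp(x) / (np.exp(x) - 1.0)**2        # f(nu) from the docstring

# Intensity per unit thermodynamic temperature, in W m^-2 Hz^-1 sr^-1 per K
dI_dT = 2 * const.k * nu**2 / const.c**2 * f

# 1 mK expressed in MJy/sr (1 Jy = 1e-26 W m^-2 Hz^-1)
print(1e-3 * dI_dT / 1e-26 / 1e6)   # ~0.38, matching the 0.37993 MJy/sr above
```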
2020-01-27 06:48:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47982120513916016, "perplexity": 13132.297187741558}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694908.82/warc/CC-MAIN-20200127051112-20200127081112-00552.warc.gz"}
http://math.stackexchange.com/questions/189338/when-min-max-max-min
# When $\min \max = \max \min$?

Let $X \subset \mathbb{R}^n$ and $Y \subset \mathbb{R}^m$ be compact sets. Consider a continuous function $f : X \times Y \rightarrow \mathbb{R}$. Say under which condition we have $$\min_{x \in X} \max_{y \in Y} f(x,y) = \max_{y \in Y} \min_{x \in X} f(x,y).$$ In general we only have $\max_{y \in Y} \min_{x \in X} f(x,y) \leq \min_{x \in X} \max_{y \in Y} f(x,y)$, so here we are looking for conditions on $f$ under which equality holds. - I would suggest you try $n=m=1, X=Y=[0,1]$ and some functions that are easy to handle, like a constant, $f(x,y)=x, f(x,y)=xy, f(x,y)=x+y$ and see if this gives you any ideas. –  Ross Millikan Aug 31 '12 at 16:59 Typical sufficient conditions are $X,Y$ compact and convex, $f$ continuous, convex in $x$, concave in $y$. See Rockafellar, "Convex Analysis", Corollary 37.3.2. –  copper.hat Aug 31 '12 at 17:05 In the context of Continuous Logic this is to ask when a formula $f(x,y)$ is such that $\exists x\forall y f(x,y)\iff\forall y\exists x f(x,y)$. –  Asaf Karagila Aug 31 '12 at 17:09 Take two examples: $$f(x,y) = \cos(x+y)$$ and $$f(x,y)=2xy(x-y).$$ In the first case $\max_y \cos(x+y)=1$ for every $x$ and $\min_x \cos(x+y)=-1$ for every $y$ (when the variables range over a full period), so $\min_x \max_y f(x,y)=1$ while $\max_y\min_x f(x,y)=-1$. In the second case the inner optimizers are $x=0.5y$ (for the minimum over $x$) and $y=0.5x$ (for the maximum over $y$), since $\frac{\partial^2 f}{\partial x^2}>0$ and $\frac{\partial^2 f}{\partial y^2}<0$ on the relevant domain, so these stationary points are a minimum in $x$ and a maximum in $y$ respectively. As a result one gets $$\min_x \max_y f(x,y) = \max_y \min_x f(x,y)$$ for the second case.
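A quick numerical illustration of the two quantities on a grid (a convex–concave example, which satisfies the equality, versus the cosine example, which does not):

```python
import numpy as np

def minimax_gap(f, X, Y):
    """Return (min_x max_y f, max_y min_x f) evaluated on a grid."""
    F = f(X[:, None], Y[None, :])          # F[i, j] = f(x_i, y_j)
    min_max = F.max(axis=1).min()          # min over x of max over y
    max_min = F.min(axis=0).max()          # max over y of min over x
    return min_max, max_min

X = np.linspace(-1.0, 1.0, 401)
Y = np.linspace(-1.0, 1.0, 401)

# Convex in x, concave in y: the two values coincide (saddle value 0).
print(minimax_gap(lambda x, y: x**2 - y**2, X, Y))

# cos(x + y) on [0, 2*pi]^2: min max is 1 while max min is -1.
T = np.linspace(0.0, 2 * np.pi, 401)
print(minimax_gap(lambda x, y: np.cos(x + y), T, T))
```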
2015-04-28 16:32:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477380514144897, "perplexity": 270.92323391687864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661733.69/warc/CC-MAIN-20150417045741-00253-ip-10-235-10-82.ec2.internal.warc.gz"}
https://pnrs1.aprogeopr.org.br/blog/index.php?entryid=5679
## Blog posts by Madelaine Wilsmore

### What The Experts Aren't Saying About Boosting Testosterone And How It Affects You

Testosterone boosters are supplements that raise testosterone levels in the body. High levels of testosterone are associated with increased strength and muscle mass. Nevertheless, most people are not blessed with such high levels. Individuals who have a high level of testosterone are usually very athletic and tend to pack on lean muscle fast. If you are thinking about gaining muscle quickly, a testosterone booster may be the answer for you. Hunter Test is a powerful testosterone booster designed for active, sporty, focused men. It consists of herbal extracts that are aphrodisiacs and enhance energy levels. TestoFuel is a weightlifting testosterone product that supports healthy testosterone levels in the body. It comes in capsule form. Take four capsules daily and start seeing results within a few days. Continue using the supplement on a regular basis to see better results. There are several types of testosterone boosters, and they each have different side effects. Many may cause acne, oily skin, and increased sweating. Other significant negative effects include liver damage and heart attack. You should talk to a doctor before taking any type of testosterone booster. However, most men will not have any side effects when they take these types of bodybuilding supplements. Some of these products contain D-aspartic acid, which regulates testosterone synthesis and triggers the production of luteinizing hormone (LH), the precursor of sex hormones. Some of these supplements can also have other benefits, such as enhanced immunity, increased stamina, and decreased muscle loss. Legal steroids can certainly be used by women for the same reasons as men. Still, they should be used with caution, as they can have dramatic effects on their bodies. Some legal steroids may even cause female users to develop masculine characteristics. For example, women may develop masculine facial features, gain unwanted body fat, deepen their voices, and grow facial and body hair. Testo-Max is an all-natural testosterone booster which has been used by bodybuilders for many years. It increases energy levels, builds muscle, and helps men achieve a lean body. In addition, it increases luteinizing hormone throughout the body, which triggers the production of testosterone. Testo-Max also improves your performance in the gym. This product contains zinc and vitamin D, which are essential for testosterone production. Another legal steroid supplement that is available is DecaDuro. It works as a Deca-Nandrolone substitute and enhances stamina. It is also effective at burning excess body fat. It likewise increases testosterone levels, which promotes overall conditioning. Combined with a workout program, these legal steroids can help a person gain considerable muscle.
Exercising on a regular basis is one of the best ways to increase testosterone. Various types of physical exercise, such as walking, strength training, and cardio, can raise the hormone. Working out regularly can also improve heart health and muscle mass. The more muscle a person has, the more energy he has. TestoFuel is a supplement made up of eleven strong organic ingredients, which include botanical extracts as well as vitamins and nutrients. It has helped many men boost their sexual performance, increase their energy levels, and build muscle. The high dosage and effectiveness of the product make it a popular option among bodybuilders and athletes. When purchasing legal steroids at #link#, remember to pay attention to a trusted manufacturer. A legitimate supplier has to be able to guarantee delivery and refund the purchase cost in the event that you're not happy. Also, the best-known companies offer excellent discounts for bulk purchases. Generally, legal steroids cost from $40 to $80 for a month's supply. If you are unsure of the product quality, read reviews posted by other consumers. These reports will give you a good idea of how effective the steroid will end up being. TestoPrime is one of the best-known all-natural testosterone boosters on the market. It contains powerful organic ingredients and has been evaluated through human tests. It promises to improve libido, decrease body fat, increase lean muscle mass, and improve overall health. Many customers have also described a boost in sexual prowess, and enhanced mental and physical alertness. The producer provides an ingredient list that is transparent and comprehensive, so that you know exactly what you're getting. • Share
2023-01-27 22:07:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21790964901447296, "perplexity": 13319.891193398687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495012.84/warc/CC-MAIN-20230127195946-20230127225946-00689.warc.gz"}
https://www.nature.com/articles/s41598-019-44596-3?error=cookies_not_supported&code=50ab7a77-94e1-4d11-90a4-97c081f0ae0b
## Introduction

Accretion processes are of much interest to the astrophysics community, as they are thought to supply power in various astrophysical objects, as well as being the dominant radiation source in many binary systems1. Understanding the complex physical processes that allow the release of gravitational energy in the form of radiation is fundamental to interpreting high-energy astronomical observations. According to the magnetospheric accretion model, a shock is formed when material from a stellar accretion disk falls down and impacts upon the surface of the star2,3,4. This material is taken away from the plane of the disk and directed along the star’s magnetic field lines and onto its surface. The process is therefore highly dependent on the nature and strength of the star’s magnetic field. There is now a wealth of evidence to support the existence of these accretion shocks across a range of different systems5,6. In the context of young stars, the final mass is ultimately determined by the accretion mechanism during the early stages of the star’s formation and evolution towards the main sequence. Therefore the study of accreting young stars and, in particular, the way in which matter settles on their surface is of wide interest in the context of star formation7. As for the different binary star systems, magnetic cataclysmic variable stars (MCVs) represent a unique opportunity to perform a focused study on accretion dynamics, since the accretion region is responsible for the majority of the luminosity of these systems. Moreover, MCVs have long been discussed as potential progenitors of type Ia supernovae8, and so understanding their dynamics is crucial to explain the initial conditions of these explosions, which themselves are used to study the acceleration of the Universe9. MCVs consist of a strongly magnetised white dwarf (WD) accreting matter from a low-mass companion star10. Of particular interest are the subclasses of systems known as polars or intermediate polars, the former being characterised by a single optical and X-ray photometric period and strong optical linear (5–10%) and circular (10–80%) polarisation. Due to the very strong magnetic field of the white dwarf (B ~ 0.01–1 MG for intermediate polars and up to B > 10 MG for polars), the accreting plasma flow (v ~ 1000 km/s) is guided along the field lines onto the WD photosphere, forming a column rather than an accretion disk11,12,13. After impact on the WD photosphere, a radiative reverse shock is formed, which propagates counter to the incoming flow, thus heating the accretion column to temperatures up to 10 keV. Consequently, an intense spectrum from soft to hard X-rays, predominantly due to bremsstrahlung cooling, is observed. However, the observed ratio of hard to soft X-ray emission currently disagrees with the standard model of accretion columns14. Additionally, unexplained luminosity oscillations, possibly related to unstable thermal oscillations in the shock front, or magnetohydrodynamic instabilities in the accretion column, have also been reported15,16. The intense radiation emitted in the post-shock region acts to slow down the accretion shock, which reaches a steady height of hs ~ 100–1000 km above the WD photosphere. However, a complete stoppage of the infalling plasma requires an infinitely large radiative energy density in the lower part of the accretion column17,18. The first models of accretion systems therefore invoked singularities in order to ensure the stationarity of the shock11.
Modern studies, however, suggest the possibility of mass being ejected laterally upon collision with the star’s surface19,20,21, thus maintaining a constant shock height. Many theories relating the properties of the WD (e.g. its mass) to this height22,23 have been proposed, but the distance is far too small to be resolved with observations. Therefore, obtaining spatial profiles of the radiative zone, reconciling observations with theory, and ultimately confirming the properties of WDs all remain out of reach. Small-scale models of accretion columns, based on similarity relations to their astrophysical counterparts, can now be created using high-power lasers21. Radiation hydrodynamics permits exact24,25 or parametric26 scaling laws that permits the comparison of scales in the laboratory to those in astrophysics, with a high degree of fidelity. In this paper, we present a scaled study, built on well established experimental platforms27,28,29,30,31 to investigate accretion shocks in binary systems. In this experiment, a strong magnetic field is imposed on the experimental system, capable of collimating the plasma flow as in the astrophysical case. Improvements are thus made on previous work where either the plasma flow was left to expand freely with no collimation mechanism or a tube was employed to artificially collimate the flow, strongly influencing the observations. Various diagnostics are employed, allowing a full understanding of the dynamics of the system, over the full timescale of the experiment. Evidence of a new shock structure is seen, whereby the return shock is initially slowed down by a strongly radiative region upstream and remains stationary for a period in excess of 60 ns, caused by lateral mass ejection in the collision region. ## Results Figure 1 displays a schematic of the experiment, which was carried out at the LULI2000 laser facility. A full description is available in the Methods section. ### Initial plasma flow The evolution of the plasma flow, resulting from the initial laser-target interaction, is measured by means of two gated Schlieren imaging systems. These are sensitive to the electronic density gradients associated with the front of the plasma flow. Their delays are varied independently, with respect to the drive laser, in order to determine the speed of the front of the plasma flow towards the obstacle. The progression of the flow with time is shown in Fig. 2, together with two sample Schlieren images with and without an imposed magnetic field of 15 T, respectively. The speed is thus determined to be 78 ± 5 km/s. This is in agreement with previous results using similar targets and laser conditions30,32. The magnetic field does not affect the speed of the plasma flow as expected. The increase in density in the post-shock region renders this diagnostic unsuitable for studying the reverse shock and so delays are limited to the phase of the experiment where the plasma flows towards the obstacle. At higher densities, X-ray radiography is employed to probe the physical structure of the plasma flow as well as that of the reverse shock33. Images are taken along the axis orthogonal to the optical diagnostics and the plasma flow. The probe laser, used to generate the short X-ray source, is delayed with respect to the drive laser, to track the progress of the system with time. The collimating effect of the magnetic field on the plasma flow is investigated at early times by the Schlieren images and at late times by the radiography. 
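The 78 ± 5 km/s figure quoted above comes from fitting the position of the plasma front against probe delay across several shots. A minimal sketch of that kind of fit is shown below; the delays and positions here are invented placeholders for illustration, not the measured data.

```python
import numpy as np

# Hypothetical (delay, front position) pairs -- placeholders, not the real data.
t_ns = np.array([10.0, 15.0, 20.0, 25.0, 30.0])     # probe delay in ns
x_mm = np.array([0.80, 1.18, 1.55, 1.97, 2.35])     # front position in mm

# Linear fit x = v*t + x0; the slope gives the front speed.
v_mm_per_ns, _ = np.polyfit(t_ns, x_mm, 1)
print(f"front speed ~ {v_mm_per_ns * 1e3:.0f} km/s")  # 1 mm/ns = 1000 km/s
```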
Figure 3 illustrates that at early times (t ~ 15 ns) the effect of the magnetic field is not yet discernible, as the magnetic pressure is small compared with the ram pressure of the plasma. However, at later times, from around t ~ 75 ns to t ~ 180 ns, the width of the flow is noticeably smaller (~20%) when the magnetic field of 15 T is imposed. This follows the dynamics described by30, where, after an initial flow launching region, the magnetic pressure is expected to be sufficient to manipulate the large-scale structure of the flow. One should note also that there is a non-negligible hydrodynamic collimation of the flow, via a nozzle mechanism, even without the imposed magnetic field, as observed in previous work31. One-dimensional, streaked, self-emission of the whole experimental system is also obtained and displayed in Fig. 4 for shots with and without the imposed magnetic field. The plasma is created at the upper right hand corner of the panel, at t = 0, x = 3 mm. It then moves from right to left at a speed of 75 ± 10 km/s towards the obstacle, which is at a distance of 3 mm away from the initial laser target interaction. When the flow impacts upon the obstacle (~60 ns), it heats up and begins to emit. For a short period of time, material builds up on the surface of the obstacle, before eventually a strongly emitting reverse shock is formed, moving counter to the incoming plasma flow. Also seen in both the 2D images and the lineouts is a flow of weakly emitting plasma, moving slowly from the target towards the obstacle. This flow is predominantly made up of the CH ablator layer in the target and does not play a role in the interaction with the obstacle. We note that the speed of the incoming flow is in agreement with that measured by the Schlieren imaging diagnostic. ### Reverse shock In examining the behaviour of the reverse shock, we first note that the emission is stronger at later times in the case of the imposed magnetic field. This can be explained by recognizing the fact that the density of the plasma flow is higher in the presence of the magnetic field, due to the collimating effect seen in Fig. 3 and described in the previous section. This in turn leads to higher temperatures in the post-shock region also. The interplay of the two effects combined therefore leads to an increase in the emission seen by the detector. To illustrate this point further, Fig. 4c shows horizontal lineouts from the streaked images at 150 and 195 ns, averaged over multiple shots. The difference between the two cases is initially small, whereas at later times, the maximum emission increases by up to a factor of 50%. The small peaks on the right hand side of the graph correspond to the slow moving CH plasma emanating from the multi-layer target. The evolution of the reverse shock with time is visualised using X-ray radiography. Figure 5 shows six images over a range of 150 ns, spanning the formation and propagation of the reverse shock, all with an imposed B field of 15 T. The laser is incident from the bottom of the image, creating a plasma flow which travels upwards towards the obstacle. The reverse shock then moves counter to this flow, downwards on the image. The high temporal and spatial resolution of the data (10 ps, 25 μm, combined with the range of delay times measured, enable the detailed study of the physical processes at play in the post-shock region, inaccessible in previous work. 
Due to the demonstrable advantages of the magnetic field in reproducing the astrophysical case by collimating the plasma flow, as previously discussed, no data was taken at late times without the magnetic field.

## Discussion

We have already shown that the magnetic field helps to collimate the flow in the pre-shock or upstream region. In order to compare to the astrophysical case, however, we now consider solely the post-shock or downstream region. The βram value (the ratio of thermal to magnetic pressure in a shocked system) is βram ~ 10−2–10−4 for the case of polars10 compared to βram ≲ 1 for intermediate polars34. The huge magnetic field associated with polars means that strict Alfven similarity criteria cannot feasibly be met in a scaled laboratory experiment. However, with a value of βram ~ 1 in our experiment, we are in a regime that is similar to that of intermediate polars. We expect magnetic effects to play a significant (but not dominant) role in the dynamics of the system in the two cases. There is also good agreement in the magnetic Reynolds number, Rm (a measure of the diffusivity of the magnetic field inside the plasma), between the two systems (Rm ≫ 1 in MCVs and Rm ~ 10 in the laboratory). In both cases, then, the field lines are expected to be frozen in the magnetised plasma, with diffusive effects being small. The behaviour and predominant effect of the magnetic field in the two cases is expected to be the same; that is, the field is frozen in the plasma and will add an additional component to the Lorentz force, helping to collimate the flow, even in the post-shock region. Therefore, comparison between the two systems can be instructive. To conclude, we compare the shock standoff distance observed in the laboratory to that of a typical astrophysical accretion column for the specific case of an intermediate polar, by employing the scaling method as described in43,44. The values relevant to the scaling between the two systems are summarized in Table 1. The astrophysical system has a characteristic cooling time and velocity of tcool = 1 s and v = 1000 km/s, respectively. The cooling time in the experimental case is calculated using two different methods. The first uses the formula tcool = P/(ε(γ − 1)), where P is the pressure of the post-shock region, and ε, the emissivity, is related to the Planck mean free path λpl via the relation ε = σT4/λpl. The second method follows the formulation proposed in45 for the case of a black body. The two approaches are approximately in agreement, giving 3 and 4 ns respectively. Taking the measured experimental velocity of v = 75 km/s and shock standoff distance of hs = 0.8 ± 0.1 mm, we then calculate a corresponding distance of 2000–3000 km for the astrophysical regime. This value is of the same order of magnitude as those predicted by theory12, although it is slightly larger than previous experimental results29. This discrepancy once again underlines the failings of simple 1-D models as well as the limitations of previous experiments, and provides further evidence of the need to model accretion columns in a fully 3-D manner.
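A quick numerical cross-check of that scaling is possible from the quoted numbers alone, assuming the simplest dimensionless ratio hs/(v·tcool) is what is preserved between the two systems; the cited scaling method may include additional factors, so this is only an order-of-magnitude sketch.

```python
# Order-of-magnitude check of the shock standoff scaling, assuming the
# dimensionless ratio h_s / (v * t_cool) is preserved between systems.
h_lab = 0.8e-3             # m, measured standoff distance (0.8 +/- 0.1 mm)
v_lab = 75e3               # m/s, measured flow speed
t_cool_lab = (3e-9, 4e-9)  # s, the two cooling-time estimates quoted above

v_astro = 1000e3           # m/s, typical accretion column speed
t_cool_astro = 1.0         # s, typical cooling time

for t in t_cool_lab:
    ratio = h_lab / (v_lab * t)                  # dimensionless standoff
    h_astro = ratio * v_astro * t_cool_astro     # scaled back to the star
    print(f"t_cool = {t*1e9:.0f} ns  ->  h_s ~ {h_astro/1e3:.0f} km")
# prints ~3600 and ~2700 km, the same order as the 2000-3000 km quoted above
```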
The quality and wealth of radiographic and optical data allow us to fully understand the dynamics of the laboratory system for the first time, revealing a complex interplay of material, thermal, radiative and magnetic effects, previously unseen in experiments. The results reveal that clear improvements are made by the deployment of the external magnetic field over a plastic tube or simple inertial collimation. By measuring a steady shock front over an extended period of time, we are able to scale the experimental regime to the astrophysical one, gaining further insight into the complex dynamics at play in both systems. In particular, we observe a significant mass ejection in the transverse direction at the moment the plasma flow impacts upon the obstacle, not predicted by MHD simulations. This changes the structure of the post-shock region and has important implications for astrophysical models, including those used to determine, for example, the WD's mass. A clear pathway for future experiments exploring a radiation-dominated regime is therefore apparent; this could be achieved, for example, on magnetic field platforms at facilities such as the NIF or LMJ, where higher laser energies are available. In this manner, astrophysical accretion models may be refined and uncertainties surrounding these systems may finally be resolved.

## Methods

### Experimental platform

The experiment was carried out at the LULI2000 laser facility at the LULI laboratory (Ecole Polytechnique, France). A long-pulse (t = 1.5 ns), high-energy (E = 500 J, λ = 527 nm) drive beam was used to produce a strong shock wave in a solid multi-layer target. The target consisted of a very thin (several nm) layer of aluminium on the laser-facing side, attached to a 25 μm layer of ablator material (CH), a 1.5 μm layer of gold to act as a shield against the radiation produced by the corona, and finally a 6 μm layer of titanium. The target was fixed onto a 4 mm diameter holder with a 2 mm diameter inner hole. A flat-topped 500 μm focal spot was produced using a hybrid phase plate. The plasma flow produced by the shock breakout at the rear surface of the target impacted onto a gadolinium gallium garnet (GGG) obstacle (chosen for its transparency and high density, ρ = 7.08 g/cm³) at a distance of 3 mm.

### Magnetic field

An external magnetic field of 15 T, generated by a specially designed coil coupled to a pulsed power generator, is applied to the whole experimental system. The capacitor-based pulse generator is charged up to a voltage of 9.6 kV and provides a peak current of 23.6 kA to the coil. The magnetic field inside the coil reaches its peak value after 183 μs and stays constant (less than 2% variation) over a duration of microseconds, much longer than the timescales of the experiment. The drive lasers were fired when the magnetic field reached its maximum value.

### X-ray diagnostics

The X-ray source was generated by the interaction of a high-intensity, short-pulse laser (50 J, 10 ps, with a focal spot of 50 μm) and a titanium wire target. The wire was positioned 3 cm below the main target and the image was recorded onto an imaging plate 60 cm above the experimental plane, giving a magnification factor of ~20 (ref. 33).

### Optical diagnostics

A probe laser beam (1 mJ, 7 ns, λ = 532 nm) was used to produce Schlieren images of the plasma at various delay times. The images were recorded using CCDs, coupled to gated optical imagers (GOIs) with a 200 ps gate time.
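The magnification quoted for the X-ray radiography setup follows from simple point-projection geometry, assuming the wire backlighter, the interaction region and the imaging plate are roughly collinear; a quick sketch:

```python
# Point-projection magnification: M = (d_source_object + d_object_detector) / d_source_object
d_source_object = 3.0     # cm, wire backlighter to main interaction region
d_object_detector = 60.0  # cm, interaction region to imaging plate

M = (d_source_object + d_object_detector) / d_source_object
print(f"M = {M:.0f}")     # 21, i.e. ~20 as quoted in the text
```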
Optical self-emission of the experimental region was also recorded at a wavelength of 450 ± 40 nm using a streak camera. Together, the two diagnostics give the speed of the plasma flow, as well as relative density and temperature measurements, respectively.

### Numerical simulations

The numerical simulations were performed using the FLASH code, developed at the University of Chicago. A non-ideal MHD solver with an unsplit staggered mesh was used together with physical modules that allow the modeling of high-energy-density laser experiments, including a laser energy deposition module, SESAME equations of state and radiation transfer, solved in the multi-group diffusion approximation using 40 radiation groups. A constant magnetic field of 15 T was applied in the direction of propagation of the flow. The laser intensity profile on target was a super-Gaussian function with a diameter of 500 μm. The rise time of the laser beam was 0.2 ns, followed by a plateau at a maximum intensity of 1.7 × 10¹⁴ W cm⁻² for a period of 1.3 ns. The simulation had a resolution of 5.12 μm, and hence the thin layer of gold in the multi-layer target could not be resolved; a mass-density-equivalent metal target layer was therefore used. The X-ray radiographs were produced assuming a quasi-monochromatic Ti backlighter source at 4.51 keV and using cold opacities (ref. 46). Additional figures are shown in the Supplementary Information.
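As a consistency check on the simulated drive, the plateau intensity follows from the pulse energy, duration and focal spot quoted in the Methods; a short sketch treating the flat-topped 500 μm spot as a uniform disc and ignoring the 0.2 ns ramp:

```python
import math

E = 500.0        # drive energy, J
tau = 1.5e-9     # pulse duration, s
d = 500e-4       # focal spot diameter, cm (500 um)

area = math.pi * (d / 2) ** 2      # spot area, cm^2
I = E / (tau * area)               # intensity, W/cm^2
print(f"I ~ {I:.1e} W/cm^2")       # ~1.7e14 W/cm^2, matching the simulated plateau
```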
https://www.edaboard.com/threads/scaling-down-the-resolution-of-vmodcam-demo-project-atlys-board.359944/
Scaling down the resolution of VmodCam demo project, Atlys board

Status
Not open for further replies.

Taki_comp
Member level 1

Hi guys,

For those who worked with VmodCAM: I am working on a project involving the implementation of a real-time stereo vision system. For this purpose I am using the demo project provided by Digilent to display video feeds on the HDMI port: https://reference.digilentinc.com/_media/vmodcam/vmodcam_ref_hd_demo_13.zip

I tried to display the video feeds at the resolution of the project, 1600x900, but the screen displays "HDMI not supported" because the maximum resolution of my screen is 1366x768. For this reason I tried to scale down the resolution to 1280x720, so I did the following.

First of all, I modified the camera configuration as shown in the code below:

Code:
signal CamInitRAM: CamInitRAM_type := (
   IRD & x"30001580", -- Chip version. Default 0x1580
   IWR & x"33860501", -- MCU Reset
   IWR & x"33860500", -- MCU Release from reset
   IWR & x"32140D85", -- Slew rate control, PCLK 5, D 5
   IWR & x"341E8F0B", -- PLL control; bypassed, powered down
   IWR & x"341C0250", -- PLL dividers; M=80,N=2,fMCLK=fCLKIN*M/(N+1)/8=80MHz
   IWR & x"341E8F09", -- PLL control; Power-up PLL; wait 1ms after this!
   IWR & x"341E8F08", -- PLL control; Turn off bypass
   IWR & x"32020008", -- Standby control; Wake up
   IWR & x"338C2797", -- Output format; Context B shadow
   IWR & x"33900030", -- RGB with BT656 codes
   IWR & x"338C272F", -- Sensor Row Start Context B
   IWR & x"33900004", -- 4
   IWR & x"338C2733", -- Sensor Row End Context B
   IWR & x"339002DB", -- 1211, 731 => 2DB Modified
   IWR & x"338C2731", -- Sensor Column Start Context B
   IWR & x"33900004", -- 4
   IWR & x"338C2735", -- Sensor Column End Context B
   IWR & x"3390050B", -- 1611, 1291 => 50B modified
   IWR & x"338C2707", -- Output width; Context B
   IWR & x"33900500", -- 1600, 500 => 1280 modified
   IWR & x"338C2709", -- Output height; Context B
   IWR & x"339002D0", -- 1200, 720 => 2D0 Modified
   IWR & x"338C275F", -- Crop X0; Context B
   IWR & x"33900000", -- 0
   IWR & x"338C2763", -- Crop Y0; Context B
   IWR & x"33900000", -- 0
   IWR & x"338C2761", -- Crop X1; Context B
   IWR & x"33900500", -- 1600, 1280 => 500 Modified
   IWR & x"338C2765", -- Crop Y1; Context B
   IWR & x"339002D0", -- 1200, 720 => 2D0 Modified
   IWR & x"338C2741", -- Sensor_Fine_IT_min B
   IWR & x"33900169", -- 361
   IWR & x"338CA120", -- Capture mode options
   IWR & x"339000F2", -- Turn on AWB, AE, HG, Video
   IWR & x"338CA103", -- Refresh Sequencer Mode
   IWR & x"33900002", -- Capture
   IRD & x"33900000", -- Read until sequencer in mode 0 (run)
   IWR & x"301A02CC"  -- reset/output control; parallel enable, drive pins, start streaming
);
-- The modified lines are indicated with the word "modified"

One thing I don't get is how to change the sensor row/column start and end values for Context B; actually, I have no idea what they mean.
I also modified the input resolution in the VideoTimingCtl instantiation as follows:

Code:
Inst_VideoTimingCtl: entity digilent.VideoTimingCtl
PORT MAP (
   PCLK_I => PClk,
   RSEL_I => R1280_720P, -- this project supports only 1280x720
   RST_I  => VtcRst,
   VDE_O  => VtcVde,
   HS_O   => VtcHs,
   VS_O   => VtcVs,
   HCNT_O => VtcHCnt,
   VCNT_O => VtcVCnt
);

Finally, I modified the write-address processes for the two memory ports as follows:

Code:
RADDRCNT_PROC_A: process (CLKA)
begin
   if Rising_Edge(CLKA) then
      if (pa_int_rst = '1' and p1_wr_empty = '1') then
         -- reset the port-A write address counter (assignments missing from the original paste)
      elsif (stateWrA = stWrCmd) then
         -- update the port-A write address counter (assignments missing from the original paste)
      end if;
   end if;
end process;

RADDRCNT_PROC_B: process (CLKB)   -- label assumed; it was cut off in the paste
begin
   if Rising_Edge(CLKB) then
      if (pb_int_rst = '1' and p2_wr_empty = '1') then
         -- reset the port-B write address counter (assignments missing from the original paste)
      elsif (stateWrB = stWrCmd) then
         -- update the port-B write address counter (assignments missing from the original paste)
      end if;
   end if;
end process;

I believe that I made all the required modifications in the VHDL code to scale down the resolution; unfortunately, the message on the screen was still "HDMI not supported".

Last edited by a moderator:

TrickyDicky
Have you got a testbench? Why can't you test it in simulation?

Taki_comp
Member level 1
The simulation would take a very long time, and I am not sure whether changing some timing parameters will affect the result or not.
- - - Updated - - -
Besides that, simulating the input signals from the VmodCAM is a hard and time-consuming task.

TrickyDicky
I doubt this is a standard many people have experience with. Why can't you create a reference model for the input? When I've simulated SDI in the past you can run an entire frame in a few minutes. I don't see why this should be any different.

Taki_comp
Member level 1
Did you use VmodCAM before?
- - - Updated - - -
I would like to understand what "sensor row start" is.

TrickyDicky
I've never even heard of VmodCAM, and it appears neither has most of the internet. Is it some proprietary 3D camera format? You might need to contact Digilent about how to use it, as they seem to be the only people that make anything to do with it.

Taki_comp
Member level 1
If you have never heard of VmodCAM, this does not mean that most of the internet has not. I am sure that most people working on FPGA implementations of stereo vision systems know about it. Thank you for trying to be helpful.

TrickyDicky
I am very confident that the numbers working on FPGA stereo vision systems are tiny/minuscule compared to the numbers working with standard video. You're very unlikely to get a reply to your specific problem, especially here. Google comes up with very limited hits - therefore it has a very limited user base. The only real hit is a Digilent page which only has reference designs for ISE 12/13 (so it's at least 3 years old - https://reference.digilentinc.com/vmodcam/vmodcam) and another link says the board is now discontinued (https://store.digilentinc.com/vmodcam-stereo-camera-module-retired/).

Reading the VmodCAM reference manual (https://reference.digilentinc.com/_media/vmodcam/vmodcam_rm.pdf), the outputs are pretty straightforward and basically standard 4:2:2 YCbCr, as you get on any SDI link, or various sized channels of RGB. So making a model of the input data provided by the VmodCAM would be fairly straightforward.

You say that simulation of your VmodCAM system takes "a long time". Can you define how long? Are you talking minutes, hours, days? Do you have simulation models for your interfaces? What are you trying to simulate?

I doubt anyone who knows anything about VmodCAM is going to post here. You'll be better off trying to explain your problems, and then maybe people here (who are experts in similar fields) may be able to help.
Status Not open for further replies.
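A note on the configuration table discussed in this thread: the 32-bit words follow a visible pattern, an MCU address write (x"338C" & register address) followed by a data write (x"3390" & 16-bit value). The Python sketch below generates the width/height/crop data words for an arbitrary target resolution; the register addresses are the ones appearing in the posted table, and whether further sensor row/column end values must also change for a given mode is an assumption to verify against the camera sensor's datasheet.

```python
# Sketch: build the "write data" words (x"3390" & value) used in CamInitRAM
# for a given output resolution. Only the registers visible in the table
# above are covered; any extra mode-dependent registers are not.

def mcu_write(addr, value):
    """Return the two 32-bit config words as hex strings: address write, then data write."""
    return [f"338C{addr:04X}", f"3390{value:04X}"]

def context_b_size_words(width, height):
    words = []
    words += mcu_write(0x2707, width)    # Output width; Context B
    words += mcu_write(0x2709, height)   # Output height; Context B
    words += mcu_write(0x2761, width)    # Crop X1; Context B
    words += mcu_write(0x2765, height)   # Crop Y1; Context B
    return words

for w in context_b_size_words(1280, 720):
    print(f'IWR & x"{w}",')
# prints ...33900500... and ...339002D0..., matching the hand-edited values above
```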
http://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-3-logic-3-2-compound-statements-and-connectives-concept-and-vocabulary-check-page-132/4
## Thinking Mathematically (6th Edition)

The given compound statement is symbolized by $\underline{p \leftrightarrow q}$ and is called a $\underline{\text{biconditional statement}}$. The arrow pointing in both directions indicates that we should use an "if and only if" in our statement.
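As a quick illustration of what the biconditional means, here is a short Python sketch printing its truth table; p ↔ q is true exactly when p and q have the same truth value.

```python
from itertools import product

# Truth table of the biconditional p <-> q: true exactly when p and q agree.
print("p     q     p <-> q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:<5} {q!s:<5} {(p == q)!s}")
```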
https://www.gradesaver.com/textbooks/science/physics/introduction-to-electrodynamics-4e/chapter-1-section-1-2-vector-algebra-problem-page-7/4
## Introduction to Electrodynamics 4e

The unit vector is $\hat{n} = \frac{1}{7}(6\hat{x}+3\hat{y}+2\hat{z})$, where $\hat{x}, \hat{y}, \hat{z}$ are the unit vectors of the Cartesian coordinate system.
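For context, the result follows from the standard cross-product construction. Assuming the plane in question is the one in Griffiths' figure that cuts the axes at (1, 0, 0), (0, 2, 0) and (0, 0, 3) (an assumption, since the figure itself is not reproduced here), a sketch of the calculation:

$$\vec{A} = (0,2,0) - (1,0,0) = (-1,2,0), \qquad \vec{B} = (0,0,3) - (0,2,0) = (0,-2,3)$$

$$\vec{A}\times\vec{B} = \big(2\cdot 3 - 0\cdot(-2),\; 0\cdot 0 - (-1)\cdot 3,\; (-1)(-2) - 2\cdot 0\big) = (6, 3, 2)$$

$$|\vec{A}\times\vec{B}| = \sqrt{36+9+4} = 7 \quad\Rightarrow\quad \hat{n} = \tfrac{1}{7}\,(6\hat{x}+3\hat{y}+2\hat{z})$$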
https://byjus.com/question-answer/the-diameters-of-two-plantes-are-in-the-ration-4-1-and-their-density-in/
Question

# The diameters of two planets are in the ratio 4 : 1 and their densities are in the ratio 1 : 2. The accelerations due to gravity on the planets will be in the ratio:

A) 1 : 2
B) 2 : 3
C) 2 : 1
D) 4 : 1

Solution

## The correct option is C, 2 : 1.

$$M_p = \text{Density} \times \text{Volume} = \text{Density} \times \dfrac{4\pi r^3}{3}$$

$$g = \dfrac{GM_p}{r^2} = \dfrac{G \times \text{Density} \times 4\pi r^3}{3r^2}$$

Thus $$g \propto r$$ and $$g \propto \text{Density}$$, so

$$\dfrac{g_1}{g_2} = \dfrac{4 \times 1}{1 \times 2} = \dfrac{2}{1}$$
https://www.acmicpc.net/problem/5152
Time limit: 1 second · Memory limit: 128 MB · Submissions: 0 · Accepted: 0 · Solvers: 0 · Success rate: 0.000%

## Problem

The fact that many people in the US do not have health insurance causes many types of problems, including the following: most doctors do not treat patients without insurance. Thus, obtaining medical care is difficult for those patients. However, hospital emergency rooms are required to treat all patients, because in their originally intended function, they may be the difference between life and death. But you see where this goes: since they are the only option that must treat uninsured patients, that's where uninsured patients will go, even for non-life-threatening cases such as flu or measles. Now, such cases will not be treated instantly, because after all, the emergency room does have life-and-death situations to deal with. Thus, what we have is people with flu or other infectious diseases spending many hours waiting in a crowded emergency room. To put it mildly, it's not clear whether that's the best strategy to prevent the spread of such infectious diseases.

In this problem, we are going to explore the way in which diseases spread among emergency room visitors. You will be given all the arrivals and departures of patients in an emergency room over time. The emergency room contains s seats, whose locations you will be given. Each arriving patient sits in the lowest-numbered seat available when he enters, and stays in that seat until he leaves. The same patient may return later, though, and will then pick a new seat.

Initially, only patient 1 has the disease. A disease is transmitted from patient A to patient B if they sit at distance 2 meters or less of each other for at least 20 consecutive minutes (for instance, from time 5 until time 25; from time 5 until time 24 is not enough). If one of them leaves, then the clock starts from 0 again (i.e., it does not count to sit next to each other twice for 10 minutes each). Once a patient has the disease, he starts transmitting it the next day, i.e., 1440 minutes later; we assume that the patient stays infectious forever after that. Your goal is to figure out how many patients have the disease by the end.

## Input

The first line contains the number K of data sets. This is followed by K data sets, each of the following form:

The first line contains three integers P, S, V. 1 ≤ P ≤ 1000 is the number of patients (with only patient 1 sick initially), 1 ≤ S ≤ 100 the number of seats in the emergency room, and 1 ≤ V ≤ 100000 the number of visits of patients. This is followed by S lines, each containing two real numbers xi, yi, the coordinates of the ith seat in meters. Next come V lines. Each of these lines contains three integers pj, aj, dj. pj is the number of the patient making the jth visit, aj is his arrival time (in minutes, from time 0), and dj > aj his departure time. These will be sorted by non-decreasing arrival times. Our inputs will ensure that at no time will there be more than S patients in the emergency room simultaneously. Also, if a patient i arrives at the exact time another patient j leaves, we assume that j has already given up his seat, so patient i can sit there.

## Output

For each data set, output "Data Set x:" on a line by itself, where x is its number. Then, output, on a line by itself, the total number of patients sick by the end of the process. Each data set should be followed by an empty line.

## Sample Input

1
6 4 10
-0.5 0
0.5 1
1.5 2
-0.3 2
1 0 60
2 20 50
3 50 120
2 90 120
5 1440 1620
2 1460 1520
6 1501 1600
3 1510 1600
1 1530 1540
4 1800 2000

## Sample Output

Data Set 1:
3
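The statement above fully specifies a simulation, although the interaction between the 20-minute contact rule and the 1440-minute latency leaves some room for interpretation. The Python sketch below is a reference implementation of one natural reading (patient 1 is contagious from time 0; transmission requires 20 consecutive minutes of contact during which the source is already contagious; the newly infected patient's clock starts when those 20 minutes are completed). It reproduces the sample above, but it is written for clarity rather than for the 1-second time limit.

```python
import sys
import math

def solve(P, S, V, seats, visits):
    INF = float("inf")

    # 1) Seat assignment: lowest-numbered free seat at arrival time
    #    (a seat whose occupant departs exactly at the arrival time is free).
    seat_of = []
    occupied = {}                      # seat index -> departure time of its occupant
    for p, a, d in visits:
        for s in [s for s in occupied if occupied[s] <= a]:
            del occupied[s]
        free = min(i for i in range(S) if i not in occupied)
        occupied[free] = d
        seat_of.append(free)

    # 2) Co-location intervals: for each pair of overlapping visits whose seats
    #    are within 2 m, record [later arrival, earlier departure].
    intervals = []
    active = []                        # indices of visits whose occupant is still seated
    for j, (p, a, d) in enumerate(visits):
        active = [i for i in active if visits[i][2] > a]
        xj, yj = seats[seat_of[j]]
        for i in active:
            q, aq, dq = visits[i]
            xi, yi = seats[seat_of[i]]
            if q != p and math.hypot(xi - xj, yi - yj) <= 2.0:
                intervals.append((q, p, a, min(d, dq)))
        active.append(j)

    # 3) Propagate infections to a fixed point. infected_at[x] is the time patient x
    #    caught the disease; x transmits from infected_at[x] + 1440 onwards.
    infected_at = [INF] * (P + 1)
    infected_at[1] = -1440             # so patient 1 is contagious from time 0
    changed = True
    while changed:
        changed = False
        for src, dst, s, e in intervals + [(b, a, s, e) for (a, b, s, e) in intervals]:
            if infected_at[src] == INF:
                continue
            t = max(s, infected_at[src] + 1440) + 20   # 20 contagious minutes together
            if t <= e and t < infected_at[dst]:
                infected_at[dst] = t
                changed = True

    return sum(1 for x in range(1, P + 1) if infected_at[x] < INF)

def main():
    data = sys.stdin.read().split()
    it = iter(data)
    out = []
    for case in range(1, int(next(it)) + 1):
        P, S, V = int(next(it)), int(next(it)), int(next(it))
        seats = [(float(next(it)), float(next(it))) for _ in range(S)]
        visits = [(int(next(it)), int(next(it)), int(next(it))) for _ in range(V)]
        out.append(f"Data Set {case}:")
        out.append(str(solve(P, S, V, seats, visits)))
        out.append("")
    sys.stdout.write("\n".join(out) + "\n")

if __name__ == "__main__":
    main()
```

With V up to 100000 and S up to 100 the interval list can reach around 10^7 entries, so a competitive submission would need a tighter, event-driven formulation; the sketch is meant only to pin down the rules.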
https://chemistry.stackexchange.com/questions/34850/can-we-determine-the-amplitude-of-vibration-of-o-h-bonds-in-water
# Can we determine the amplitude of vibration of O-H bonds in water?

This is going to be a long shot, and I might be wrong in any and all of my assumptions, be warned (... and please correct me). I might also be trivially right, I have no idea.

At equilibrium, each degree of freedom has the kinetic energy: $$E = \frac{1}{2}k_\mathrm{B}T$$

So, I have water at, say, $300\ \mathrm K$. Water has three atoms, so 9 degrees of freedom (3 per atom, because space is three-dimensional). One of them is for an $\ce{O-H}$ stretching. I know that usually you would need to divide into modes of oscillation, but let's say we combine half the energy of the symmetric and antisymmetric stretchings, giving that one $\ce{O-H}$ stretching has a full $\frac{1}{2}k_\mathrm{B}T$ in it. So we have an average, roll of drums, $$E = \frac{1}{2}(1.3806488 \times 10^{-23})(300) = 2.071\times 10^{-21}\ \mathrm J$$ in an $\ce{O-H}$ stretching.

NOW, let's say this stretching behaves like a harmonic oscillator; its energy is given by the amplitude of oscillation: $$E = \frac{1}{2}kA^2$$ where $k$ is the stiffness constant of the chemical bond. That's nice, but we have two unknowns, $k$ and $A$. So let's go further. If we know the absorption band's wavenumber, we can say that the amplitude corresponds to this wavelength, $2.734\ \mathrm{\mu m}$ (source = Wikipedia). This would mean that we have a stiffness constant: $$k = 5.5412 \times 10^{-10}\ \mathrm{N/m}$$

I know that, although it was fun to blindly math around, this result is wrong. Just the fact that the stiffness constant would depend on temperature doesn't sound right. Neither does the bit about wavelength = amplitude. My question is: is it possible to know the amplitude of oscillation/stiffness constant of an $\ce{O-H}$ bond?

> My question is: is it possible to know the amplitude of oscillation/stiffness constant of a O−H bond?

Yes. Your approach is along the right lines, but the Hooke's Law formula you used, $$E = \frac{1}{2}kA^2$$ is for the stretching of a spring. The picture changes if the spring has mass(es) attached at the end(s); now it is a harmonic oscillator. In that case the spacing between the oscillator's vibrational energy levels is $$E = \frac {h}{2 \pi} \sqrt{\frac{k}{\mu}}$$ where k is the force constant you are interested in and $\mu$ is the reduced mass of the system. Since $$E=h\nu$$ the frequency, which can be experimentally measured, is given as $$\mathrm{frequency} = \frac {1}{2 \pi} \sqrt{\frac{k}{\mu}}$$

The frequency is independent of the amplitude, affected only by the masses and the stiffness of the spring. If the masses of the atoms at the ends of the spring are given as A and B, then $$\mu =\frac{A \cdot B}{A+B}$$ So for an $\ce{O-H}$ bond $\mu=\frac{1 \cdot 16}{1+16}\ \mathrm{u}$. If we measure the frequency of the $\ce{O-H}$ stretch absorption, then we can calculate the force constant. Here is a link to a one-page example that shows the actual computation and explains the specific units to be used very nicely.

You also ask about the amplitude of oscillation. If by this you mean the bond length, then yes, this can also be calculated from spectra (relatively easy in simple diatomic cases, but more difficult in complex polyatomic cases). The methodology involves assuming a rigid rotor and then using the Schrodinger equation to solve for the energy states and extracting the bond length. A nice example using $\ce{HCl}$ can be found here; look down the page for the section entitled "Bond Length of HCl".
• What I meant with "amplitude" is the deviation from the mean bond length. Your document is very helpful, just to make it clear, though, they deduce the "spring constant" from the wavenumber of the stretching mode's absorption band and the bond length from the wavenumber of a rotational mode's absorption band. Can we return to my idea with HCl at 300K and find the amplitude $A = \sqrt{\frac{2E}{k}} = \sqrt{\frac{2(2.071\times 10^{-21}J)}{481N/m}} = 2.93fm$? Which would seem to fit with the $127fm$ bond length of HCl. Aug 10 '15 at 0:15 Finally, I have a day off to read that question and response. Half-way reading it I was already a bit suspicious mostly because of the half-jolly and half-formal style OP used. :) Don't get me wrong, in principle I have nothing against such style. But the thing is that usually when such style is used some nonsense is eventually introduced at some point, and this nonsense is not so easily detectable since it is buried under all these jolly informalities. That is exactly what happened this time too. we can say that the amplitude corresponds to this wavelength Wait, what? Amplitude has no relation whatsoever to wavelength. My question is: is it possible to know the amplitude of oscillation/stiffness constant of a $\ce{O-H}$ bond? Yes, you can calculate the force constants of molecular vibrations (as well as frequencies). These calculations are routinely done in the domain of quantum chemistry these days. • Perhaps you could expand a bit upon both of your responses. At the moment they are more or less unqualified statements with no explanation. – bon Aug 9 '15 at 20:35
https://docs.opencv.org/3.4/d7/d37/tutorial_mat_mask_operations.html
OpenCV 3.4.17-dev
Open Source Computer Vision

Next Tutorial: Operations with images

Mask operations on matrices are quite simple. The idea is that we recalculate each pixel's value in an image according to a mask matrix (also known as a kernel). This mask holds values that will adjust how much influence neighboring pixels (and the current pixel) have on the new pixel value. From a mathematical point of view we make a weighted average, with our specified values.

## Our test case

Let's consider the issue of an image contrast enhancement method. Basically we want to apply the following formula to every pixel of the image:

$I(i,j) = 5*I(i,j) - [ I(i-1,j) + I(i+1,j) + I(i,j-1) + I(i,j+1)]$

$\iff I(i,j)*M, \text{where } M = \bordermatrix{ _i\backslash ^j & -1 & 0 & +1 \cr -1 & 0 & -1 & 0 \cr 0 & -1 & 5 & -1 \cr +1 & 0 & -1 & 0 \cr }$

The first notation is by using a formula, while the second is a compacted version of the first by using a mask. You use the mask by putting the center of the mask matrix (in the upper case noted by the zero-zero index) on the pixel you want to calculate and sum up the pixel values multiplied with the overlapped matrix values. It's the same thing, however in the case of large matrices the latter notation is a lot easier to look over.

## The Basic Method

Now let us see how we can make this happen by using the basic pixel access method or by using the filter2D() function. In the hand-coded approach, we create an output image with the same size and the same type as our input, then apply the formula above to each pixel. As you can see in the storing section, depending on the number of channels we may have one or more subcolumns.

## The filter2D function

Applying such filters is so common in image processing that OpenCV has a function that will take care of applying the mask (also called a kernel in some places). For this you first need to define an object that holds the mask, then call the filter2D() function specifying the input, the output image and the kernel to use. The function even has a fifth optional argument to specify the center of the kernel, a sixth for adding an optional value to the filtered pixels before storing them in the output, and a seventh one for determining what to do in the regions where the operation is undefined (borders).

This function is shorter, less verbose and, because there are some optimizations, it is usually faster than the hand-coded method. For example, in my test the second one took only 13 milliseconds while the first took around 31 milliseconds. Quite some difference.
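The tutorial's own C++ listings did not survive the page extraction, so the sketch below is a minimal Python equivalent of the two approaches (a direct NumPy implementation of the formula and cv2.filter2D) with the same sharpening kernel; it is an illustration, not the original code.

```python
import cv2
import numpy as np

def sharpen_manual(img):
    """Apply the 5/-1 contrast-enhancement mask to every interior pixel."""
    out = np.zeros_like(img)
    src = img.astype(np.int32)
    # skip the 1-pixel border, where the mask would fall outside the image
    out[1:-1, 1:-1] = np.clip(
        5 * src[1:-1, 1:-1]
        - src[:-2, 1:-1] - src[2:, 1:-1]      # pixel above and below
        - src[1:-1, :-2] - src[1:-1, 2:],     # pixel left and right
        0, 255).astype(img.dtype)
    return out

img = cv2.imread("input.jpg")                 # any test image path
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float32)
sharpened = cv2.filter2D(img, -1, kernel)     # ddepth=-1: same depth as the source
manual = sharpen_manual(img)                  # should match, except at the border
```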
https://wizedu.com/questions?page=57
##### Drag the terms on the left to the appropriate blanks on the right to complete the sentences.

Drag the terms on the left to the appropriate blanks on the right to complete the sentences.

In: Biology

##### What are the magnitude and direction of the current in the 20 Ω resistor in (Figure 1)?

What are the magnitude and direction of the current in the 20 Ω resistor in (Figure 1)? Express your answer with the appropriate units. Enter a positive value if the current is clockwise and a negative value if the current is counterclockwise.

In: Physics

##### What is the potential difference ΔVAB?

What is the potential difference ΔVAB?

In: Physics

##### What is the potential difference across the 10 Ω resistor?

What is the potential difference across the 10 Ω resistor in the figure? What is the potential difference across the 20 Ω resistor in the figure?

In: Physics

##### A 193 nm-wavelength UV laser for eye surgery emits a 0.500 mJ pulse.

A 193 nm-wavelength UV laser for eye surgery emits a 0.500 mJ pulse. (a) How many photons does the light pulse contain?

In: Physics

##### Can you match these prefixes, suffixes, and word roots with their definitions?

Can you match these prefixes, suffixes, and word roots with their definitions?

In: Biology

##### What is the magnitude of the net force on the first wire in (Figure 1)?

What is the magnitude of the net force on the first wire in (Figure 1)? What is the magnitude of the net force on the second wire in (Figure 1)? What is the magnitude of the net force on the third wire in (Figure 1)?

In: Physics

##### What magnitude and sign of charge Q will make the force on charge q zero?

(Figure 1) shows four charges at the corners of a square of side L. What magnitude and sign of charge Q will make the force on charge q zero? Q =

In: Physics

##### Multiple-choice questions each have five possible answers (a, b, c, d, e), one of which is correct

Multiple-choice questions each have five possible answers (a, b, c, d, e), one of which is correct. Assume that you guess the answers to three such questions. a. Use the multiplication rule to find P(CCW), where C denotes a correct answer, and W denotes a wrong answer.

In: Statistics and Probability

##### Determine whether the distribution is a discrete probability distribution. If not, state why.

Determine whether the distribution is a discrete probability distribution. If not, state why.

x: 0, 1, 2, 3, 4
P(x): 0.1, 0.5, 0.05, 0.25, 0.1

In: Statistics and Probability

##### Find the indicated z score. The graph depicts the standard normal distribution with mean 0 and standard deviation 1

1) Find the indicated z score. The graph depicts the standard normal distribution with mean 0 and standard deviation 1 (0.9616). 2) Find the area of the shaded region. The graph depicts the standard normal distribution with mean 0 and standard deviation 1. The shaded area is from z = 0.98 to the left.

In: Statistics and Probability

##### Which of the following sets are not well defined? Explain.

Which of the following sets are not well defined? Explain.
a. The set of wealthy school teachers
b. The set of great books
c. The set of natural numbers greater than 100
d. The set of subsets of $$\{1,2,3,4,5,6\}$$
e. The set $$\{x \mid x \neq x$$ and $$x \in N\}$$

In: Chemistry

In: Chemistry

##### In each reaction box, place the best reagent and conditions from the list below. 3-Hexanone should be the exclusive final product.

In each reaction box, place the best reagent and conditions from the list below.
3-Hexanone should be the exclusive final product. (A reagent may be used more than once.)

In: Chemistry
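Several of the items above are one-line calculations; as an example, the photon count for the 193 nm, 0.500 mJ laser pulse follows directly from E_photon = hc/λ. A quick sketch, not tied to any particular textbook's rounding:

```python
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m/s
lam = 193e-9        # wavelength, m
E_pulse = 0.500e-3  # pulse energy, J

E_photon = h * c / lam
print(f"E_photon ~ {E_photon:.3e} J, N ~ {E_pulse / E_photon:.2e} photons")  # ~4.9e14 photons
```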
https://labs.tib.eu/arxiv/?author=Ricard%20Sole
• ### The Morphospace of Consciousness(1705.11190) We construct a complexity-based morphospace to study systems-level properties of conscious & intelligent systems. The axes of this space label 3 complexity types: autonomous, cognitive & social. Given recent proposals to synthesize consciousness, a generic complexity-based conceptualization provides a useful framework for identifying defining features of conscious & synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, we take a perspective on how contemporary artificially intelligent machines & synthetically engineered life forms measure on these scales. It turns out that awareness & wakefulness can be associated to computational & autonomous complexity respectively. Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves, & argue the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third type of complexity for describing consciousness: social complexity. Having identified these complexity types, allows for a representation of both, biological & synthetic systems in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. We identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), & (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence & biomimetics. • ### Zipf's law, unbounded complexity and open-ended evolution(1612.01605) Aug. 7, 2018 physics.soc-ph, q-bio.PE A major problem for evolutionary theory is understanding the so called {\em open-ended} nature of evolutionary change, from its definition to its origins. Open-ended evolution (OEE) refers to the unbounded increase in complexity that seems to characterise evolution on multiple scales. This property seems to be a characteristic feature of biological and technological evolution and is strongly tied to the generative potential associated with combinatorics, which allows the system to grow and expand their available state spaces. Interestingly, many complex systems presumably displaying OEE, from language to proteins, share a common statistical property: the presence of Zipf's law. Given an inventory of basic items (such as words or protein domains) required to build more complex structures (sentences or proteins) Zipf's law tells us that most of these elements are rare whereas a few of them are extremely common. Using Algorithmic Information Theory, in this paper we provide a fundamental definition for open-endedness, which can be understood as {\em postulates}. Its statistical counterpart, based on standard Shannon Information theory, has the structure of a variational problem which is shown to lead to Zipf's law as the expected consequence of an evolutionary process displaying OEE. 
We further explore the problem of information conservation through an OEE process and we conclude that statistical information (standard Shannon information) is not conserved, resulting in the paradoxical situation in which the increase of information content has the effect of erasing itself. We prove that this paradox is solved if we consider non-statistical forms of information. This last result implies that standard information theory may not be a suitable theoretical framework to explore the persistence and increase of the information content in OEE systems. • ### The morphospace of language networks(1803.01934) March 5, 2018 physics.soc-ph, cs.CL Language can be described as a network of interacting objects with different qualitative properties and complexity. These networks include semantic, syntactic, or phonological levels and have been found to provide a new picture of language complexity and its evolution. A general approach considers language from an information theory perspective that incorporates a speaker, a hearer, and a noisy channel. The latter is often encoded in a matrix connecting the signals used for communication with meanings to be found in the real world. Most studies of language evolution deal in one way or another with such a theoretical contraption and explore the outcome of diverse forms of selection on the communication matrix that somewhat optimizes communication. This framework naturally introduces networks mediating the communicating agents, but no systematic analysis of the underlying landscape of possible language graphs has been developed. Here we present a detailed analysis of network properties on a generic model of a communication code, which reveals a rather complex and heterogeneous morphospace of language networks. Additionally, we use curated data of English words to locate and evaluate real languages within this language morphospace. Our findings indicate a surprisingly simple structure in human language unless particles are introduced in the vocabulary, with the ability of naming any other concept. These results refine and for the first time complement with empirical data a lasting theoretical tradition around the framework of \emph{least effort language}. • ### Nonequilibrium entropic bounds for Darwinian replicators(1711.09180) Nov. 25, 2017 nlin.AO Life evolved on our planet by means of a combination of Darwinian selection and innovations leading to higher levels of complexity. The emergence and selection of replicating entities is a central problem in prebiotic evolution. Theoretical models have shown how populations of different types of replicating entities exclude or coexist with other classes of replicators. Models are typically kinetic, based on standard replicator equations. On the other hand, the presence of thermodynamical constraints for these systems remains an open question. This is largely due to the lack of a general theory of statistical methods for systems far from equilibrium. Nonetheless, a first approach to this problem has been put forward in a series of novel developments in non-equilibrium physics, under the rubric of the extended second law of thermodynamics. The work presented here is twofold: firstly, we review this theoretical framework and provide a brief description of the three fundamental replicator types in prebiotic evolution: parabolic, Malthusian and hyperbolic. Finally, we employ these previously mentioned techniques to explore how replicators are constrained by thermodynamics.
• ### Information theory, predictability, and the emergence of complex life(1701.02389) Oct. 17, 2017 physics.soc-ph, q-bio.PE Despite the obvious advantage of simple life forms capable of fast replication, different levels of cognitive complexity have been achieved by living systems in terms of their potential to cope with environmental uncertainty. Against the inevitable cost associated to detecting environmental cues and responding to them in adaptive ways, we conjecture that the potential for predicting the environment can overcome the expenses associated to maintaining costly, complex structures. We present a minimal formal model grounded in information theory and selection, in which successive generations of agents are mapped into transmitters and receivers of a coded message. Our agents are guessing machines and their capacity to deal with environments of different complexity defines the conditions to sustain more complex agents. • ### Rise of the humanbot(1705.05935) May 16, 2017 cs.AI, q-bio.NC The accelerated path of technological development, particularly at the interface between hardware and biology has been suggested as evidence for future major technological breakthroughs associated to our potential to overcome biological constraints. This includes the potential of becoming immortal, having expanded cognitive capacities thanks to hardware implants or the creation of intelligent machines. Here I argue that several relevant evolutionary and structural constraints might prevent achieving most (if not all) these innovations. Instead, the coming future will bring novelties that will challenge many other aspects of our life and that can be seen as other feasible singularities. One particularly important one has to do with the evolving interactions between humans and non-intelligent robots capable of learning and communication. Here I argue that a long term interaction can lead to a new class of "agent" (the humanbot). The way shared memories get tangled over time will inevitably have important consequences for both sides of the pair, whose identity as separated entities might become blurred and ultimately vanish. Understanding such hybrid systems requires a second-order neuroscience approach while posing serious conceptual challenges, including the definition of consciousness. • ### Synthetic associative learning in engineered multicellular consortia(1701.06086) Jan. 21, 2017 q-bio.CB Associative learning is one of the key mechanisms displayed by living organisms in order to adapt to their changing environments. It was early recognized to be a general trait of complex multicellular organisms but also found in "simpler" ones. It has also been explored within synthetic biology using molecular circuits that are directly inspired in neural network models of conditioning. These designs involve complex wiring diagrams to be implemented within one single cell and the presence of diverse molecular wires become a challenge that might be very difficult to overcome. Here we present three alternative circuit designs based on two-cell microbial consortia able to properly display associative learning responses to two classes of stimuli and displaying long and short-term memory (i. e. the association can be lost with time). These designs might be a helpful approach for engineering the human gut microbiome or even synthetic organoids, defining a new class of decision-making biological circuits capable of memory and adaptation to changing conditions. 
The potential implications and extensions are outlined. • ### Spatial dynamics of synthetic microbial hypercycles and their parasites(1701.04767) Jan. 17, 2017 physics.bio-ph Early theoretical work revealed that the simplest class of autocatalytic cycles, known as hypercycles, provide an elegant framework for understanding the evolution of mutualism. Furthermore, hypercycles are highly susceptible to parasites, spatial structure constituting a key protection against them. However, there is an insufficient experimental validation of these theoretical predictions, in addition to little knowledge on how environmental conditions could shape the spatial dynamics of hypercycles. Here, we constructed spatially extended hypercycles by using synthetic biology as a way to design mutualistic and parasitic {\em E. coli} strains. A mathematical model of the hypercycle front expansion is developed, providing analytic estimates of front speed propagation. Moreover, we explore how the environment affects the mutualistic consortium during range expansions. Interestingly, moderate improvements in environmental conditions (namely, increasing the availability of growth-limiting amino acids) can lead to a slowing-down of the front speed. Our agent-based simulations suggest that opportunistic depletion of environmental amino acids can lead to subsequent high fractions of stagnant cells at the front, and thus to the slow-down of the front speed. Moreover, environmental deterioration can also shape the interaction of the parasitic strain towards the hypercycle. On the one hand, the parasite is excluded from the population during range expansions in which the two species mutualism can thrive (in agreement with a classical theoretical prediction). On the other hand, environmental deterioration (e.g., associated with toxic chemicals) can lead to the survival of the parasitic strain, while reshaping the interactions within the three-species. The evolutionary and ecological implications for the design of synthetic consortia are outlined. • ### Population dynamics of synthetic Terraformation motifs(1612.09351) Dec. 30, 2016 q-bio.PE Ecosystems are complex systems, currently experiencing several threats associated with global warming, intensive exploitation, and human-driven habitat degradation. Such threats are pushing ecosystems to the brink of collapse. Because of a general presence of multiple stable states, including states involving population extinction, and due to intrinsic nonlinearities associated with feedback loops, collapse can occur in a catastrophic manner. Such catastrophic shifts have been suggested to pervade many of the future transitions affecting ecosystems at many different scales. Many studies have tried to delineate potential warning signals predicting such ongoing shifts but little is known about how such transitions might be effectively prevented. It has been recently suggested that a potential path to prevent or modify the outcome of these transitions would involve designing synthetic organisms and synthetic ecological interactions that could push these endangered systems out of the critical boundaries. Four classes of such ecological engineering designs or {\em Terraformation motifs} have been defined in a qualitative way. Here we develop the simplest mathematical models associated with these motifs, defining the expected stability conditions and domains where the motifs shall properly work. 
• ### Systems poised to criticality through Pareto selective forces(1510.08697) Pareto selective forces optimize several targets at the same time, instead of single fitness functions. Systems subjected to these forces evolve towards their Pareto front, a geometrical object akin to the thermodynamical Gibbs surface and whose shape and differential geometry underlie the existence of phase transitions. In this paper we outline the connection between the Pareto front and criticality and critical phase transitions. It is shown how, under definite circumstances, Pareto selective forces drive a system towards a critical ensemble that separates the two phases of a first order phase transition. Different mechanisms implementing such Pareto selective dynamics are revised. • ### Is nestedness in mutualistic networks an evolutionary spandrel?(1612.01606) Dec. 6, 2016 q-bio.PE Mutualistic networks have been shown to involve complex patterns of interactions among animal and plant species. The architecture of these webs seems to pervade some of their robust and fragile behaviour. Recent work indicates that there is a strong correlation between the patterning of animal-plant interactions and their phylogenetic organisation. Here we show that such pattern and other reported regularities from mutualistic webs can be properly explained by means of a very simple model of speciation and divergence. This model also predicts a co-extinction dynamics under species loss consistent with the presence of an evolutionary signal. The agreement between observed and model networks suggests that some patterns displayed by real mutualistic webs might actually represent evolutionary spandrels. • ### Spatial self-organization in hybrid models of multicellular adhesion(1601.02918) Jan. 12, 2016 q-bio.PE, q-bio.CB Spatial self-organization emerges in distributed systems exhibiting local interactions when nonlinearities and the appropriate propagation of signals are at work. These kinds of phenomena can be modeled with different frameworks, typically cellular automata or reaction-diffusion systems. A different class of dynamical processes involves the correlated movement of agents over space, which can be mediated through chemotactic movement or minimization of cell-cell interaction energy. A classic example of the latter is given by the formation of spatially segregated assemblies when cells display differential adhesion. Here we consider a new class of dynamical models, involving cell adhesion among two stochastically exchangeable cell states as a minimal model capable of exhibiting well-defined, ordered spatial patterns. Our results suggest that a whole space of pattern-forming rules is hosted by the combination of physical differential adhesion and the value of probabilities modulating cell phenotypic switching, showing that Turing-like patterns can be obtained without resorting to reaction-diffusion processes. If the model is expanded allowing cells to proliferate and die in an environment where diffusible nutrient and toxic waste are at play, different phases are observed, characterized by regularly spaced patterns. The analysis of the parameter space reveals that certain phases reach higher population levels than other modes of organization. A detailed exploration of the mean-field theory is also presented. Finally we let populations of cells with different adhesion matrices compete for reproduction, showing that, in our model, structural organization can improve the fitness of a given cell population. 
The implications of these results for ecological and evolutionary models of pattern formation and the emergence of multicellularity are outlined. • ### Emergence of proto-organisms from bistable stochastic differentiation and adhesion(1511.02079) Nov. 6, 2015 q-bio.PE, q-bio.TO The rise of multicellularity in the early evolution of life represents a major challenge for evolutionary biology. Guidance for finding answers has emerged from disparate fields, from phylogenetics to modelling and synthetic biology, but little is known about the potential origins of multicellular aggregates before genetic programs took full control of developmental processes. Such aggregates should involve spatial organisation of differentiated cells and the modification of flows and concentrations of metabolites within well defined boundaries. Here we show that, in an environment where limited nutrients and toxic metabolites are introduced, a population of cells capable of stochastic differentiation and differential adhesion can develop into multicellular aggregates with a complex internal structure. The morphospace of possible patterns is shown to be very rich, including proto-organisms that display a high degree of organisational complexity, far beyond simple heterogeneous populations of cells. Our findings reveal that there is a potentially enormous richness of organismal complexity between simple mixed cooperators and embodied living organisms. • ### Multiobjective Optimization and Phase Transitions(1509.04644) Sept. 15, 2015 physics.soc-ph Many complex systems obey to optimality conditions that are usually not simple. Conflicting traits often interact making a Multi Objective Optimization (MOO) approach necessary. Recent MOO research on complex systems report about the Pareto front (optimal designs implementing the best trade-off) in a qualitative manner. Meanwhile, research on traditional Simple Objective Optimization (SOO) often finds phase transitions and critical points. We summarize a robust framework that accounts for phase transitions located through SOO techniques and indicates what MOO features resolutely lead to phase transitions. These appear determined by the shape of the Pareto front, which at the same time is deeply related to the thermodynamic Gibbs surface. Indeed, thermodynamics can be written as an MOO from where its phase transitions can be parsimoniously derived; suggesting that the similarities between transitions in MOO-SOO and Statistical Mechanics go beyond mere coincidence. • ### Phase transitions in Pareto optimal complex networks(1505.06937) The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem finding phase transitions of different kinds. 
Distinct phases are associated to different arrangements of the connections; but the need of drastic topological changes does not determine the presence, nor the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints. • ### Field theory of molecular cooperators(1508.01422) Aug. 6, 2015 nlin.AO It has been suggested that major transitions in evolution require the emergence of novelties, often associated to the cooperative behaviour of previously existing objects or agents. A key innovation involves the first cooperative interactions among molecules in a prebiotic biosphere. One of the simplest scenarios includes two molecular species capable of helping each other forming a catalytic loop or hypercycle. The second order kinetics of the hypercycle implies a hyperbolic growth dynamics, capable of overcoming some selection barriers associated to non-cooperative molecular systems. Moreover, it has been suggested that molecular replicators might have benefited from a limited diffusion associated to their attachment to surfaces: evolution and escape from extinction might have been tied to living on a surface. In this paper we propose a field theoretical model of the hypercycle involving reaction and diffusion through the use of a many-body Hamiltonian. This treatment allows a characterisation of the spatially correlated dynamics of the system, where the critical dimension is found to be $d_c=2$. We discuss the role of surface dynamics as a selective advantage for the system's survival. • ### Synthetic circuit designs for Earth terraformation(1503.05043) March 17, 2015 q-bio.QM, nlin.AO Mounting evidence indicates that our planet might experience runaway effects associated to rising temperatures and ecosystem overexploitation, leading to catastrophic shifts on short time scales. Remediation scenarios capable of counterbalancing these effects involve geoengineering, sustainable practices and carbon sequestration, among others. None of these scenarios seems powerful enough to achieve the desired restoration of safe boundaries. We hypothesise that synthetic organisms with the appropriate engineering design could be used to safely prevent declines in some stressed ecosystems and help improving carbon sequestration. Such schemes would include engineering mutualistic dependencies preventing undesired evolutionary processes. We hypothesise that some particular design principles introduce unescapable constraints to the engineered organisms that act as effective firewalls. Testing this designed organisms can be achieved by using controlled bioreactor models and accurate computational models including different scales (from genetic constructs and metabolic pathways to population dynamics). Our hypothesis heads towards a future anthropogenic action that should effectively act as Terraforming agents. It also implies a major challenge in the existing biosafety policies, since we suggest release of modified organisms as potentially necessary strategy for success. • ### Non-Equilibrium Thermodynamics of Self-Replicating Protocells(1503.04683) March 16, 2015 cond-mat.soft, physics.bio-ph We provide a non-equilibrium thermodynamic description of the life-cycle of a droplet based, chemically feasible, system of protocells. 
By coupling the protocells metabolic kinetics with its thermodynamics, we demonstrate how the system can be driven out of equilibrium to ensure protocell growth and replication. This coupling allows us to derive the equations of evolution and to rigorously demonstrate how growth and replication life-cycle can be understood as a non-equilibrium thermodynamic cycle. The process does not appeal to genetic information or inheritance, and is based only on non-equilibrium physics considerations. Our non-equilibrium thermodynamic description of simple, yet realistic, processes of protocell growth and replication, represents an advance in our physical understanding of a central biological phenomenon both in connection to the origin of life and for modern biology. • ### Bioengineering the biosphere?(1410.8708) Nov. 21, 2014 nlin.AO, q-bio.PE Our planet is experiencing an accelerated process of change associated to a variety of anthropogenic phenomena. The future of this transformation is uncertain, but there is general agreement about its negative unfolding that might threaten our own survival. Furthermore, the pace of the expected changes is likely to be abrupt: catastrophic shifts might be the most likely outcome of this ongoing, apparently slow process. Although different strategies for geo-engineering the planet have been advanced, none seem likely to safely revert the large-scale problems associated to carbon dioxide accumulation or ecosystem degradation. An alternative possibility considered here is inspired in the rapidly growing potential for engineering living systems. It would involve designing synthetic organisms capable of reproducing and expanding to large geographic scales with the goal of achieving a long-term or a transient restoration of ecosystem-level homeostasis. Such a regional or even planetary-scale engineering would have to deal with the complexity of our biosphere. It will require not only a proper design of organisms but also understanding their place within ecological networks and their evolvability. This is a likely future scenario that will require integration of ideas coming from currently weakly connected domains, including synthetic biology, ecological and genome engineering, evolutionary theory, climate science, biogeography and invasion ecology, among others. • ### Towards a mathematical theory of meaningful communication(1004.1999) Aug. 21, 2013 cs.IT, math.IT, nlin.AO, q-bio.OT Despite its obvious relevance, meaning has been outside most theoretical approaches to information in biology. As a consequence, functional responses based on an appropriate interpretation of signals has been replaced by a probabilistic description of correlations between emitted and received symbols. This assumption leads to potential paradoxes, such as the presence of a maximum information associated to a channel that would actually create completely wrong interpretations of the signals. Game-theoretic models of language evolution use this view of Shannon's theory, but other approaches considering embodied communicating agents show that the correct (meaningful) match resulting from agent-agent exchanges is always achieved and natural systems obviously solve the problem correctly. How can Shannon's theory be expanded in such a way that meaning -at least, in its minimal referential form- is properly incorporated? 
Inspired by the concept of {\em duality of the communicative sign} stated by the swiss linguist Ferdinand de Saussure, here we present a complete description of the minimal system necessary to measure the amount of information that is consistently decoded. Several consequences of our developments are investigated, such the uselessness of an amount of information properly transmitted for communication among autonomous agents. • ### Measuring the Hierarchy of Feedforward Networks(1011.4394) In this paper we explore the concept of hierarchy as a quantifiable descriptor of ordered structures, departing from the definition of three conditions to be satisfied for a hierarchical structure: {\em order}, {\em predictability} and {\em pyramidal structure}. According to these principles we define a hierarchical index taking concepts from graph and information theory. This estimator allows to quantify the hierarchical character of any system susceptible to be abstracted in a feedforward causal graph, i.e., a directed acyclic graph defined in a single connected structure. Our hierarchical index is a balance between this predictability and pyramidal condition by the definition of two entropies: one attending the onward flow and other for the backward reversion. We show how this index allows to identify hierarchical, anti-hierarchical and non hierarchical structures. Our formalism reveals that departing from the defined conditions for a hierarchical structure, feedforward trees and the inverted tree graphs emerge as the only causal structures of maximal hierarchical and anti-hierarchical systems, respectively. Conversely, null values of the hierarchical index are attributed to a number of different configuration networks; from linear chains, due to their lack of pyramid structure, to full-connected feedforward graphs where the diversity of onward pathways is canceled by the uncertainty (lack of predictability) when going backwards. Some illustrative examples are provided for the distinction among these three types of hierarchical causal graphs. • ### Topological reversibility and causality in feed-forward networks(1007.1829) Systems whose organization displays causal asymmetry constraints, from evolutionary trees to river basins or transport networks, can be often described in terms of directed paths (causal flows) on a discrete state space. Such a set of paths defines a feed-forward, acyclic network. A key problem associated with these systems involves characterizing their intrinsic degree of path reversibility: given an end node in the graph, what is the uncertainty of recovering the process backwards until the origin? Here we propose a novel concept, \textit{topological reversibility}, which rigorously weigths such uncertainty in path dependency quantified as the minimum amount of information required to successfully revert a causal path. Within the proposed framework we also analytically characterize limit cases for both topologically reversible and maximally entropic structures. The relevance of these measures within the context of evolutionary dynamics is highlighted. • ### Universality of Zipf's Law(1001.2733) Zipf's law is the most common statistical distribution displaying scaling behavior. Cities, populations or firms are just examples of this seemingly universal law. Although many different models have been proposed, no general theoretical explanation has been shown to exist for its universality. 
Here we show that Zipf's law is, in fact, an inevitable outcome of a very general class of stochastic systems. Borrowing concepts from Algorithmic Information Theory, our derivation is based on the properties of the symbolic sequence obtained through successive observations over a system with an unbounded number of possible states. Specifically, we assume that the complexity of the description of the system provided by the sequence of observations is the one expected for a system evolving to a stable state between order and disorder. This result is obtained from a small set of mild, physically relevant assumptions. The general nature of our derivation and its model-free basis would explain the ubiquity of such a law in real systems. • ### Analysis of major failures in Europe's power grid(0906.1109) June 5, 2009 physics.data-an Power grids are prone to failure. Time series of reliability measures such as total power loss or energy not supplied can give significant account of the underlying dynamical behavior of these systems, specially when the resulting probability distributions present remarkable features such as an algebraic tail, for example. In this paper, seven years (from 2002 to 2008) of Europe's transport of electricity network failure events have been analyzed and the best fit for this empirical data probability distribution is presented. With the actual span of available data and although there exists a moderate support for the power law model, the relatively small amount of events contained in the function's tail suggests that other causal factors might be significantly ruling the system's dynamics. • ### Analytic solution of Hubbell's model of local community dynamics(physics/0305022) Recent theoretical approaches to community structure and dynamics reveal that many large-scale features of community structure (such as species-rank distributions and species-area relations) can be explained by a so-called neutral model. Using this approach, species are taken to be equivalent and trophic relations are not taken into account explicitly. Here we provide a general analytic solution to the local community model of Hubbell's neutral theory of biodiversity by recasting it as an urn model i.e.a Markovian description of states and their transitions. Both stationary and time-dependent distributions are analysed. The stationary distribution -- also called the zero-sum multinomial -- is given in closed form. An approximate form for the time-dependence is obtained by using an expansion of the master equation. The temporal evolution of the approximate distribution is shown to be a good representation for the true temporal evolution for a large range of parameter values.
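The closing abstract recasts Hubbell's local-community dynamics as an urn-type Markov process: at each step one individual dies and is replaced either by an immigrant from the metacommunity (with probability m) or by the offspring of a randomly chosen local individual. The Python sketch below simulates that zero-sum process; it is only an illustration of the model being solved, not the analytic solution derived in the paper, and the metacommunity pool, parameter values, and function name are made up.

```python
import random
from collections import Counter

def hubbell_local_community(J=200, m=0.1, n_species=50, steps=20000, seed=1):
    """Crude zero-sum simulation of Hubbell's local-community (urn) dynamics.

    At each step one individual dies; with probability m it is replaced by an
    immigrant drawn from a fixed metacommunity, otherwise by the offspring of a
    randomly chosen local individual.  The metacommunity abundances used here
    are a geometric stand-in, not Hubbell's metacommunity distribution.
    """
    random.seed(seed)
    meta = [s for s in range(n_species) for _ in range(max(1, int(100 * 0.85 ** s)))]
    community = [random.choice(meta) for _ in range(J)]
    for _ in range(steps):
        i = random.randrange(J)                      # one individual dies
        if random.random() < m:
            community[i] = random.choice(meta)       # replaced by an immigrant
        else:
            community[i] = random.choice(community)  # replaced by a local offspring
            # (parent drawn from the whole community, for simplicity)
    return Counter(community)

print(sorted(hubbell_local_community().values(), reverse=True))  # rank-abundance sample
```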
2019-11-22 21:49:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4898134768009186, "perplexity": 1431.6332250321188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00141.warc.gz"}
https://blog.jpolak.org/?tag=orbital-integral
# Tag Archives: orbital integral ## Calculation of an Orbital Integral In the Arthur-Selberg trace formula and other formulas, one encounters so-called ‘orbital integrals’. These integrals might appear forbidding and abstract at first, but actually they are quite concrete objects. In this post we’ll look at an example that should make orbital integrals seem more friendly and approachable. Let $k = \mathbb{F}_q$ be a finite field […]
2021-04-14 08:24:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9176939129829407, "perplexity": 1321.3325316245175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00022.warc.gz"}
http://mathhelpforum.com/trigonometry/88454-law-sines-problem-ssa.html
# Math Help - Law of Sines problem (SSA) 1. ## Law of Sines problem (SSA) The question reads: Solve triangle ABC if A = 75°, a = 5, and b = 7. a) B = 43.6°, C = 61.4°, c = 6.4 b) B = 136.4°, C = 31.6°, c = 2.7 c) B = 46.4°, C = 58.6°, c = 4.4 d) no solution First, I try to solve for angle B: $\frac{a}{\sin A} = \frac{b}{\sin B}$ $\frac{5}{\sin 75} = \frac{7}{\sin B}$ $5 \sin B = 7\sin 75$ $\sin B = \frac{7\sin 75}{5}$ $\sin B = 1.3523$ $\sin^{-1}(1.3523) =$ undefined So is the answer "No Solution"? Or did I make a mistake? 2. Originally Posted by AlderDragon The question reads: Solve triangle ABC if A = 75°, a = 5, and b = 7. a) B = 43.6°, C = 61.4°, c = 6.4 b) B = 136.4°, C = 31.6°, c = 2.7 c) B = 46.4°, C = 58.6°, c = 4.4 d) no solution First, I try to solve for angle B: $\frac{a}{\sin A} = \frac{b}{\sin B}$ $\frac{5}{\sin 75} = \frac{7}{\sin B}$ $5 \sin B = 7\sin 75$ $\sin B = \frac{7\sin 75}{5}$ $\sin B = 1.3523$ $\sin^{-1}(1.3523) =$ undefined So is the answer "No Solution"? Or did I make a mistake? I get an answer of no solution too using the sine rule. 3. Originally Posted by AlderDragon The question reads: Solve triangle ABC if A = 75°, a = 5, and b = 7. a) B = 43.6°, C = 61.4°, c = 6.4 b) B = 136.4°, C = 31.6°, c = 2.7 c) B = 46.4°, C = 58.6°, c = 4.4 d) no solution In the ambiguous case, for a triangle to exist, the following inequality must be true: $a \ge b\sin{A}$. Note that $b\sin{A} = 7\sin(75^\circ) = 6.76 > 5$, so no triangle is possible. 4. Originally Posted by skeeter In the ambiguous case, for a triangle to exist, the following inequality must be true: $a \ge b\sin{A}$. Note that $b\sin{A} = 7\sin(75^\circ) = 6.76 > 5$, so no triangle is possible. Thank you! That clears it up.
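As a quick numerical cross-check of the inequality used in the thread (not part of the original posts; the variable names are mine), the snippet below shows that $b\sin A > a$ here, so $\sin B$ would exceed 1 and $\sin^{-1}$ is indeed undefined:

```python
import math

A, a, b = 75.0, 5.0, 7.0            # the SSA data from the thread (A in degrees)

h = b * math.sin(math.radians(A))   # b*sin(A), the smallest value a can take for a triangle
print(f"b*sin(A) = {h:.2f}  vs  a = {a}")     # 6.76 vs 5.0  ->  b*sin(A) > a

sin_B = h / a                        # law of sines: sin(B) = b*sin(A)/a
if sin_B > 1:
    print("sin(B) =", round(sin_B, 4), "> 1, so angle B does not exist: no solution (d)")
else:
    print("B =", round(math.degrees(math.asin(sin_B)), 1), "degrees")
```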
2016-05-30 05:42:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7146216630935669, "perplexity": 2968.8909247389206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049288709.66/warc/CC-MAIN-20160524002128-00084-ip-10-185-217-139.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/36701-power-series-limit-help.html
# Math Help - Power series limit help 1. ## Power series limit help Hi, I'm having trouble understanding part of the power series. I'm trying to find the radius of convergence using the ratio test; the initial problem is the sum from k=1 to infinity of x^k/(1+k^2). When I use the ratio test I get it down to xk^2/(k+1)^2, and my book goes from this to just the absolute value of x. If you expand (k+1)^2 the k^2's cancel, but what happens to the 2k+1? Are you allowed to just drop it, or am I missing something? Thanks for the help. 2. Originally Posted by cowboys111 Hi, I'm having trouble understanding part of the power series. I'm trying to find the radius of convergence using the ratio test; the initial problem is the sum from k=1 to infinity of x^k/(1+k^2). When I use the ratio test I get it down to xk^2/(k+1)^2, and my book goes from this to just the absolute value of x. If you expand (k+1)^2 the k^2's cancel, but what happens to the 2k+1? Are you allowed to just drop it, or am I missing something? I fear that you may be missing a lot. $\frac{k^2}{\left(1 + k\right)^2} = \left(\frac{1}{\frac{1}{k} + 1}\right)^2 \to 1$ 3. Originally Posted by cowboys111 Hi, I'm having trouble understanding part of the power series. I'm trying to find the radius of convergence using the ratio test; the initial problem is the sum from k=1 to infinity of x^k/(1+k^2). When I use the ratio test I get it down to xk^2/(k+1)^2, and my book goes from this to just the absolute value of x. If you expand (k+1)^2 the k^2's cancel, but what happens to the 2k+1? Are you allowed to just drop it, or am I missing something? Thanks for the help. $\sum_{k=1}^{\infty}\frac{x^k}{1+k^2}$ $\lim_{k \to \infty} \left| \frac{\frac{x^{k+1}}{1+(k+1)^2}}{\frac{x^k}{1+k^2}} \right| = \lim_{k \to \infty} \left| \frac{x^{k+1}}{k^2+2k+2} \cdot \frac{k^2+1}{x^k} \right| = \lim_{k \to \infty} \left| \frac{x(1+\frac{1}{k^2})}{1+\frac{2}{k}+\frac{2}{k^2}} \right| = |x|$ The second-to-last step comes from multiplying the numerator and denominator by $\frac{1}{k^2}$.
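A small numerical illustration of the same point (not from the thread; the values and names are my own): the leftover terms only contribute a factor $\frac{k^2+1}{k^2+2k+2}$ that tends to 1, so the whole ratio tends to $|x|$ and the radius of convergence is 1.

```python
# Illustration only: the factor (k^2+1)/((k+1)^2+1) from the ratio test tends to 1,
# so the ratio itself tends to |x|.
x = 0.7
for k in (10, 100, 1000, 10**6):
    factor = (k**2 + 1) / ((k + 1)**2 + 1)   # exact leftover factor
    print(k, factor, abs(x) * factor)         # factor -> 1, product -> |x| = 0.7
```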
2015-10-07 04:42:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8048810958862305, "perplexity": 174.148174426803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682102.57/warc/CC-MAIN-20151001215802-00070-ip-10-137-6-227.ec2.internal.warc.gz"}
https://eprints.iisc.ac.in/3188/
# Crystal and Molecular Structure of Benzyloxycarbonyl-$\alpha$-Aminoisobutyryl-L-Prolyl Methylamide: The Observation of an $X_2-{Pro}_3$ Type III $\beta$-Turn Prasad, Venkataram BV and Shamala, N and Nagaraj, R and Chandrasekaran, R and Balaram, P (1979) Crystal and Molecular Structure of Benzyloxycarbonyl-$\alpha$-Aminoisobutyryl-L-Prolyl Methylamide: The Observation of an $X_2-{Pro}_3$ Type III $\beta$-Turn. In: Biopolymers, 18 (7). pp. 1635-1646. The crystal and molecular structure of N-benzyloxycarbonyl-$\alpha$-aminoisobutyryl-L-prolyl methylamide, the amino-terminal dipeptide fragment of alamethicin, has been determined using direct methods. The compound crystallizes in the orthorhombic system with the space group $P2_12_12_1$. Cell dimensions are a = 7.705 Å, b = 11.365 Å, and c = 21.904 Å. The structure has been refined using conventional procedures to a final R factor of 0.054. The molecular structure possesses a $4 \rightarrow 1$ intramolecular N-H···O hydrogen bond formed between the CO group of the urethane moiety and the NH group of the methylamide function. The peptide backbone adopts the type III $\beta$-turn conformation, with $\phi_2 = -51.0^\circ$, $\psi_2 = -39.7^\circ$, $\phi_3 = -65.0^\circ$, $\psi_3 = -25.4^\circ$. An unusual feature is the occurrence of the proline residue at position 3 of the $\beta$-turn. The observed structure supports the view that Aib residues initiate the formation of type III $\beta$-turn conformations. The pyrrolidine ring is puckered in $C^{\gamma}$-exo fashion.
2022-08-14 13:18:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7623430490493774, "perplexity": 5385.149312284565}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00168.warc.gz"}
http://pnck.oenw.pw/paths-with-sum.html
# Paths With Sum Information about failed requests isn't published anywhere. CONJECTURES - Discovering Geometry Chapter 2 C-1 Linear Pair Conjecture - If two angles form a linear pair, then the measures of the angles add up to 180°. In Python 2. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. what i mean is that for any number at a give position you have calculated the sum up to its above line so u have two choice to choose from so you choose the larger one. I've tried a few ways but I either get a sum of the whole dataset, or a sequence of numbers on all the underlying routes. 43 lines (34. Software Update Manager (SUM) introduction. "Leetcode: Path Sum" is published by Rachit Gupta. Therefore the "sum" is the number itself. One point is clear: the weight factor must be chosen such that the classical path is singled out in the limit ¯h → 0. In below code, this sum is stored in ‘max_single’ and returned by the recursive function. Learn, teach, and study with Course Hero. AU - Matsubara, Ryota. I am just learning using Ubuntu and when I tried to configure the DNS, I need to deal with something call localhost, loopback interface. The Update Tool Software Update Manager (SUM) The Software Update Manager (SUM) is a multi-purpose tool that supports various processes, such as performing a release upgrade, installing enhancement packages, applying Support Package Stacks, installing add-ons, or updating single components. This type of variable (with the exception of auto_resume and histchars) is defined in CAPITAL LETTERS. Print every path in the tree with sum of the nodes in the path as k. "The sum of the currents through each path is equal to the total current that flows from the source. The cost (or length or weight) of the path P is the sum of the weights of edges in the sequence. SUM Function is a very popular and useful formula in Microsoft Excel. 1071 79 Favorite Share. It will then be described a new version of CoSE, which will be designated as CoSE-MS, for solving the min-sum problem for SRLG diverse routing. This is a rather interesting result that says that in a blocked game of single spinner dominoes, the sum of the four arms of the tableau must always total to an even number. But sum is not. This number represents who you are at birth and the native traits that you will carry with you through life. Each step you may move to adjacent numbers on the row below. The sum of all activity variances is 64, while the sum of variances along the critical path is 36. Note: You can only move either down or right at any point in time. Because you are lively, you can accomplish everything planned for the day and you could make sure that you could make your path to your achievement. I have written 3 solutions: one recursive, one using 2 stacks (DFS), and last one using 2 queues (BFS). Design an algorithm to print all paths which sum up to that value. The path may start and end at any node in the tree. Much has been written about the impact HSM has had on CNC machine tools, spindles, toolholders, cutting tools, and controls. Educational games for grades PreK through 6 that will keep kids engaged and having fun. Get as many scores as possible until the game is over!. all_pairs_dijkstra_path (G, weight = weight) else: ## Find paths from. Jian Lu's blog and personal site. Can anyone explain me the differences between them and the m. set your path properly to Python lib directory. 
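Several of the fragments above describe the triangle walk: each step moves to an adjacent number on the row below, and for any position you take the larger of the two sums reachable from the row above. A minimal bottom-up dynamic-programming sketch of that idea follows (the function name is mine), using a small three-row triangle (75 / 95 64 / 17 47 82) as the example; its best path 75 → 64 → 82 sums to 221.

```python
def max_triangle_path(triangle):
    """Maximum top-to-bottom path sum when each step moves to an adjacent
    number on the row below.  Classic bottom-up dynamic programming."""
    best = list(triangle[-1])                 # best sums starting from the last row
    for row in reversed(triangle[:-1]):       # fold the rows upward
        best = [v + max(best[i], best[i + 1]) for i, v in enumerate(row)]
    return best[0]

print(max_triangle_path([[75], [95, 64], [17, 47, 82]]))   # 221, via 75 -> 64 -> 82
```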
in Japan, is the leading provider of high-performance software tools for engineering, science, and mathematics. 0 and later the. Returns a delimited text string with the identifiers of all the parents of the current identifier, starting with the oldest and continuing until current. In this situation, you can copy the full path of the specified excel file and paste it to any other places you can quickly find. Java Solution. CONJECTURES - Discovering Geometry Chapter 2 C-1 Linear Pair Conjecture - If two angles form a linear pair, then the measures of the angles add up to 180°. The path may start and end at any node in the tree. Print every path in the tree with sum of the nodes in the path as k. Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path. For example consider the following Binary Tree. Jul 27, 2017- Explore summerest's board "paths" on Pinterest. Madhukar Bagla, Nothing I say will be enough to sum up how happy I am that we went on this journey together and had each other to lean on, to argue with, and to find comfort in. Design an algorithm to print all paths which sum up to that value. Objective: Print all the paths from left top corner to right bottom corner in two dimensional array. But that still leaves us the question of which point P to choose on the line CD to minimize the sum of the distances. Let's look at some Excel HYPERLINK examples and explore how to use the HYPERLINK function as a worksheet function in Microsoft Excel: In our first example, we're using the HYPERLINK function to reference a file called "Doc1. get_json_object(string json_string, string path) Extracts json object from a json string based on json path specified, and returns json string of the extracted json object. The day of your birth indicates your primary birth path. Thus the increase in the sum of the valences is two. If you have a Master Number also read the Life Path description for your secondary trait. For problems encountered starting or stopping the SUM tool, problems with resetting and or downloading the SUM tool. path, we can assume that project completion time is described by a normal probability distribution with mean equal to the sum of the expected times of all activities on the critical path and variance equal to the sum of the variances of all activities on the critical path. 1 Manipulating the Load Path. The idea is to keep trace of four paths and pick up the max one in the end. 75 95 64 17 47 82. But i am getting incorrect output. Now recursively call pathSum(root. Learning Anywhere, Any Time, On Any Device. Function to check there is path in the binary tree with a given sum Posted on May 31, 2014 by Gyaneshwar Pardhi This function have trick call same function again for children nodes with remaining sum, at the end if sum become zero ie there is the path for a given sum. [Path] = PATH ( Nodes[Name], Nodes[Parent] ). 1) Recursively solve this problem 2) Get largest left sum and right sum 2) Compare to the stored maximum. For instance, let’s say you start at and you then have a displacement of 8 meters to the left followed by a second displacement of 3 meters right. The thin yellow-colored curve shows the trajectory of the sun, the yellow deposit shows the variation of the path of the sun throughout the year. 
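One of the snippets on this page describes the standard recursion for the root-to-leaf "Path Sum" check: call the same function on the children with the remaining sum, and report success when the remainder reaches zero at a leaf. Here is a minimal Python sketch of that idea (the Node class and names are mine, and this is not the code the snippet refers to), tried on the sum = 22 tree that recurs on the page.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def has_path_sum(node, target):
    """True if some root-to-leaf path adds up to target."""
    if node is None:
        return False
    remaining = target - node.val             # pass the remaining sum downwards
    if node.left is None and node.right is None:
        return remaining == 0                 # leaf: a path exists iff nothing is left
    return has_path_sum(node.left, remaining) or has_path_sum(node.right, remaining)

# The sum = 22 example tree used on this page (path 5 -> 4 -> 11 -> 2):
root = Node(5, Node(4, Node(11, Node(7), Node(2))),
               Node(8, Node(13), Node(4, right=Node(1))))
print(has_path_sum(root, 22))   # True
```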
Worksheets: 2-digit addition with missing numbers (sum under 100) Below are six versions of our grade 4 math worksheet; the students must find the missing number in each addition equation. At each of its ends, it makes the valence of the existing vertex go up by one. Write a program which takes as input an integer and a binary tree with integer weights, and checks if there exists a leaf whose path weight equals the specified sum. If this circuit was a string of light bulbs, and one blew out, the remaining bulbs would turn off. There are many measures for path optimality, depending on the problem. Money Mustache] on Amazon. When backscatter can be ignored, the exact solution is constructed as a formal sum (path integral) over all such paths. Path Graphs. The Shortest s-t Path Problem Min-Sum Conclusion Shortest s-t Paths Using Min-Sum Nicholas Ruozzi and Sekhar Tatikonda Yale University September 25, 2008 Nicholas Ruozzi and Sekhar Tatikonda Shortest s-t Paths Using Min-Sum. Given a binary tree and an integer k. Jian Lu's blog and personal site. 1 day ago · The surrendering of oneself does not lead to laxity either. Two paths are vertex-independent (alternatively, internally vertex-disjoint) if they do not have any internal vertex in. 解题思路:把当前要找的sum减去node节点的值作为新的sum的值,然后递归求解左指数,递归求解右指数,直到leaf节点,判断当前剩下的sum跟leaf节点的. I have already been carrying this out for six months using a simple software (follow the Loans Online With No Checking Account links to a website to discover more). Given a Binary Tree and a sum s, your task is to check whether there is a root to leaf path in that tree with the following sum. Because sum is changing with each new path taken the sum is constantly changing and is of different value based on path. Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path. Only condition to satisfy is that you can change from one array to another only when the elements are matching. Design an algorithm to count # the number of paths that sum to a given value. Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path. This material describes methods of presenting quantum mechanics using the path-integral formulation. With a lot choice available and so 400 Loans On Payments Bad Credit Ok much range, increasingly we have been counting on personal recommendations, and online reviews, to make up our thoughts on whether to commit to a retailer or service provider, particularly if chances are to become to get a significant sum or a long-term contractual. 59 billion, both up close to 17 percent from the year prior. United Auto Workers striking General Motors Co. Given a binary tree and a sum, find all root-to-leaf paths where each path's sum equals the given sum. Note: A leaf is a node with no children. A path graph is a graph consisting of a single path. EE302 Controls – Mason’s Gain Rule for Block Diagrams DePiero Mason’s Gain Rule is a technique for finding an overall transfer function. The Update Tool Software Update Manager (SUM) The Software Update Manager (SUM) is a multi-purpose tool that supports various processes, such as performing a release upgrade, installing enhancement packages, applying Support Package Stacks, installing add-ons, or updating single components. 75 95 64 17 47 82. PDF | We outline an introduction to quantum mechanics based on the sum-over-paths method originated by Richard P. 
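The sentence repeated most often on this page is the grid variant: given an m x n grid of non-negative numbers, find the path from the top-left to the bottom-right corner that minimizes the sum, moving only down or right. A short dynamic-programming sketch follows (my own illustration; the example grid and function name are made up).

```python
def min_path_sum(grid):
    """Minimum top-left -> bottom-right path sum, moving only down or right."""
    m, n = len(grid), len(grid[0])
    dp = [row[:] for row in grid]              # dp[i][j] = cheapest cost to reach (i, j)
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            best_prev = min(dp[i - 1][j] if i > 0 else float("inf"),
                            dp[i][j - 1] if j > 0 else float("inf"))
            dp[i][j] += best_prev
    return dp[-1][-1]

print(min_path_sum([[1, 3, 1],
                    [1, 5, 1],
                    [4, 2, 1]]))   # 7  (path 1 -> 3 -> 1 -> 1 -> 1)
```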
Distributing Python Modules publishing modules for installation by others. Villagers then create a new path around it, destroying the school's hedges and gardens. Extract Data From Closed Workbooks Data from a closed workbook is duplicated in the open workbook. Often forgotten is high speed machining’s impact on tool path programming. Continue in a westward direction to reach Rata Sum (red path). if "Global Maximum" sum is less than the sum, "Global maximum" sum = the sum. Print every path in the tree with sum of the nodes in the path as k. This article demonstrates the use SAP Software Update Manager in applying Support Package Stack in a dual SAP PI system. Each node has value of an integer number (decimal). The path of the Sun across the celestial sphere is very close to that of the planets and the moon. Read honest and unbiased product reviews from our users. For more information, see the attached workbook. they need not be root node and leaf node; and negative numbers can also be there in the tree. - Get-FolderSize. Note: A leaf is a node with no children. Schulman Physics Departments Clarkson University, Potsdam, NY 13676 USA and, Technion, Haifa, Israel The three parts of this article are three kinds of introduction to the path integral. Do not assume that each path sum is added to dictionary once - Debugging and then find the design issue. Path with given sum in binary search tree. Stream Metal - Valve Driver (on a split path, so mixed with clean input) - SV Beast Brt (after sum) - 8x10 SV Beast (47 Cond) - Studio Comp - Simple EQ (+3dB Low) by Line 6 from desktop or your mobile device. Example: Given the below binary tree and sum = 22,. The syntax of the query is given as follows:. But i am getting incorrect output. Feynman used a different kind of integral, and the terminology's confused - the two types of path integrals don't mean the same thing. 5 / \ 4 8 / / \ 11 13 4 / \ / \ 7 2 5 1. In practice, an average path gain value is a large negative dB value. Abstract: Weak values, obtained from weak measurements, attempt to describe the properties of a quantum system as it evolves from an initial to a final state, without practically altering this evolution. No more passive learning. The task is to count the number of paths in the tree with the sum of the nodes equals to k. This is why on reports you may see it written as 11/2 or 29/2 to indicate the secondary traits. Objective: - Given a binary tree and X, write an algorithm to Print all the paths starting from root so that sum of all the nodes in path equals to a given number. A matrix of integer numbers is given, each cell representing a weight. 75->64->82=221. A "New Path" or "New Polygon" dialog will pop up. It is helpful when trying to simplify complex systems. It basically gives a undirected graph (tree-like: no multiple paths between two nodes) and asks for the sum of all possible paths between any pair of nodes in the graph (each path must be counted only once, in other words, if you have already counted the path from A to B, you shouldn't count the path from B to A). sum of activity times on the path Critical path The path with the longest path from AST 22201 at HHL Leipzig Graduate School of Management. It's time is too expensive and fails the online judgement. /* //A binary tree node struct Node { int. by David Stevens. Worksheets: 2-digit addition with missing numbers (sum under 100) Below are six versions of our grade 4 math worksheet; the students must find the missing number in each addition equation. 
Sun Path Products is the world's premier manufacturer of harness container systems for sport and military parachuting. LeetCode - Path Sum II (Java) Given a binary tree and a sum, find all root-to-leaf paths where each path's sum equals the given sum. For the paths themselves, you need to store the end node in the hashtable (we know where it starts, and there's only one path between two nodes in a tree). Input: First line consists of T test cases. Description of the illustration sys_connect_by_path. A matrix of integer numbers is given, each cell representing a weight. For example: Given the below binary tree and sum = 22, 5 / \ 4 8 / / \ 11 13 4 / \ \ 7 2 1 return true, as there exists a root-to-leaf path…. We will only consider paths that are made out of straight lines; call such a path a bent line. right, sum) to get all result from left and right children. Note: You can only move either down or right at any point in time. Vlookup is a very versatile function which can be combined with other functions to get some desired result, one such situation is to calculate the sum of the data ( in numbers) based on the matching values, in such situations we can combine sum function with vlookup function, the method is as follows =SUM(Vlookup(reference value, table array, index number, match). The day of your birth indicates your primary birth path. It involves two concepts in one problem. 2-02 TL623f202c A series circuit has these key features: •Current is the same in every part of the circuit. A path can start from any node and end at any node and must be downward only, i. For this problem, a path is defined as any sequence of nodes from some starting node to any node in the tree along the parent-child connections. In the following example: 1 / \ 2 4 / /\ 3 5 6 Possible paths are *123, 145, and 146, which adds to 414). Python Setup and Usage how to use Python on different platforms. SQL SELECT COUNT, SUM, AVG average. OK I'm just starting to read Hawkings new book, and am confused already. One popular way to update HPE firmware, drivers, and Smart Components is utilizing Service Pack for Proliant (SPP) and the HP SUM tool. Smart Update Manager (SUM) operates without the need for agents or other permanently installed software on the target nodes. Given a binary tree, find the maximum path sum. Welcome › Forums › General PowerShell Q&A › Powershell to find folder / file size. Leave a reply. Anti-augment the flow on this path—that is, reduce the flow in the path until the flow on some edge becomes 0. Ask Question Asked 6 years ago. 0,W3C Recommendation 23 January 2007, retrieved 16:38, 9 February 2010 (UTC). , an enforcement agent) decides how much of an inspection resource to spend along each arc in the network to capture a smuggler. AU - Matsumura, Hajime. left, sum) and pathSum(root. 12 OPTION. 解题思路:把当前要找的sum减去node节点的值作为新的sum的值,然后递归求解左指数,递归求解右指数,直到leaf节点,判断当前剩下的sum跟leaf节点的. Native American Rehab Centers!. Dijkstra's algorithm (or Dijkstra's Shortest Path First algorithm, SPF algorithm) is an algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. Given a M*N matrix A in which all the elements in a row and all the elements in a column are strictly increasing. The demo use case is shown with the help of various screenshots. I came out the idea to use prefix sum and also hashmap to track any path from root to leaf using O(1) time. The game ends when no time is left. 
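For the counting variant ("count the number of paths that sum to a given value", where a path may start and end at any node but must go downward), one snippet on this page mentions using a prefix sum together with a hashmap. A sketch of that idea follows; the dictionary stores counts of each running prefix sum, which is exactly the pitfall another snippet warns about (a prefix sum can be added to the dictionary more than once). The Node class and names are mine.

```python
from collections import defaultdict

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def count_paths_with_sum(root, target):
    """Count downward paths (any start/end node) whose values add up to target."""
    seen = defaultdict(int)
    seen[0] = 1                                    # the empty prefix

    def dfs(node, running):
        if node is None:
            return 0
        running += node.val
        count = seen[running - target]             # paths ending here with the target sum
        seen[running] += 1                         # a count, since prefix sums can repeat
        count += dfs(node.left, running) + dfs(node.right, running)
        seen[running] -= 1                         # backtrack when leaving this branch
        return count

    return dfs(root, 0)

root = Node(5, Node(4, Node(11, Node(7), Node(2))),
               Node(8, Node(13), Node(4, right=Node(1))))
print(count_paths_with_sum(root, 22))   # 2: the paths 5->4->11->2 and 4->11->7
```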
The idea is quite similar as we did for the problem all root to leaf paths (read: All root to leaf paths Here instead of printing them we are forming a number from the digits on the path and storing the numbers for addition. Only condition to satisfy is that you can change from one array to another only when the elements are matching. keep on traversing towards the bottom till the point when adding the node value to the sum will yield non negative value. If this circuit was a string of light bulbs, and one blew out, the remaining bulbs would turn off. It wasn’t until recently, post-1980s, that Dallas bloomed into a city with more dim sum restaurants than you could count on one hand, thanks to the Chinese. When you stop, you can find the sum by taking a 90-degree turn on your path to the right and stepping down one. Given a binary tree and a sum, find all root-to-leaf paths where each path’s sum equals the given sum. Given a binary tree and an integer k. In time between GCC 6. Find Complete Code at GeeksforGeeks Article: https://www. MakeLine with Path sized by Sum(Count) for each location Jeremy Cantell Aug 15, 2019 8:30 AM I am testing out the MakePoint and MakeLine functionality in the newest version of Tableau, but am a little too inexperienced to get the functionality I'm hoping for. The sum of all activity variances is 64, while the sum of variances along the critical path is 36. For example: Given the below binary tree and sum = 22, 5 / \ 4 8 / / \ 11 13 4 / \ / \ 7 2 5 1 return [ [5,4,11,2], [5,8,4,5] ]. Finding the path with maximal sum becomes relatively easy because of the tree structure. How to create a line graph with a line that represents the sum total of the data points of all other lines. We show that the integral satisfies a pathwise isometry property, analogous to the well-known Ito isometry for stochastic integrals. On occasion it is necessary to aggregate data from a number of rows into a single row, giving a list of data associated with a specific value. For example: Given the below binary tree and sum = 22,. Michael orders the ceremonial path be fenced off with barbed wire. By “path sum”, we mean the sum of all the numbers that make up a path. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. E(V) = 1 2 + (8/6) 2 + (8/6) 2 + (10/6) 2 = 7. The cost (or length or weight) of the path P is the sum of the weights of edges in the sequence. Sum over all paths lengths and you get the total cost. 43 lines (34. Kinnersleyy, Daniel McDonald y, Nathan Orlowy, Gregory J. It returns the path of a column value from root to node, with column values separated by char for each row returned by CONNECT BY condition. 500 Dollar Cash Advance No Faxing. Kutools for Excel's Insert Workbook Information is a mutifunctional tool, it can help you insert worksheet name, workbook name, workbook path or workbook path & name into the Excel cells, header or footer quickly and conveniently. Jian Lu's blog and personal site. When sleeping in your best direction makes you shift your mattress to an awkward angle, then you had better hit off the concept. For example consider the following Binary Tree. 0 Arguments. Use the sum and the fact that the taylor expansion of $(I - \alpha A)^{-1}$ to get a proof for part (a). Given a binary tree and a sum, find all root-to-leaf paths where each path's sum equals the given sum. 
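For the "Path Sum II" variant quoted above (collect every root-to-leaf path whose values add up to the given sum; the example expects [[5,4,11,2],[5,8,4,5]] for sum = 22), here is a small backtracking sketch in Python rather than the Java solution the page mentions; the class and names are mine.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def path_sum_all(root, target):
    """Return every root-to-leaf path whose node values add up to target."""
    paths, current = [], []

    def dfs(node, remaining):
        if node is None:
            return
        current.append(node.val)
        remaining -= node.val
        if node.left is None and node.right is None and remaining == 0:
            paths.append(current[:])          # record a copy of the finished path
        else:
            dfs(node.left, remaining)
            dfs(node.right, remaining)
        current.pop()                         # backtrack

    dfs(root, target)
    return paths

# The page's example: sum = 22 on the tree whose lowest 4 has children 5 and 1.
root = Node(5, Node(4, Node(11, Node(7), Node(2))),
               Node(8, Node(13), Node(4, Node(5), Node(1))))
print(path_sum_all(root, 22))   # [[5, 4, 11, 2], [5, 8, 4, 5]]
```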
For example: Given the below binary tree and sum = 22, 5 / \ 4 8 / / \ 11 13 4 / \ \ 7 2 1 return true, as there exists a root-to-leaf path…. Find Summit Racing® Classic Cam and Lifter Kits SUM-K3601 and get Free Shipping on Orders Over $99 at Summit Racing! From low-end torque to top-end horsepower--and everything in between--one of our Summit® Classic cam kits has you covered. Learn more. Revista Brasileira de Ensino de F sica, v. Problem Description See the problem description on BTS - Find the Path With Maximal Sum in Pyramid. Abstract: Weak values, obtained from weak measurements, attempt to describe the properties of a quantum system as it evolves from an initial to a final state, without practically altering this evolution. You know that this algorithm will always stop because your graph doesn't have any cycles. The path does not need to start or end at the root or a leaf. (XML Path Language (XPath) 2. For example, given the below. For problems encountered starting or stopping the SUM tool, problems with resetting and or downloading the SUM tool. Students use software with a graphics interface to model sums associated with multiple paths for photons and electrons, leading to the concepts of electron wavefunction, the propagator, bound states, and stationary states. The tree has no more than 1,000 nodes and the values are in the range -1,000,000 to 1,000,000. See autosum. 3 b where the arcs in dashed red are from the shortest path; also note that arc (s, a) is shared by both paths. The first solves the min-min problem and the second the min-sum problem, considering SRLG-disjoint paths. The probability that the project will take 130 or more days to complete is 0. In addition, it encourages the kid to associate between what the written sum actually says, and why they’re drawing a specific quantity of tallies. Let's solve another python exercise. Root to leaf path sum equal to a given number Given a binary tree and a number, return true if the tree has a root-to-leaf path such that adding up all the values along the path equals the given number. Welcome › Forums › General PowerShell Q&A › Powershell to find folder / file size. All we need to do is pick…. The cost (or length or weight) of the path P is the sum of the weights of edges in the sequence. they need not be root node and leaf node, and negative numbers can also be there in the. Max path through Left Child + Node 3. The sum budgeted for this year alone,. Synonyms for sum at Thesaurus. Minimum Path Sum. Inside Sum 41’s Dramatic Comeback How pop-punk singer Deryck Whibley fought his way back from a drinking-induced coma to save his life and band. Example: Given following two arrays:. See also positive sum game and zero sum game. Path of Exile is an online Action RPG set in the dark fantasy world of Wraeclast. In Numerology, the most important number to look at in relationships, especially romantic relationships, is your Life Path number. Created and maintained by Linux bash shell itself. When sleeping in your best direction makes you shift your mattress to an awkward angle, then you had better hit off the concept. Design an algorithm to print all paths which sum to a given value. " If one path is drawing 1 amp and the other is drawing 1 amp then the total is 2 amps at the source. It is a typical dynamic programming. (If the flow is non-zero, there exists at least one such path. The next row's choice must be in a column that is different from the previous row's column by at most one. left, sum) and pathSum(root. 
In this example, we going to update all of the support. When it comes to Bill plus Loansafe Payday Loan Hillary Clinton, all of us look into the entry for Life Path (3) to find out how things workout. Given a number α with 0 < α < 1, a network G = (V, E) and two nodes s and t in G, we consider the problem of finding two disjoint paths P 1 and P 2 from s to t such that length(P1) ≤ length(P 2) and length(P 1) + α·length(P 2) is minimized. The solution from here should be very straightforward. The sum of all nodes on that path is defined as the sum of that path. Find a path from the smallest element (ie A[0][0]) to the largest element (ie A[M-1][N-1]) such that the sum of the elements in the path is maximum. As with any vector, it is merely the sum of its components (added together like a right triangle, of course). let’s consider the following binary tree as an example and path sum value is 9. Java Solution. According to Adobe, Black Friday saw over$5. Given a binary tree and a number,Write a function to find out whether there is a path from root to leaf with sum equal to given Number. An example is the root-to-leaf path 1->2->3 which represents the number 123. sum Function (XPath) 02/21/2011; 2 minutes to read; In this article. Ampere's Law states that for any closed loop path, the sum of the length elements times the magnetic field in the direction of the length element is equal to the permeability times the electric current enclosed in the loop. In inline math mode the integral/sum/product lower and upper limits are placed right of integral symbol. The Dirac equation. For a given binary tree and a sum, I have written the following function to check whether there is a root to leaf path in that tree with the given sum. Instead of looking at just a part of a transaction or experience, the customer journey documents the full experience of being a customer. In last post Paths in Binary Search Tree we discussed and solved problem to find all the paths in binary search tree. The solution from here should be very straightforward. Just want to highlight, to get the code works, the name of worksheet in the active workbook should be the same as multi-files. It is a typical dynamic programming. LeetCode - Path Sum II (Java) Given a binary tree and a sum, find all root-to-leaf paths where each path's sum equals the given sum. I want to calculate all sum of all possible paths from root to leaf. In the simple reach-ability problem, any path is optimal, as long as it exists. Largest Sum Path We can do a similar DFS traverse. T1 - Degree sum conditions for path-factors with specified end vertices in bipartite graphs. Critical Path Method (CPM): Any calculation method that shows the Critical Path in the schedule. The path followed from one point to the other does not matter. You can create these columns in DAX by leveraging a hidden calculated column that provides a string with the complete path to reach the node in the current row of the table. Note: You can only move either down or right at any point in time. The cost (or length or weight) of the path P is the sum of the weights of edges in the sequence. An important thing to note is, root of every subtree need to return maximum path sum such that at most one child of root is involved. A binary tree and a number k are given. Sun Path Products is the world's premier manufacturer of harness container systems for sport and military parachuting. 
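The "maximum path sum" fragments on this page fit together as one recursion: each subtree reports the best downward path that uses at most one of its children (the running answer is called "max_single" in one fragment), while a global maximum is updated with the path that bends through the current node (left path + node + right path). A compact sketch of that recursion follows; it is my own illustration, and the example tree is made up.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_path_sum(root):
    """Maximum path sum in a binary tree; the path may start and end at any node."""
    best = float("-inf")

    def downward(node):
        nonlocal best
        if node is None:
            return 0
        left = max(downward(node.left), 0)            # drop negative branches
        right = max(downward(node.right), 0)
        best = max(best, node.val + left + right)     # path bending through this node
        return node.val + max(left, right)            # best path using at most one child

    downward(root)
    return best

print(max_path_sum(Node(-10, Node(9), Node(20, Node(15), Node(7)))))   # 42 (15 + 20 + 7)
```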
Given a BST and a sum, write pseudo code to determine if the tree has a root- to-leaf path such that adding up all the values along the path equals the given sum. The demo use case is shown with the help of various screenshots. Taking our example above, we start on the first row and the number 2. Its first act has plenty of laughs, while the second half. Madhukar Bagla, Nothing I say will be enough to sum up how happy I am that we went on this journey together and had each other to lean on, to argue with, and to find comfort in. The next row's choice must be in a column that is different from the previous row's column by at most one. , sum of all the nodes on the path. If I understand correctly--a big if--the path integration method, at least when applied to plain old QM, is described as (1) every possible path the particle could take is assigned an amplitude, (2) sum up (integrate over) these amplitudes for all possible paths. For example: Given the below binary tree and sum = 22, 5 / \ 4 8 / / \ 11 13 4 / \ / \ 7 2 5 1. If an if statement’s false path contains the statement int sum = 0;, where can the sum variable be used? a. Consolidate data in multiple worksheets. A "New Path" or "New Polygon" dialog will pop up. The total distance traveled is the sum of the magnitudes of the individual. 75 95 64 17 47 82. For more information, see the attached workbook. You can change the suns positions for sunrise, selected time and sunset see. Find the number of paths that sum to a given value. A (path) - Creates a new where SSE is the sum of the squares of the errors and SSD is the sum of the squares of the. in Japan, is the leading provider of high-performance software tools for engineering, science, and mathematics. Madame Sum is a project that started as a pop-up in January 2018. The Wiener index of a vertex is the sum of the shortest path distances between v and all other vertices. Root to leaf path sum equal to a given number. index; modules |; next |; previous |; Python »; en 3. Algorithm: 1. Largest "Path-Sum" in a triangle of numbers -- Contest Problem #1 - Duration: 10:57. Create an m x n matrix that will keep the record of the maximum sum obtained at each cell in the grid. getValue() + findBestPath(path. Given a binary tree, where every node value is a number. This path holds great spiritual significance. What Does SUM do? SUM is a manager program to help streamline the use of multiple SkyProc patchers. It is an easy level tree algorithm. Native American Rehab Centers!. The sum of the kinetic energy of the system plus the gravitational potential energy of the system is a positive number. It is a typical dynamic programming. The path does not need to start or end at the root or a leaf, but it must go downwards (traveling only from parent nodes to child nodes). Path Sum: Given a binary tree and a sum, determine if the tree has a root-to-leaf path such that adding up all the values along the path equals the given sum. And this is what the Russians are aiming to achieve: to get Ankara to talk to Damascus, establish contacts and work together. The tree has no more than 1,000 nodes and the values are in the range -1,000,000 to 1,000,000. Binary Tree Maximum Path Sum II 475 Question. Input: Two Dimensional array Output: Print all the paths. 
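This page also touches the "sum root-to-leaf numbers" variant, where the digits along each root-to-leaf path are read as a decimal number (1→2→3 reads as 123) and the answer is the total over all leaves (123 + 145 + 146 = 414 in the small example quoted earlier). A minimal sketch, with my own Node class and names:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def sum_root_to_leaf_numbers(node, prefix=0):
    """Treat each root-to-leaf path as a decimal number and return their total."""
    if node is None:
        return 0
    prefix = prefix * 10 + node.val                # extend the number built so far
    if node.left is None and node.right is None:   # leaf: the number is complete
        return prefix
    return (sum_root_to_leaf_numbers(node.left, prefix) +
            sum_root_to_leaf_numbers(node.right, prefix))

# Paths 1->2->3, 1->4->5 and 1->4->6 give 123 + 145 + 146 = 414.
root = Node(1, Node(2, Node(3)), Node(4, Node(5), Node(6)))
print(sum_root_to_leaf_numbers(root))   # 414
```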
If I understand correctly--a big if--the path integration method, at least when applied to plain old QM, is described as (1) every possible path the particle could take is assigned an amplitude, (2) sum up (integrate over) these amplitudes for all possible paths. Early Access puts eBooks and videos into your hands whilst they’re still being written, so you don’t have to wait to take advantage of new tech and new ideas. The demo use case is shown with the help of various screenshots. Excel Experts, Formula for file path This is a quirky one. Read this short blog to get a general understanding on what SUM is and when to use it. It is helpful when trying to simplify complex systems. there is a better way man, just start from the top and add the numbers together. Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path. Upgrade path from ECC 6. There are many measures for path optimality, depending on the problem. Used by mathematical chemists (vertices = atoms, edges = bonds). 0 SP02, SAP PI 7.
2020-01-24 20:18:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41079065203666687, "perplexity": 813.9542283144242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250625097.75/warc/CC-MAIN-20200124191133-20200124220133-00022.warc.gz"}
http://reader.differentialist.info/page/2
March 27, 2014
"Tits of the World" Poster Process. (via notrare)

March 26, 2014
Shrout & Fleiss's ICC in mixed model form
Shrout & Fleiss give guidelines for calculating intraclass correlation coefficients for estimating interrater reliability and agreement. The original paper shows how to calculate them using ANOVA, but they can also be estimated in a mixed model framework. Noting these down so I remember them (a minimal computational sketch appears at the end of this page):
1. "Repeatability": $y = a + e$, $ICC(1, k) = \frac{\sigma_a^2}{\sigma_a^2 + \frac{\sigma_e^2}{k}}$
2. "Agreement": $y = a + r + \epsilon$, $ICC(2, k) = \frac{\sigma_a^2}{\sigma_a^2 + \frac{\sigma_r^2 + \sigma_\epsilon^2}{k}}$
3. "Consistency": $y = a + r + \epsilon$, $ICC(3, k) = \frac{\sigma_a^2}{\sigma_a^2 + \frac{\sigma_\epsilon^2}{k}}$

March 7, 2014
Good book. (Source: bbww, via unfinished-photography)

February 24, 2014
via Fat Birds

February 17, 2014

February 13, 2014
"Every speciality changes its classification of illnesses every few years, as we learn more about illnesses, but only psychiatry gets abuse for doing so."

February 6, 2014
Having done some research recently on collies, I like this. (Source: scancity)

February 5, 2014
"Think of overfitting as memorizing as opposed to learning." — James Faghmous, New to Machine Learning? Avoid these three mistakes

February 1, 2014
Feasibility and Uncertainty in Behavior Genetics for the Nonhuman Primate
M. J. Adams, International Journal of Primatology, February 2014, Volume 35, Issue 1, pp 156-168, doi: 10.1007/s10764-013-9722-8
Nonhuman primates are good species to study for understanding the genetic underpinnings of behavior, especially because their behaviors are so similar to ours. One reason that primates are good to study is that they are individually identifiable and we tend to know which individuals are related (or we can discover this using genotyping). However, studying primates offers several challenges. The main one is that when studying nonhuman primates the sample sizes (dozens or at most hundreds) are much smaller than what is typical in studies of humans (hundreds to tens of thousands). This small sample size diminishes statistical power and limits the types of questions that can be investigated. I conducted some simulations to see just how gloomy the prospects are. I found that statistical power can be increased slightly by multiple measurements of each individual, something that is feasible since primates are long-lived.
Abstract: The analysis of phenotypic covariances among genetically related individuals is the basis for estimations of genetic and phenotypic effects on phenotypes. Beyond heritability, there are several other estimates that can be made with behavior genetic models of interest to primatologists. Some of these estimates are feasible with primate samples because they take advantage of the types of relatives available to compare in primate species and because most behaviors are expressed orders of magnitude more often and in a greater variety of contexts than morphological or life-history traits. The hypotheses that can be tested with these estimates are contrasted with hypotheses that will be difficult to achieve in primates because of sample size limitations. Feasible comparisons include the proportion of variance from interaction effects, the variation of genetic effects across environments, and the genetics of growth and development. Simulation shows that uncertainty of genetic parameters can be reduced by sampling each individual more than once.
Because sample sizes are likely to remain relatively small in most primate behavior genetics, expressing uncertainty in parameter estimates is needed to move our inferences forward. January 25, 2014 "Practically no experimental work has been done upon individual differences and family resemblances in animal behavior. In most cases the behaviorist has been content to study the mass reaction of a group of animals to external stimuli, and in the main, has not attempted to treat the variability of his group because of the relatively small number of animals tested." — Halsey Bagg, 1920, Archives of Genetics Mono. Vol 43, p.1 (via Rosalind Arden) 2:46pm  |   URL: http://tmblr.co/Z23PQy15O3-iG Filed under: genetics variation January 24, 2014 Jose Vazquez The tower and the park The Arts Tower and Western Bank Library at the University of Sheffield. January 23, 2014 "Hurrah then for confusion and mystery in medicine." A mesmeric physician taking advantage of his female patient. Colour lithograph, 1852 Wellcome Images L0034922 January 21, 2014 "…the shy-animal mental model of experimentation. The effect is there; you just need to create the right circumstances to coax it out of its hiding place." January 18, 2014 Never Trust Passerine Nomenclature by Albertonykus via @hylopsar. January 15, 2014 March 25, 1984 — see The Complete Peanuts 1983-1986
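Referring back to the three ICC forms in the March 26 post above, the following Python sketch simply evaluates those formulas from already-estimated variance components. The argument names and example numbers are my own illustrative assumptions; in practice the components would come from a fitted mixed model.

    def icc_1k(sigma2_a, sigma2_e, k):
        """ICC(1, k): 'repeatability' form, y = a + e."""
        return sigma2_a / (sigma2_a + sigma2_e / k)

    def icc_2k(sigma2_a, sigma2_r, sigma2_eps, k):
        """ICC(2, k): 'agreement' form, y = a + r + eps."""
        return sigma2_a / (sigma2_a + (sigma2_r + sigma2_eps) / k)

    def icc_3k(sigma2_a, sigma2_eps, k):
        """ICC(3, k): 'consistency' form, y = a + r + eps (rater variance excluded)."""
        return sigma2_a / (sigma2_a + sigma2_eps / k)

    # e.g. variance components from a fitted mixed model, k = 4 raters
    print(icc_2k(sigma2_a=2.0, sigma2_r=0.3, sigma2_eps=1.0, k=4))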
2014-07-22 19:20:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33882468938827515, "perplexity": 3742.2785026296783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997862121.60/warc/CC-MAIN-20140722025742-00026-ip-10-33-131-23.ec2.internal.warc.gz"}
http://statml.cs.cmu.edu/blog/2017/03/22/yining.html
#### On the Power of Truncated SVD for General High-rank Matrix Estimation Problems

Mar 22 (Wednesday) at 2pm, GHC-8102

Speaker: Yining Wang

Abstract: We show that given an estimate $\widehat{A}$ that is close to a general high-rank positive semi-definite (PSD) matrix $A$ in spectral norm, the simple truncated SVD of $\widehat{A}$ produces a multiplicative approximation of $A$ in Frobenius norm. This observation leads to many interesting results on general high-rank matrix estimation problems, including high-rank matrix completion, high-rank matrix denoising and low-rank estimation of high-dimensional covariance.

Link: https://arxiv.org/abs/1702.06861
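As a quick illustration of the statement (my own numpy sketch, not code from the talk or paper), the following forms a spectral-norm-close estimate of a high-rank PSD matrix, truncates its SVD at rank k, and compares the Frobenius error against the best rank-k approximation of the true matrix; the matrix size, rank and noise level are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 10

    # True PSD matrix with slowly decaying spectrum (genuinely high-rank)
    B = rng.standard_normal((n, n))
    A = B @ B.T / n

    # A_hat: A perturbed by a small symmetric noise matrix (close in spectral norm)
    E = rng.standard_normal((n, n)) * 0.01
    A_hat = A + (E + E.T) / 2

    # Rank-k truncation of the estimate via its SVD
    U, s, Vt = np.linalg.svd(A_hat)
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Compare to the best rank-k approximation of A itself
    Ua, sa, Vta = np.linalg.svd(A)
    best_k = Ua[:, :k] @ np.diag(sa[:k]) @ Vta[:k, :]
    print(np.linalg.norm(A - A_k, 'fro'), np.linalg.norm(A - best_k, 'fro'))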
2019-02-15 18:54:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640875220298767, "perplexity": 1444.5208076871368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00260.warc.gz"}
http://clay6.com/qa/27286/equation-of-conic-defined-by-9yy-4x-0-qquad-y-0-2-find-length-of-latus-rect
Equation of conic defined by $9yy'+4x=0, \qquad y(0)=2$. Find the length of the latus rectum. $(a)\;\frac{7}{3}\\(b)\;\frac{2}{3}\\ (c)\;2 \\ (d)\;\frac{8}{3}$
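The page does not show the working; a short derivation (standard separation of variables, using only what the problem states) gives option (d):
$9y\,\frac{dy}{dx}+4x=0 \;\Rightarrow\; 9y\,dy=-4x\,dx \;\Rightarrow\; \frac{9y^2}{2}=-2x^2+C.$
With $y(0)=2$ we get $C=18$, so $\frac{x^2}{9}+\frac{y^2}{4}=1$, an ellipse with $a=3$, $b=2$. The latus rectum therefore has length $\frac{2b^2}{a}=\frac{8}{3}$.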
2018-01-17 07:28:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7631518840789795, "perplexity": 1138.1097211892893}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886830.8/warc/CC-MAIN-20180117063030-20180117083030-00325.warc.gz"}
http://geekwentfreak.com/tags/python/
Python - Geek went Freak!

# Python

Datasets, or toy datasets as sklearn calls them, reside in the sklearn.datasets package. A dataset can be loaded by using a sklearn.datasets.load_*() function. In this post, let us consider the iris dataset, which can be loaded using sklearn.datasets.load_iris(). By default sklearn provides datasets as sklearn.datasets.base.Bunch.

    from sklearn.datasets import load_iris
    irisData = load_iris()
    print(type(irisData))

The Bunch structure is convenient since it holds data, target, feature_names and target_names. The data and target fields are both numpy.ndarray containing independent and dependent variables respectively.

    from sklearn.datasets import load_iris
    irisData = load_iris()
    print(type(irisData))
    print(type(irisData.data), type(irisData.target))
    print(irisData.feature_names)
    print(irisData.target_names)
    print(irisData.data)
    print(irisData.target)

sklearn datasets' load methods can also provide the features and targets directly as numpy.ndarray by using the return_X_y argument.

    from sklearn.datasets import load_iris
    irisData = load_iris(return_X_y=True)
    print(irisData[0])
    print(irisData[1])

statsmodels comes with some sample datasets built in. In this tutorial, we are going to learn how to use datasets in statsmodels. The built-in datasets are available in the package statsmodels.api.datasets; let's explore statsmodels.api.datasets.fair. One can load data from the datasets either as numpy.recarray or pandas.core.frame.DataFrame: statsmodels.api.datasets.fair.load().data provides data as numpy.recarray, while statsmodels.api.datasets.fair.load_pandas().data provides data as pandas.core.frame.DataFrame. The following code will display the dataset as a table in an IPython notebook (the assignment line was missing from the extracted post and is reconstructed here).

    import statsmodels.api as sm
    dta = sm.datasets.fair.load_pandas().data
    dta

# Negative values in numpy's randn

If you are used to the rand function, which generates neat uniformly distributed random numbers in the range of [0, 1), you will be surprised when you use randn for the first time, for two reasons:

1. randn generates negative numbers
2. randn generates numbers greater than 1 and lesser than -1

## Examples

### Negative

    import numpy as np
    lRandom = np.random.randn(10)
    print(lRandom[lRandom < 0])

The above code produced the following output during a sample run:

    [-0.52004631 -0.4080691 -0.04164258 -0.46942423 -0.84344794 -0.01001501]

### Greater than 2

    lRandom = np.random.randn(500)
    lRandom[lRandom > 2]

The above code produced the following output during a sample run:

    [ 2.09666448 2.29351194 2.16025808 2.78635893 2.3467666 2.54232853 2.35466425 2.26961216 2.62167745 2.0261606 2.00743211]

## Reason

This is because randn, unlike rand, generates random numbers backed by a normal distribution with mean = 0 and variance = 1. If you plot the histogram of the samples from randn, it becomes quite obvious:

    import numpy as np
    import matplotlib.pyplot as plt
    lRandom = np.random.randn(5000)
    lHist, lBin = np.histogram(lRandom)
    plot = plt.plot(lBin[:-1], lHist, 'r--', linewidth=1)
    plt.show()

The above code produces a histogram plot centred on zero.

# Binary arithmetic using python

## Convert unsigned integer to binary string

    bin(10)

'0b1010'

## Convert binary string to unsigned integer

    int('1010', 2)

10

It also works if you try the binary string with the prefix '0b'. For example,

    int('0b1010', 2)

## Convert signed integer to binary string

It is a little bit difficult to deal with negative numbers. Trying to convert them the same way we did with unsigned numbers doesn't work as expected:

    bin(-10)

'-0b1010'

You would have expected a two's complement number as the output, but it just prints the binary string of the positive number with a '-' prefix. This problem can be fixed by specifying the length of the bits you want as output.

    bin(-10 & 0xff)

'0b11110110'

If you want the length to be dynamic,

    int("1" * 8, 2)

## Convert signed binary string to signed integer

I am not sure if there is a direct way to do this in python. If you find any please let me know! I have written a small function to do it,
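The post breaks off before showing that function. A plausible reconstruction (my own sketch, not the author's original code) interprets the string as a two's complement value of its own bit length:

    def to_signed_int(bits):
        """Interpret a binary string (e.g. '11110110') as a two's complement signed integer."""
        n = int(bits, 2)
        if bits[0] == '1':            # sign bit set -> negative value
            n -= 1 << len(bits)
        return n

    print(to_signed_int('11110110'))  # -10
    print(to_signed_int('01010'))     # 10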
2019-05-24 04:47:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19754354655742645, "perplexity": 2797.30411211873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257514.68/warc/CC-MAIN-20190524044320-20190524070320-00367.warc.gz"}
http://www.theinfolist.com/html/ALL/s/Luminance.html
TheInfoList

Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle. Brightness is the term for the ''subjective'' impression of the ''objective'' luminance measurement standard.

The SI unit for luminance is the candela per square metre (cd/m2), as defined by the International System of Units (SI, from the French ''Système international d'unités''), the standard for the modern metric system. A non-SI term for the same unit is the nit. The unit in the centimetre–gram–second system of units (CGS), which predated the SI system, is the stilb, which is equal to one candela per square centimetre or 10 kcd/m2.

# Description

Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. Luminance levels indicate how much luminous power could be detected by the human eye looking at a particular surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil.

Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and . The sun has a luminance of about at noon.

Luminance is invariant in geometric optics. This means that for an ideal optical system, the luminance at the output is the same as the input luminance. For real, passive optical systems, the output luminance is ''at most'' equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the luminance comes out to be the same assuming there is no loss at the lens. The image can never be "brighter" than the source.

# Health effects

Retinal damage can occur when the eye is exposed to high luminance. Damage can occur because of local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths.

# Luminance meter

A luminance meter is a device used in photometry that can measure the luminance in a particular direction and with a particular solid angle. The simplest devices measure the luminance in a single direction, while imaging luminance meters measure luminance in a way similar to the way a digital camera records color images.

# Mathematical definition

The luminance of a specified point of a light source, in a specified direction, is defined by the derivative
$L_\mathrm{v} = \frac{\mathrm{d}^2\Phi_\mathrm{v}}{\mathrm{d}\Sigma\,\mathrm{d}\Omega_\Sigma\cos\theta_\Sigma}$
where
* $L_\mathrm{v}$ is the luminance (cd/m2),
* $\mathrm{d}^2\Phi_\mathrm{v}$ is the luminous flux (lm) leaving the area $\mathrm{d}\Sigma$ in any direction contained inside the solid angle $\mathrm{d}\Omega_\Sigma$,
* $\mathrm{d}\Sigma$ is an infinitesimal area (m2) of the source containing the specified point,
* $\mathrm{d}\Omega_\Sigma$ is an infinitesimal solid angle (sr) containing the specified direction,
* $\theta_\Sigma$ is the angle between the normal $n_\Sigma$ to the surface $\mathrm{d}\Sigma$ and the specified direction.

If light travels through a lossless medium, the luminance does not change along a given light ray. As the ray crosses an arbitrary surface ''S'', the luminance is given by
$L_\mathrm{v} = \frac{\mathrm{d}^2\Phi_\mathrm{v}}{\mathrm{d}S\,\mathrm{d}\Omega_S\cos\theta_S}$
where
* $\mathrm{d}S$ is the infinitesimal area of ''S'' seen from the source inside the solid angle $\mathrm{d}\Omega_\Sigma$,
* $\mathrm{d}\Omega_S$ is the infinitesimal solid angle subtended by $\mathrm{d}\Sigma$ as seen from $\mathrm{d}S$,
* $\theta_S$ is the angle between the normal $n_S$ to $\mathrm{d}S$ and the direction of the light.

More generally, the luminance along a light ray can be defined as
$L_\mathrm{v} = n^2\frac{\mathrm{d}\Phi_\mathrm{v}}{\mathrm{d}G}$
where
* $\mathrm{d}G$ is the etendue of an infinitesimally narrow beam containing the specified ray,
* $\mathrm{d}\Phi_\mathrm{v}$ is the luminous flux carried by this beam,
* $n$ is the index of refraction of the medium.

# Relation to Illuminance

The luminance of a reflecting surface is related to the illuminance it receives:
$\int_{\Omega_\Sigma} L_\mathrm{v}\,\mathrm{d}\Omega_\Sigma\cos\theta_\Sigma = M_\mathrm{v} = E_\mathrm{v} R$
where the integral covers all the directions of emission $\Omega_\Sigma$, and
* $M_\mathrm{v}$ is the surface's luminous exitance,
* $E_\mathrm{v}$ is the received illuminance, and
* $R$ is the reflectance.
In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply
$L_\mathrm{v} = E_\mathrm{v} R / \pi$

# Units

A variety of units have been used for luminance, besides the candela per square metre. One candela per square metre is equal to:
* 10^-4 stilbs (the CGS unit of luminance)
* π apostilbs
* π×10^-4 lamberts
* 0.292 foot-lamberts

# See also

* Relative luminance
* Orders of magnitude (luminance)
* Diffuse reflection
* Etendue
* Lambertian reflectance
* Lightness (color)
* Luma, the representation of luminance in a video monitor
* Lumen (unit)
* Radiance, the radiometric quantity analogous to luminance
* Brightness, the subjective impression of luminance
* Glare (vision)
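As a small numerical companion to the Lambertian relation $L_\mathrm{v} = E_\mathrm{v} R/\pi$ and the unit conversions above, here is a Python sketch; the illuminance and reflectance values are arbitrary assumptions, not figures from the article.

    import math

    def lambertian_luminance(illuminance_lux, reflectance):
        """Luminance (cd/m^2) of a perfectly diffuse (Lambertian) reflector: L = E * R / pi."""
        return illuminance_lux * reflectance / math.pi

    L = lambertian_luminance(illuminance_lux=500.0, reflectance=0.8)  # assumed values
    print(L)          # cd/m^2
    print(L * 1e-4)   # stilbs
    print(L * 0.292)  # foot-lamberts, per the conversion in the Units list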
2022-09-25 01:55:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6250610947608948, "perplexity": 1649.366301720738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00200.warc.gz"}
http://docs.jtl-connector.de/en/latest/faq/technical.html
# Technical

What is logged and where can I find it?
Read Server side debugging for detailed information.

What is the config.json file located in the config folder?
This file can be used to define configurations you want to access during the sync process. They can be accessed via Application()->getConfig(). A predefined key is developer_logging which, if set to true, has the effect that messages with a DEBUG level are also logged.

How can I write an abstract mapper?
By using the jtl\Connector\Type\DataType and jtl\Connector\Model\DataModel classes, an abstract mapper can be implemented. An example of how to get the type of the called mapper can be found in the example connector's jtl\Connector\Example\Mapper\DataMapper class.

Exceptions have to be caught from within the Endpoint. To make such an error visible to JTL-Wawi, you have to set a jtl\Connector\Core\Rpc\Error on the jtl\Connector\Result\Action object which is returned.
2019-02-22 10:47:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28219303488731384, "perplexity": 2482.0503436159474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247515149.92/warc/CC-MAIN-20190222094419-20190222120419-00023.warc.gz"}
https://www.vedantu.com/question-answer/given-cos-12o+cos-84o+cos-156o+cos-132odfrac1a-class-11-maths-cbse-5ee4933fbe1b52452d364ba0
# Given $\cos {{12}^{o}}+\cos {{84}^{o}}+\cos {{156}^{o}}+\cos {{132}^{o}}=-\dfrac{1}{a}$. Find the value of a.

Hint: We should know the common trigonometric identities and numeric values to solve this problem. Since there are four cosine terms, we group them in pairs and apply the sum-to-product identity to each pair:
$\cos A+\cos B=2\cos \left( \dfrac{A+B}{2} \right)\cos \left( \dfrac{A-B}{2} \right)$
Further, we should also know the values of the following trigonometric angles:
$\sin {{18}^{o}}=\dfrac{\sqrt{5}-1}{4}, \qquad \cos {{36}^{o}}=\dfrac{\sqrt{5}+1}{4}$

Now, to begin, we need to decide which terms to club together to solve the problem efficiently. We could club ($\cos {{12}^{o}}$ and $\cos {{84}^{o}}$) and ($\cos {{156}^{o}}$ and $\cos {{132}^{o}}$), or any other pairs; however, solving two randomly chosen pairs may not lead to desirable results. In this case, we group ($\cos {{12}^{o}}$ and $\cos {{132}^{o}}$) and ($\cos {{84}^{o}}$ and $\cos {{156}^{o}}$) together. The reason is that after solving, we get the angles in familiar terms, as will become clear below.

$(\cos {{12}^{o}}+\cos {{132}^{o}})+(\cos {{84}^{o}}+\cos {{156}^{o}})$
$=2\cos \dfrac{{{12}^{o}}+{{132}^{o}}}{2}\cos \dfrac{{{12}^{o}}-{{132}^{o}}}{2}+2\cos \dfrac{{{84}^{o}}+{{156}^{o}}}{2}\cos \dfrac{{{84}^{o}}-{{156}^{o}}}{2}$
$=2\cos {{72}^{o}}\cos (-{{60}^{o}})+2\cos {{120}^{o}}\cos (-{{36}^{o}})$
Now, $\cos (-x)=\cos (x)$, so we have
$=2\cos {{72}^{o}}\cos {{60}^{o}}+2\cos {{120}^{o}}\cos {{36}^{o}}$ -- (A)
Further, $\sin ({{90}^{o}}-x)=\cos (x)$, so $\cos {{72}^{o}}=\sin {{(90-72)}^{o}}=\sin {{18}^{o}}$. Substituting this value in (A), we get
$=2\sin {{18}^{o}}\cos {{60}^{o}}+2\cos {{120}^{o}}\cos {{36}^{o}}$
Thus, we were able to get familiar terms by clubbing ($\cos {{12}^{o}}$ and $\cos {{132}^{o}}$) and ($\cos {{84}^{o}}$ and $\cos {{156}^{o}}$) together, since we know the numeric values of all these sines and cosines. Now, using the values of $\sin {{18}^{o}}$, $\cos {{36}^{o}}$, $\cos {{60}^{o}}$ and $\cos {{120}^{o}}$:
$=\left( 2\times \dfrac{\sqrt{5}-1}{4}\times \dfrac{1}{2} \right)+\left( 2\times \dfrac{-1}{2}\times \dfrac{\sqrt{5}+1}{4} \right)$
$=\left( \dfrac{\sqrt{5}-1}{4} \right)-\left( \dfrac{\sqrt{5}+1}{4} \right)$
$=-\dfrac{1}{2}$
According to the question, $\cos {{12}^{o}}+\cos {{84}^{o}}+\cos {{156}^{o}}+\cos {{132}^{o}}=-\dfrac{1}{a}$. Thus $-\dfrac{1}{2}=-\dfrac{1}{a}$, and so a = 2.

Hence, the final answer is a = 2.

Note: While solving trigonometric expressions, it is always important to know the numeric values of sine and cosine of the following angles: ${{0}^{o}},{{18}^{o}},{{30}^{o}},{{36}^{o}},{{45}^{o}},{{60}^{o}},{{90}^{o}}$. We could also group the terms differently; although the final answer would be the same, it would be more difficult to arrive at, since we would have to manipulate the terms more.
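A quick numerical check of the result (my own verification, not part of the original solution), in Python:

    import math

    s = sum(math.cos(math.radians(d)) for d in (12, 84, 156, 132))
    print(round(s, 10))  # -0.5, consistent with -1/a for a = 2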
2023-03-31 06:26:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.715386152267456, "perplexity": 2427.890226991575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00239.warc.gz"}
https://tex.stackexchange.com/questions/62259/change-reference-position-for-footnote-link?noredirect=1
# Change reference position for footnote link With the recent package footnotebackref it is possible to backrefence footnotes, one can then jump directly from the main text to the footnote and back. The link in the footnote that points back to the main text can be set to symbols inside the footnote text or, like I prefer it, directly to the footnote number in the footnote. The hyperref package links the footnote number in the main text to the text and not the number of the footnote. When the document is viewed at a higher zoom level, so that the page width does not fit in the window of pdf viewer anymore, this causes an unwanted result. After clicking on the footnote number inside the main text the footnote number and therefore backreference is hardly or not at all displayed. So I would like know if it is possible to change the position for the hyperref reference for footnotes directly to the footnote number. ## Visual Example Here you can see what I mean, in case I was unable to explain right or anybody does want to read my description. ## MWE \documentclass{article} \usepackage{footnotebackref} \textheight=3cm \begin{document} Text\footnote{The first footnote.} Text\\ \end{document} • It might be a problem with Sumatra? I cannot verify this behaviour for Okular (v0.13.2) or Adobe Reader (v9.5.1). – clemens Jul 5 '12 at 15:41 • @cgnieder I can confirm this issue for Adobe Reader (10.1.3), Adobe Acrobat (9.5.1) and Foxit Reader (5.0.1.0523). As I mentioned the zoom settings have to be high enough so that page does not fully fit into the window. Might that be the reason? I wonder why Acrobat (9.5.1) and Reader (9.5.1) would differ. – maetra Jul 6 '12 at 7:27 • Curious... maybe the linux version (that I have) and the windows version of the Adobe programs differ? (I just tested your MWE again to make sure I wasn't mistaken: with both my pdf readers the hyperlink jumps in a way that the “1” is perfectly visible in the upper left corner (document zoomed to 400%). – clemens Jul 6 '12 at 10:28 • I can confirm the issue for Adobe Reader 10.1.3 and Windows XP. – Stephen Aug 1 '12 at 18:06 The trick is to move code used by hyperref for setting the footnote anchor/target as part of the footnote text (so after the footnote mark has been 'drawn') into the code for defining the 'footnote' part of the footnote mark! (The footnote mark is used twice for every footnote, once in the text and once in the footnote itself; I am referring to the latter here.) Default LaTeX makes no distinction between the two footnote marks, which may be why hyperref originally does things this way around. footnotebackref does make the distinction in some sense, but not in a way we can cleanly patch, so we undo their changes and patch them back in using our method (I copied the code for this from the current version of footnotebackref, so this may get out of sync). The following code should work whether or not footnotebackref is used (but if it is used, it must be loaded before this patch). hyperref must also be loaded before this patch. Original definitions of \@makefnmark are preserved except when footnotebackref is used, which clobbers them anyway. \documentclass{article} % comment/uncomment various combinations of these for testing %\PassOptionsToPackage{symbol=$\wedge$}{footnotebackref} \usepackage{footnotebackref} \usepackage{hyperref} \makeatletter % distinguish between footnote marks in the text and in the footnotes themselves! 
\let\hyperfoot@oldmakefntext\@makefntext \renewcommand*{\@makefntext}{% \let\hyperfoot@base@makefnmark\@makefnmark \let\@makefnmark\hyperfoot@infootnote@makefnmark \hyperfoot@oldmakefntext} % set the hypertarget (through \hyper@@anchor) before placing \@thefnmark \newcommand*{\hyperfoot@infootnote@makefnmark}{% \fn@settarget \hyperfoot@base@makefnmark } \let\hyperfoot@oldmakefntext\BHFN@OldMakefntext% undo their patch \renewcommand\hyperfoot@infootnote@makefnmark{%... and merge with ours \fn@settarget \mbox{\textsuperscript{\normalfont% \hyperref[\BackrefFootnoteTag]{\@thefnmark}}}\,}% \fi }{} % reset hyperref's overriding of \@footnotetext, which basically just adds the \hyper@@anchor as part of the footnote text \long\def\@footnotetext#1{% \H@@footnotetext{#1}} % and put the hyperref magic into \fn@settarget instead % note that we set the anchor text to be \relax. previously, it was sometimes (=when allowing links to be nested) the footnote text. I don't actually know what the effect is of having anchor text... but apparently no drivers support nesting at the moment anyway. \newcommand\fn@settarget{% a form of this was previously part of hyperref's \@footnotetext %\message{^^J^^J**setting target!**^^J^^J}% we should only see as many 'setting target' messages in the log as there are footnotes! \ifHy@nesting \expandafter\ltx@firstoftwo \else \expandafter\ltx@secondoftwo \fi {% \expandafter\hyper@@anchor\expandafter{% \Hy@footnote@currentHref }{\relax}% }{% \expandafter\hyper@@anchor\expandafter{% \Hy@footnote@currentHref }{\relax}% }% \let\@currentHref\Hy@footnote@currentHref \let\@currentlabelname\@empty }% } % following makes testing slightly easier on my monitor \hypersetup{pdfstartview={XYZ null null 4}} \textheight=3cm \begin{document} Text\footnote{The first footnote.} Text. Let's have more than one footnote!\footnote{For better testing!} \end{document} • 2018 4Q TexShop (MacTeX) TexLive 2016. Back-referenced footnote scenario. cyberSingularity code (still) works like a champ. Original link to footnote scrolled one text-line too low. The "fix" arrived via compiling with xelatex (at least twice). See Amphipolis and Kotthoff. – Saphar Koshet Oct 13 '18 at 14:57
2019-12-09 10:35:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8433504104614258, "perplexity": 2549.7472385440456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518627.72/warc/CC-MAIN-20191209093227-20191209121227-00529.warc.gz"}
https://zbmath.org/?q=an:1143.54019
# zbMATH — the first resource for mathematics

Existence of fixed point for the nonexpansive mapping of intuitionistic fuzzy metric spaces. (English) Zbl 1143.54019
The existence of a fixed point of an intuitionistic fuzzy nonexpansive mapping is proved. The theorem generalizes a result of {\it V. Gregori} and {\it A. Sapena} [Fuzzy Sets Syst. 125, No. 2, 245--252 (2002; Zbl 0995.54046)]. Also, the Edelstein periodic point theorem for locally contractive mappings is extended from metric spaces to intuitionistic fuzzy metric spaces. For related articles the reader is referred to [{\it V. Gregori, S. Romaguera} and {\it P. Veeramani}, Chaos Solitons Fractals 28, No. 4, 902--905 (2006; Zbl 1096.54003)], [{\it D. Miheţ}, Fixed Point Theory Appl. 2007, Article ID 87471, 5 p. (2007; Zbl 1152.54008)], [{\it L. B. Ćirić, S. N. Ješić} and {\it J. S. Ume}, Chaos Solitons Fractals 37, No. 3, 781--791 (2008; Zbl 1137.54326)].

##### MSC:
54H25 Fixed-point and coincidence theorems in topological spaces
03E72 Fuzzy set theory
54A40 Fuzzy topology

Full Text:

##### References:
[1] Atanassov K. Intuitionistic fuzzy sets. In: Sgurev V, editor. VII ITKR's Session, Sofia June, 1983 Control Sci. and Tech. Library, Bulg. Academy of Sciences, 1984.
[2] Atanassov, K.: Intuitionistic fuzzy sets. Fuzzy set syst 20, 87-96 (1986) · Zbl 0631.03040
[3] Atanassov, K.: New operations defined over the intuitionistic fuzzy sets. Fuzzy set syst 61, 137-142 (1994) · Zbl 0824.04004
[4] El Naschie, M. S.: On the uncertainty of Cantorian geometry and two-slit experiment. Chaos, solitons & fractals 9, 517-529 (1998) · Zbl 0935.81009
[5] El Naschie, M. S.: On the verifications of heterotic string theory and $\varepsilon^{(\infty)}$ theory. Chaos, solitons & fractals 11, 397-407 (2000)
[6] El Naschie, M. S.: On a fuzzy Kähler-like manifold which is consistent with the two slit experiment. Int J nonlinear sci numer simulat 6, 517-529 (2005)
[7] El Naschie, M. S.: A review of E-infinity theory and the mass spectrum of high energy particle physics.
Chaos, solitons & fractals 19, 209-236 (2004) · Zbl 1071.81501 [8] George, A.; Veeramani, P.: On some results in fuzzy metric spaces. Fuzzy set syst 64, 395-399 (1994) · Zbl 0843.54014 [9] Ghaemi MB, Razani A. Fixed and periodic points in the probabilistic normed and metric spaces. Chaos, Solitons & Fractals, in press, doi:10.1016/j.chaos.2005.08.192. [10] Park, J. H.: Intuitionistc fuzzy metric spaces. Chaos, solitons & fractals 22, 1039-1046 (2004) · Zbl 1060.54010 [11] Razani A. A contraction theorem in fuzzy metric spaces. Fixed Point Theory Applications 2005;3:257 -- 65. · Zbl 1102.54005 [12] Razani A. A fixed point theorem in the Menger probabilistic metric space. New Zealand J Math, to appear. · Zbl 1130.47061 [13] Rodrígues-López, J.; Romaguera, S.: The Hausdorff fuzzy metric on compact sets. Fuzzy set syst 147, 273-283 (2004) · Zbl 1069.54009 [14] Sadati, R.; Park, J. H.: On the intuitionistc fuzzy topological spaces. Chaos, solitons & fractals 27, 331-344 (2006) [15] Schweizer, B.; Sklar, A.: Statistical metric spaces. Pacific J math 10, 314-334 (1960) · Zbl 0091.29801
2016-05-02 23:27:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838494062423706, "perplexity": 11427.658155494546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117914.56/warc/CC-MAIN-20160428161517-00051-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/pretax-cost-savings-475091.htm
Pretax Cost Savings

A proposed cost-saving device has an installed cost of $640,000. The device will be used in a five-year project but is classified as three-year MACRS property for tax purposes. The required initial net working capital investment is $55,000, the marginal tax rate is 35 percent, and the project discount rate is 12 percent. The device has an estimated Year 5 salvage value of $60,000. What level of pretax cost savings do we require for this project to be profitable? Use the MACRS schedule. (Do not round intermediate calculations and round your final answer to 2 decimal places, e.g., 32.16.)

1 Approved Answer (Ankita G)

Tax rate: 35%

Calculation of annual depreciation (three-year MACRS):

Year     Dep. rate   Depreciation (cost x rate)
Year 1   33.33%      $213,312
Year 2   44.45%      $284,480
Year 3   14.81%      $94,784
Year 4   7.41%       $47,424
Total                $640,000

Calculation of after-tax salvage value:
Cost of machine: $640,000
Accumulated depreciation: $640,000
WDV (cost less accumulated depreciation)...
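The posted answer is cut off before the final figure. As a hedged sketch of the standard textbook approach (my own outline, not the tutor's completed worksheet): set NPV to zero, where each year's operating cash flow is S*(1 - tax) plus the depreciation tax shield, the initial outlay is cost plus net working capital, and Year 5 adds back the recovered NWC plus after-tax salvage (the device is fully depreciated by then, so after-tax salvage is 60,000*(1 - 0.35)).

    # Solve for the pretax cost savings S that makes NPV = 0 (inputs from the problem statement)
    cost, nwc, tax, r, salvage, years = 640_000, 55_000, 0.35, 0.12, 60_000, 5
    macrs3 = [0.3333, 0.4445, 0.1481, 0.0741, 0.0]   # three-year MACRS spread over the 5 project years
    dep = [cost * rate for rate in macrs3]

    pv_annuity = sum(1 / (1 + r) ** t for t in range(1, years + 1))
    pv_dep_shield = sum(tax * d / (1 + r) ** t for t, d in enumerate(dep, start=1))
    pv_terminal = (salvage * (1 - tax) + nwc) / (1 + r) ** years

    # NPV = -(cost + nwc) + S*(1 - tax)*pv_annuity + pv_dep_shield + pv_terminal = 0
    S = (cost + nwc - pv_dep_shield - pv_terminal) / ((1 - tax) * pv_annuity)
    print(round(S, 2))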
2021-05-16 21:35:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.193505197763443, "perplexity": 9731.855038383404}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989914.60/warc/CC-MAIN-20210516201947-20210516231947-00152.warc.gz"}
https://math.stackexchange.com/questions/1265330/conformal-isomorphisms-from-disc-to-half-disc-fixing-1-and-1
# Conformal isomorphisms from disc to half-disc fixing 1 and -1 I am preparing for a complex analysis qual and ran into this problem on an old exam. Find all conformal isomorphisms from the unit disk $\mathbf{D}=\{z\in\mathbf{C} : |z| < 1\}$ to the semi-disk $S=\{z\in\mathbf{D} : \operatorname{Im}z>0\}$ fixing $1$ and $-1$. I know how to map $\mathbf{D}$ conformally onto $S$, but I can not find a mapping that fixes $1$ and $-1$. • If we look the other way, a conformal map $S\to \mathbf{D}$ fixing $1$ and $-1$ must convert a right angle to a straight $\pi$ angle at these points. What map do you know that doubles angles at a certain point? – Daniel Fischer May 3 '15 at 21:43 • Every conformal isomorphism $D \to S$ differs by a conformal automorphism of the disc (i.e., if $f_1, f_2$ are conformal isomorphisms $D \to S$, then there is some conformal automorphism $g: D \to D$ such that $f_2g = f_1$). So start with a given conformal isomorphism, modify it by an automorphism of the disc so that it fixes 1 and -1, and find out what other automorphisms of the disc fix these points. – user98602 May 3 '15 at 21:43 Thanks, I think I have it now. We begin by applying $f(z)=\frac{z+1}{z-1}$. Note that \begin{align*} f(-1)&=0 \\ f(1)&=\infty \\ f(0)&=-1 \\ f(i)&=-i \end{align*} and so $f$ maps the semi-disk to the third quadrant. Multiplication by $-1$ followed by squaring gives us a conformal bijection to the upper half-plane, $g(z)=\left(\frac{z+1}{z-1}\right)^2$. Let $h(z)=\frac{z-i}{z+i}$ and note that then \begin{align*} h(0)&=-1 \\ h(1)&=-i \\ h(-1)&=i \\ h(i)&=0 \end{align*} and so $h$ maps the upper half-plane conformally onto the unit disk. Hence $h(g(z))$ maps the semi-disk to the unit disk. Note that \begin{align*} h(g(1))&=h(\infty)=1 \\ h(g(-1))&=h(0)=-1 \end{align*} and so we have a conformal mapping that takes the semi-disk to the unit disk and fixes $1$ and $-1$. Suppose $f$ and $g$ are two such mappings. Then $g\circ f^{-1}$ gives an automorphism of the unit disk that fixes $1$ and $-1$. However, by the Schwarz lemma the only such automorphism is the identity and so we have that the mapping above is the unique mapping satisfying the conditions. EDIT: As Daniel Fischer pointed out, there are automorphisms of the closed unit disk that have two fixed points. Suppose $f$ and $g$ are two mappings satisfying the conditions from before. Then $g=d\circ f$ where $d:\overline{\mathbf{D}}\to\overline{\mathbf{D}}$ is an automorphism of the unit disk that fixes $1$ and $-1$. Note that $d(\mathbf{D})=\mathbf{D}$ by the open mapping theorem and thus every automorphism of the closed unit disc restricts to an automorphism of the open unit disk. Hence it suffices to consider automorphisms of the form \begin{align*} d(z)=e^{i\theta}\frac{z-\alpha}{1-\overline{\alpha}z} \end{align*} for $\alpha\in \mathbf{D}$. Since $d$ must fix $1$ and $-1$ we have \begin{align*} 1-\overline{\alpha}=e^{i\theta}(1-\alpha) \\ -1-\overline{\alpha}=e^{i\theta}(-1-\alpha) \\ \end{align*} which implies that $e^{i\theta}=1$ and $\overline{\alpha}=\alpha$. Hence $$d(z)=\frac{z-r}{1-rz}$$ for some $r\in(-1,1)$. • Everything is right except the last bit, there are more automorphisms of the unit disk fixing $1$ and $-1$. For $-1 < r < 1$, consider $$T_r(z) = \frac{z-r}{1-rz}.$$ These are automorphisms of the unit disk fixing $1$ and $-1$. It remains to see that these are all. – Daniel Fischer May 7 '15 at 19:57
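A quick numerical sanity check of the composite map from the accepted answer, h(g(z)) with g(z) = ((z+1)/(z-1))^2 and h(w) = (w-i)/(w+i), done in Python (my own verification, not part of the original post; the sample points are arbitrary choices in the upper half-disk):

    def g(z):
        return ((z + 1) / (z - 1)) ** 2

    def h(w):
        return (w - 1j) / (w + 1j)

    def phi(z):
        return h(g(z))

    for z in (0.5j, -0.3 + 0.4j, 0.9 + 0.05j):   # points of the upper half-disk
        w = phi(z)
        print(z, '->', w, 'inside unit disk:', abs(w) < 1)

    # The fixed points +1 and -1 are boundary points; approaching them along the real axis:
    print(phi(0.999), phi(-0.999))   # close to 1 and -1 respectively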
2020-01-19 16:47:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999686479568481, "perplexity": 93.21495804715609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00356.warc.gz"}
https://mathoverflow.net/questions/26416/what-is-your-favorite-proof-of-tychonoffs-theorem
# What is your favorite proof of Tychonoff's Theorem? Here is mine. It's taken from page 11 of "An Introduction To Abstract Harmonic Analysis", 1953, by Loomis: (By the way, I don't know why this book is not more famous.) To prove that a product $K=\prod K_i$ of compact spaces $K_i$ is compact, let $\mathcal A$ be a set of closed subsets of $K$ having the finite intersection property (FIP) --- viz. the intersection of finitely many members of $\mathcal A$ is nonempty ---, and show $\bigcap\mathcal A\not=\varnothing$ as follows. By Zorn's Theorem, $\mathcal A$ is contained in some maximal set $\mathcal B$ of (not necessarily closed) subsets of $K$ having the FIP. The $\pi_i(B)$, $B\in\mathcal B$, having the FIP and $K_i$ being compact, there is, for each $i$, a point $b_i$ belonging to the closure of $\pi_i(B)$ for all $B$ in $\mathcal B$, where $\pi_i$ is the $i$-th canonical projection. It suffices to check that $\mathcal B$ contains the neighborhoods of $b:=(b_i)$. Indeed, this will imply that the neighborhoods of $b$ intersect all $B$ in $\mathcal B$, hence that $b$ is in the closure of $B$ for all $B$ in $\mathcal B$, and thus in $A$ for all $A$ in $\mathcal A$. For each $i$ pick a neighborhood $N_i$ of $b_i$ in such a way that $N_i=K_i$ for almost all $i$. In particular the product $N$ of the $N_i$ is a neighborhood of $b$, and it is enough to verify that $N$ is in $\mathcal B$. As $N$ is the intersection of finitely many $\pi_i^{-1}(N_i)$, it even suffices, by maximality of $\mathcal B$, to prove that $\pi_i^{-1}(N_i)$ is in $\mathcal B$. We have $N_i\cap\pi_i(B)\not=\varnothing$ for all $B$ in $\mathcal B$ (because $b_i$ is in the closure of $\pi_i(B)$), hence $\pi_i^{-1}(N_i)\cap B\not=\varnothing$ for all $B$ in $\mathcal B$, and thus $\pi_i^{-1}(N_i)\in\mathcal B$ (by maximality of $\mathcal B$). Many people credit the general statement of Tychonoff's Theorem to Cech. But, as pointed out below by KP Hart, Tychonoff's Theorem seems to be entirely due to ... Tychonoff. This observation was already made on page 636 of Chandler, Richard E.; Faulkner, Gary D. Hausdorff compactifications: a retrospective. Handbook of the history of general topology, Vol. 2 (San Antonio, TX, 1993), 631--667, Hist. Topol., 2, Kluwer Acad. Publ., Dordrecht, 1998. The statement is made by Tychonoff on p. 272 of "Ein Fixpunktsatz" where he says that the proof is the same as the one he gave for a product of intervals in "Über die topologische Erweiterung von Räumen" - Definitely, the one I like the most is the proof via ultrafilters. You only have to state the compactness of a topological space in terms of ultrafilters, which is easily obtained from the definition via open coverings (warning: the equivalence of the definitions is where one uses AC): X is compact if and only if every ultrafilter is convergent. Then one observes that 1. any image of an ultrafilter is an ultrafilter (in particular, any projection from a product space) 2. any filter in the product space converges if and only if all its projections converge. You really only need a few definitions and a few natural properties. My test of how nice a proof is: can I teach it to somebody just while standing in the queue at the canteen, or in a subway car? - Dear Pietro Majer: I think Loomis's proof (the one I gave) is the same as the one you mention (due to H. Cartan if I'm not mistaken). Loomis hides the (ultra)filters. But I think he hides them very nicely. (Loomis credits the proof he gives to Bourbaki.)
–  Pierre-Yves Gaillard May 30 '10 at 7:58 After a closer look at Loomis' one, I agree: in substance it's the same as H. Cartan's (it seems to me you're right with the attribution to Cartan, but I'm not certain either). But, I think here's an important issue: to give or not to give a name to the objects one uses? Sometimes there is no need at all (you know those papers with definitions of weird objects that only enter once in a proof). In this case, I think the proof greatly gains in simplicity by introducing the notion of filter. I would even say, the notion of filter is the most important byproduct of the compactness theorem. –  Pietro Majer May 30 '10 at 8:28 Actually, the axiom of choice is used twice in the proof. First, you have to use it to characterize compactness by ultrafilters. For this, you do not need the full axiom of choice, the boolean prime ideal theorem suffices. The second application is in picking for each coordinate a point the projection converges to. Here you need the full axiom of choice. For Hausdorff spaces, you don't need the second part though, because then a filter cannot converge to different points. –  Michael Greinecker May 30 '10 at 8:55 I also like Cartan's ultrafilter proof best. As people have said, all the cleverness is hidden in the definition of ultrafilters. With this in hand, Tychonoff's theorem becomes a consequence of a few very straightforward facts about filters on products. As to whether it's better to hide the ultrafilters or not, I think the test should be whether ultrafilters are of any use outside of this specific context, the answer to which is of course YES! Thus I would recommend this proof even to someone who doesn't already know about ultrafilters -- learning about them is good in and of itself. –  Pete L. Clark May 30 '10 at 9:11 Dear Pietro, Pete, Spencer: In "L'intégration dans les groupes topologiques" Weil explains that his proof of the existence of a Haar measure on locally compact groups was obtained by hiding filters. I think the situations are very similar. (And I find Weil's proof incredibly beautiful - for the ideas and the style.) –  Pierre-Yves Gaillard May 30 '10 at 10:50 I like the proof from Alexander's subbase lemma. E.g. A proof here. That lemma also gives the compactness criterion in ordered spaces (completeness implies compactness). - Thank you for your answer. I noticed that Walter Rudin gives this proof in his "Functional Analysis". –  Pierre-Yves Gaillard May 31 '10 at 4:51 My favorite proof is the one from Johnstone's Stone spaces for locales because it works without the axiom of choice. - Tychonoff and AC are equivalent in ZF. –  Martin Brandenburg May 30 '10 at 8:07 @Martin: Tychonoff's theorem for locales does not require any form of the axiom of choice. However, the categories of spatial locales and sober topological spaces are equivalent only if AC is true. –  Dmitri Pavlov May 30 '10 at 8:39 No, the categories of spatial locales and sober spaces are always equivalent. The reason that Tychonoff for locales doesn't imply Tychonoff for spaces is that the locale product of a family of spatial locales may no longer be spatial. –  Mike Shulman May 31 '10 at 2:41 @Mike: Yes, you are right. What I actually meant is that equivalence of categories of compact regular locales and compact regular (or Hausdorff) topological spaces requires some form of AC. –  Dmitri Pavlov May 31 '10 at 8:03 Here is Tychonoff's original proof, for powers of the unit interval.
He builds a complete accumulation point of a given infinite set by transfinite recursion along the index set. On page 772 of this paper one finds the formulation of the general theorem (in my translation): "The product of compact spaces is again compact. One proves this theorem word for word as in he case of the compactness of the product of intervals". Some authors (Folland, see comment below and Walter Rudin in his Functional Analysis') credit Čech with proving the general result but Čech's proof is the same as Tychonoff's and, based on a reading of his papers, I think Tychonoff deserves full credit for the theorem and its proof. @Henno: not Fundamenta but Mathematische Annalen. - Dear KP Hart: Thank you for your answer. (It contains at least one typo ("the the proof").) --- Folland (Real Analysis) claims that the general statement is due to Cech (On bicompact spaces, Ann. of Math. 38 (1937), 823-844. MR 1503374). What's your opinion? –  Pierre-Yves Gaillard May 30 '10 at 16:45 Walter Rudin expressed the same opinion in his Functional Analysis' but on pae 772 of the second paper that I linked to you'll find (in my translation): The product of compact spaces is again compact. This one proves word for word as in he case of the compactness of the product of intervals. I think Tychonoff deserves full credit for the theorem and its proof. The proof in \v{C}ech's paper is the same as Tychonoff's. –  KP Hart May 30 '10 at 19:01 Very interesting!!! Thank you! --- I was wondering if you couldn't insert the contents of your comment into your answer, to make them more visible. –  Pierre-Yves Gaillard May 31 '10 at 7:01 Thank you! Two more suggestions: (1) Replace "This one proves word for word as in he case" by "One proves this theorem word for word as in the case". (2) To type Cech with the caron, copy and paste "&#268;ech" (without the quotation marks). (It should work.) –  Pierre-Yves Gaillard May 31 '10 at 10:45 Here are different links to Tychonoff's articles: Über die topologische Erweiterung von Räumen (springerlink.com/content/l656352441w67612/…). Ein Fixpunktsatz (springerlink.com/content/n61706447r886l58/…). –  Pierre-Yves Gaillard May 31 '10 at 12:28 My favorite is the proof via nets by Paul Chernoff. A VERY clever use of generalized convergence in point set topology! http://www.jstor.org/pss/2324485 - Thank you! Unfortunately I don't have access to JSTOR. Chernoff's proof is also in Folland's Real Analysis. Does anybody know a public link to Chernoff's paper? –  Pierre-Yves Gaillard May 30 '10 at 7:47 @Pierre-Yves: Google "Tychonoff sets". @AnrewL: I agree, this is really the most intuitive proof. –  Martin Brandenburg May 30 '10 at 8:18 @Pierre Yikes,I forgot,that's true,Folland DOES have the proof in his text! I learned the proof from Chernoff's original paper,which was required reading in John Terilla's point set topology course.Interestingly,John learned of the proof in James Stasheff's topology course when he was a graduate student at the University Of North Carolina. –  Andrew L May 30 '10 at 15:49 This proof is also in Volker Runde's "A taste of topology". –  Bruno Stonek Jul 8 '11 at 21:55 Since all of the answers to this question(except the one involving Alexander's subbase lemma) refer to a usually strange rehashing of the ultrafilter proof (BOO), I decided to give two nice proofs to Tychonoff's theorem here for Hausdorff spaces. The first proof of Tychonoff's theorem for Hausdorff spaces uses the Stone-Cech compactification. 
This proof is useful when one constructs the Stone-Cech compactification before Tychonoff's theorem. Proof: Assume that $X_{i}$ is compact for $i\in I$. Let $X=\prod_{i\in I}X_{i}$ be the product space. Then each projection $\pi_{i}:X\rightarrow X_{i}$ extends to a continuous map $\overline{\pi_{i}}:\beta X\rightarrow X_{i}$ since each $X_{i}$ is compact. Therefore the map $f:\beta X\rightarrow X$ where $f(x_{i})_{i\in I}=(\overline{\pi_{i}}(x))_{i\in I}$ is a continuous surjection, so $X$ is compact being the continuous surjective image of $\beta X$. QED For the second proof we use the following facts about uniform spaces that every mathematician should be aware of. i. Every compact Hausdorff space has a unique compatible uniformity and that uniformity is complete and totally bounded. ii. If a uniform space is complete and totally bounded, then it is compact. Tychonoff's theorem then immediately follows from the fact that the product of complete uniform spaces is complete and that the product of totally bounded uniform spaces is totally bounded. And this proof is intuitive because it is easier to imagine that the product of complete and totally bounded uniform spaces is complete and totally bounded than to imagine that the product of compact spaces is compact. - Is there a good reason that this answer was downvoted? Is it because these proofs only work for Hausdorff spaces or is it because it is a sin to construct the Stone-Cech compactification without Tychonoff's theorem? –  Joseph Van Name Sep 9 '13 at 14:50 The non-standard analysis proof is an interesting "application" of the ultrafilter proof: a topological space $A$ is compact if and only if every point in the associated "non-standard topological space" ${}^*A$ is near-standard, that is to say, if and only if each $x \in {}^*A$ is contained in every open neighborhood of some standard point $y \in A$ ($i.e.,$ for all $U \subset A$, $U$ open and $y \in U$ implies $x \in {}^*U \subset {}^*A$). So let $\mathcal{X}$ be a set of topological spaces indexed by $I$, and $P$, the product of these spaces; write ${}^\*P$ for the "non-standard product" of the set ${}^*\mathcal{X}$ of topological spaces indexed by ${}^*I$, and let $x \in {}^\*P$. It suffices to show that $x$ is near-standard. For each $\kappa \in I$, let $x_\kappa \in {}^\*X_\kappa \in {}^*\mathcal{X}$ be the $\kappa$th factor of $x$. Then $x_\kappa$ is necessarily near-standard, because $X_\kappa \in \mathcal{X}$ is compact. But this means we can find a point $y \in P$ with factors $y_\kappa \in X_\kappa$ such that $U \subset X_\kappa$ open and $y_\kappa \in U$ implies $x_\kappa \in {}^*U \subset {}^*X_\kappa$, thus $V \subset P$ open and $y \in V$ implies $x \in {}^*V \subset {}^*P$. But this means $x$ is near-standard, so $P$ is compact. "Under the hood," this is basically the ultrafilter proof (my favorite, to answer the original question), so the axiom of choice is required in more or less the same places: while the non-standard objects exist by the Boolean prime ideal theorem, "finding" the $y_\kappa$ in non-Hausdorff spaces requires the full axiom of choice. - How do you define the associated "non standard space" to a standard space? –  mathahada May 20 '11 at 8:42 W.A.J. Luxembourg noted an interesting consequence of doing the proof this way. If the spaces are Hausdorff, then no additional application of choice is needed. Only the use of AC that produces the nonstandard model ... and for that the Boolean Algebra Maximal Ideal Theorem (strictly weaker than AC) suffices. 
–  Gerald Edgar May 20 '11 at 12:17 Wait a minute, now I'm confused. I have never heard of the Boolean Algebra Maximal Ideal Theorem, but if it implies Tychonoff in this proof without AC and if Tychonoff implies AC then how can the Boolean Theorem be strictly weaker than AC? –  David White May 20 '11 at 16:03 What I'm asking I guess is: Is Tychonoff for Hausdorff spaces equivalent to AC? It seems the answer must be no or else the Boolean Theorem would be equivalent to AC –  David White May 20 '11 at 16:05 Tychonoff for Hausdorff equals Boolean principal ideal Thm. The ultrafilter proof only needs Boolean principal ideal Thm in the Hausdorff case because you dont need to choose a limit point in each factor. –  Benjamin Steinberg Feb 27 '13 at 3:38 I'm surprised that nobody has mentioned the proof using universal nets. (It can be found, e.g., in Pedersen's 'Analysis NOW' and in Bredon's 'Topology and geometry'.) A universal net in a set X is a net which, for every $Y\subset X$, ultimately lives in $Y$ or $X\backslash Y$. One easily sees that composition of a universal net in X with a function $f:X\rightarrow Y$ gives a universal net in $Y$. Using the ultrafiler lemma, one proves that every net has a universal subnet. All this involves no topology. Combining the above with standard facts, the proof of Tychonov is extremely short. All one needs is: - a space is compact if and only if every net has a limit point (equiv., a convergent subnet), - a net in $\prod_iX_i$ converges if and only if it converges coordinate-wise. - This seems to be another way of "hiding" the ultrafilters in the ultrafilter proof. –  Andreas Blass Oct 12 '12 at 15:29 I have been teaching general topology for several years, but remained unsatisfied by the proofs given in the books that I based the course upon. Finally I wound up writing my own lecture notes, still not quite finished. In those notes, I give four different proofs. Two of them use (ultra)filters, but one of them avoids the terminology. The other two proofs use nets, namely Chernoff's proof without and Kelley's with universal nets. The notes can be found at http://www.math.ru.nl/~mueger/topology2012.pdf - I first learnt from Munkres' Topology. He gave a different motivation to use the maximal principle (Hausdorff's to be precise, but Zorn's work too) instead of the historic motivation to characterize compact spaces with a generalized version of "sequence"; i.e. filters. What was Tychonoff's original proof? To me every proof seem to use some maximal principle; Alexander's subbase theorem also uses Zorn's lemma. - Every proof uses the axiom of choice, because Tychonoff is equivalent to AC. For the converse, that Tychonoff's theorem implies the axiom of choice, it's hard to beat the original proof by Kelley in Fundamenta Mathematicae 37 (1950) 75--76. It may be viewed here: matwbn.icm.edu.pl/ksiazki/fm/fm37/fm3716.pdf –  John Stillwell May 30 '10 at 6:31 The original proof used the characterization (of compactness) that every infinite set has a point of complete accumulation, and involved a transfinite recursion, IIRC. It's probably in Fundamenta as well. –  Henno Brandsma May 30 '10 at 6:35 Dear John Stillwell: As a bourbakist, the expression "axiom of choice" makes no sense to me. But I thank you very much for your comment. (Unrelated aside: Thank you very much also for your wonderful translation of Dirichlet! It changed my life!) Thank you to all contributors! 
–  Pierre-Yves Gaillard May 30 '10 at 7:42 Dear Amadeus: I had the impression that Munkres's proof was almost the same as the Loomis's (the one I gave), which was written long before. (Thank you for correcting me if I'm wrong.) –  Pierre-Yves Gaillard May 30 '10 at 7:53 As Georges Elencwajg points out, there is a minor flaw in the original proof by Kelley that Tychonoff implies AC. Kelley uses the fact that the product of cofinite topologies is compact, a weaker statement equivalent to the theorem that every filter can be extended to an ultrafilter as shown here: math.vanderbilt.edu/~schectex/papers/kelley.pdf One can easily correct the original proof by Kelley by adding {Lambda} to the open sets, so the original proof is easily corrected. –  Michael Greinecker May 30 '10 at 13:59
2014-03-10 00:33:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9005945324897766, "perplexity": 508.30285184883354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010502819/warc/CC-MAIN-20140305090822-00074-ip-10-183-142-35.ec2.internal.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/1353/can-classical-algorithms-be-improved-by-using-quantum-simulation-as-an-intermedi
# Can classical algorithms be improved by using quantum simulation as an intermediary step? I'm wondering whether even if we cannot create a fast quantum computer, simulating quantum algorithms can be a reasonable method for classical algorithms. In particular, I'd like to see any results of classical algorithms that have been sped up by using a quantum simulation as a subroutine. Second, the next logical step would be to 'cut out the middleman' and see if we can remove the simulator. Perhaps this can even be done semi-automatically! So, is there any result or research on this? Suggestions are welcome. To be clear, I'm asking whether there exists any problem such that running a simulation of a quantum computer, on a classical computer, can offer any improvement (time or memory) over (trying to) solve the same problem on a classical computer without running any sort of simulation of a quantum computer. Second, I am wondering how one then would attempt to adapt this algorithm such that all 'useless' parts of the quantum algorithm and the simulation are removed, hopefully improving the method even further. • So are you asking if a quantum computer can do a better job at simulating a quantum computer, than a classical computer? – Niel de Beaudrap Mar 26 '18 at 21:56 • Perhaps you would like to revise the title, and the second paragraph, to avoid the sort of misunderstanding I have suggested. ("... in classical simulation": simulation of what? "... see if we can remove the simulator": and replace it with what, exactly?) – Niel de Beaudrap Mar 26 '18 at 22:28 • This reminds me of "quantum-inspired evolutionary algorithms", which are heuristics that use representations at least reminiscent to multi-qubit states, but I'm not sure if the actual computations done would constitute a simulation of a quantum computer or if "quantum" is just used like a buzzword. I'm actually supposed to carefully read a paper dealing with such methods this week, so perhaps that will put me in a position to write an answer. – Kiro Mar 27 '18 at 5:54 • @Kiro If you can take the time to share the papers or look into it and answer, I would be most satisfied – Discrete lizard Mar 27 '18 at 7:06 • This looks pretty clear to me - you're asking if there exists any problem such that running a simulation of a quantum computer, on a classical computer, can offer any improvement (time or memory) over (trying to) solve the same problem on a classical computer without running any sort of simulation of a quantum computer, right? It does raise an interesting question about, if it does exist, would it then be considered a classical algorithm? Or would it be classified as 'classical with quantum influences'? Nevertheless, the question looks clear to me – Mithrandir24601 Mar 27 '18 at 21:30 I will attempt to address the following question only. I'm asking whether the method of 'running' quantum algorithms on a 'quantum computer' 'simulated' on a classical computer would be able to outperform normal classical algorithms (preferably for problems that not obviously involve quantum simulation) The closest thing to this that I am aware of are heuristic methods that employ natural computing, in particular the ones that take inspiration from quantum physics for the development of novel problem-solving techniques. These are known as quantum inspired algorithms. 
Please notice that: i) I do not claim that such methods could be rigorously shown to be superior to conventional algorithms, but it seems that they can be at least competitive; ii) the algorithms may or may not actually simulate a quantum computer, the faithfulness to the original source of inspiration varies. I will briefly outline the framework of a particular type of a quantum-inspired evolutionary algorithm (QIEA). A more complete treatment may be found in chapter 24 of the book "Natural computing algorithms" by Anthony Brabazon et al [1]. Concrete examples can be found for example in arXiv. The basic ingredients of a conventional evolutionary algorithm (EA) are a population of individuals $P(t)$, an update rule for the population, and a fitness function $f$. Here, each individual in $P(t)$ represents a possible solution to some problem, and $f$ quantifies how good the solution is. After initialization, for each step $t$ one evaluates $f$ on every individual in $P(t)$, records best ones and updates $P(t)$. This is iterated until a stopping criterion is reached, and the best found individual(s) are returned. In the simplest case, the update rule could be just random variation of individuals, but it can also be more complicated and engineered to introduce selection pressure towards better values of $f$. In a QIEA, solutions are represented by bit strings of a fixed length, say, $m$. A quantum population $Q(t)$ is used, where each quantum individual consists of $m$ qubits. At each $t$, classical population $P(t)$ is determined from $Q(t)$ by "measuring" the qubits. $P(t)$ is ranked by $f$ and best results are recorded. $Q(t)$ is updated by acting on each qubit with a local gate, and iteration is continued. Often for $Q(0)$, all qubits are set to balanced superposition $(1/\sqrt{2},1/\sqrt{2})^T$, making each particular solution equally likely in the beginning. As the quantum individuals remain essentially in a product state of $m$ qubits, there is no entanglement involved at any point, making QIEA not very quantum. On the other hand, we can effectively simulate the evolution of $Q(t)$ and make as many measurements as we want without needing extra qubits. The claimed advantage is over conventional EAs, based on supposedly needing fewer individuals or being better at maintaining diversity as the population evolves, as even a fixed $Q(t)$ can lead to many $P(t)$. All in all, QIEA by its design is meant to be run only as a simulation. As a final remark, suppose that we wish to make QIEA more quantum without making it intractable. Can we? Perhaps. Consider the update rule of QIEA as a quantum circuit. It is rather boring, with a qubit register of size $m$ and a local gate acting once on each qubit. One could try to introduce some tractable quantumness to QIEA by taking the update circuit to be some Clifford quantum circuit, mentioned and outlined for example here and here. I do not know if this could offer any benefits at all, and as far as I know, this hasn't been tried. [1] S. M. Anthony Brabazon, Michael O’Neill, Natural Computing Algorithms. Springer-Verlag Berlin Heidelberg, 2015. The question here seems to be: "can a classical computer be more efficient by simulating a quantum computer?" and "what research has been done on this?" 
I think it's important, first, to point out that no one is 100% sure that a quantum computer is even actually better than a classical computer, whether or not we have the fastest possible algorithms for a classical or quantum computer for really any particular problem, and so forth. I found an article from October 2017 that details an experiment IBM did simulating a 56 qubit quantum computer on a supercomputer. Here's what the study author said: For instance, whereas a perfect 56-qubit quantum computer can perform the experiments "in 100 microseconds or less, we took two days, so a factor of a billion times slower" (See their paper on arXiv for more information.) I also found a paper submitted to arXiv in February of 2018 which simulates a 64 qubit quantum computer, building on the work of IBM. They also estimate a 72 qubit circuit could be simulated. What seems to be prevalent in all of this, though, is that these simulations are for help in comparison to quantum computing results and times, and none of them claim to show quantum computing "useless" or "replicable". So, my final answer would be no, this is not a thing. • I think to say "I think it's important, first, to point out that no one is 100% sure that a quantum computer is even actually better than a classical computer" is not correct since we have already quantum algorithms that are much better than classical ones. So, if one can build a quantum computer able to run these algorithms we can see that quantum computers are better than classical ones. Another thing is to say that the quantum computer can take some "metric of time" to run these algorithms. – Gustavo Banegas Mar 27 '18 at 2:51 • @GustavoBanegas We have quantum algorithms that are better than the best known classical algorithms, but we don't have proofs that the quantum algorithms are better than any possible classical algorithm. (Well, we do in settings like query complexity and communication complexity, but not for decision problems or sampling problems.) – Jalex Stark Mar 27 '18 at 4:29 • "The question here seems to be: "can a classical computer be more efficient by simulating a quantum computer?" and "what research has been done on this?" Unfortunately, this is not the question, I'm sorry this might have looked like this. Also, I think this is already askes somewhere, so perhaps you can move your answer there? – Discrete lizard Mar 27 '18 at 4:43 • @JalexStark Indeed, I understand the difference. Thank you. Also, I think that it needs to be consider some things such as "architecture" that a QC will run and how it will affect the performance of the QC. – Gustavo Banegas Mar 27 '18 at 11:35 • @Discretelizard could you maybe edit to clarify your question then? – heather Mar 27 '18 at 19:13
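As a concrete illustration of the quantum-inspired evolutionary algorithm (QIEA) loop outlined in the first answer above, here is a minimal sketch. The fitness function (OneMax), population size, rotation step and all variable names are illustrative assumptions of mine, not taken from the cited book or from any specific QIEA paper:

```python
import numpy as np

# Minimal QIEA sketch: each "quantum individual" is a vector of m qubit angles;
# measuring it yields a classical bit string, and the angles are rotated slightly
# towards the best bit string found so far. Entirely a toy example.
rng = np.random.default_rng(0)
m, pop_size, generations, delta = 32, 10, 200, 0.05

theta = np.full((pop_size, m), np.pi / 4)      # balanced superposition: P(bit = 1) = 1/2
best_bits, best_fit = np.zeros(m, dtype=int), -1

for _ in range(generations):
    probs = np.sin(theta) ** 2                              # measurement probabilities for bit = 1
    P = (rng.random((pop_size, m)) < probs).astype(int)     # classical population P(t) sampled from Q(t)
    fits = P.sum(axis=1)                                    # toy fitness (OneMax): number of 1s
    i = int(fits.argmax())
    if fits[i] > best_fit:
        best_fit, best_bits = int(fits[i]), P[i].copy()
    # local "rotation gate": nudge each qubit towards the best solution recorded so far
    theta += delta * np.where(best_bits == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)

print(best_fit, "out of", m)                   # best_fit approaches m as the angles drift
```

Whether such a loop ever beats a plain evolutionary algorithm is exactly the open question discussed in this thread; the sketch only makes the mechanics concrete.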
2020-01-25 06:30:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5525319576263428, "perplexity": 513.9782053553334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251669967.70/warc/CC-MAIN-20200125041318-20200125070318-00499.warc.gz"}
https://www.vedantu.com/question-answer/find-the-area-in-square-meters-of-the-triangle-class-10-maths-cbse-5f5fa86168d6b37d16353caf
# Find the area, in square meters, of the triangle whose base and altitude are: base = 7.5 cm, altitude = 4 cm. Hint: We will use the formula for the area of a triangle to calculate the required value. Formula used: $\text{Area} = \dfrac{1}{2} \times \text{base} \times \text{height}$, then convert the result to square meters. Height and altitude are the same in a triangle. Complete step by step solution: We have to find the area of a triangle whose base is given as 7.5 cm and altitude is given as 4 cm. So, let the triangle be $\Delta ABC$. Now we know that the area of a triangle is $\dfrac{1}{2} \times \text{base} \times \text{height}$, so the area of $\Delta ABC = \dfrac{1}{2} \times \text{base} \times \text{altitude}$. In $\Delta ABC$, the base is $BC = 7.5\ \text{cm}$ and the altitude (height) is $OA = 4\ \text{cm}$. Next, we put the values of the base and height of $\Delta ABC$ into the formula and get: $\text{Area of } \Delta ABC = \dfrac{1}{2} \times BC \times OA = \dfrac{1}{2} \times 7.5 \times 4 = 15\ \text{cm}^2$. So the area of $\Delta ABC$ is $15\ \text{cm}^2$. However, the solution does not end here, since we need the area in square meters, so we must convert the computed area from square centimeters to square meters. Now, $1\ \text{cm}^2 = 1\ \text{cm} \times 1\ \text{cm}$ and $1\ \text{cm} = 0.01\ \text{m}$ (because $100\ \text{cm} = 1\ \text{m}$). Then $1\ \text{cm}^2 = 0.01\ \text{m} \times 0.01\ \text{m} = \dfrac{1}{10000}\ \text{m}^2 = 0.0001\ \text{m}^2$. If $1\ \text{cm}^2 = 0.0001\ \text{m}^2$ then $15\ \text{cm}^2 = 15 \times 0.0001\ \text{m}^2 = 0.0015\ \text{m}^2$. Therefore, the area of $\Delta ABC$ is $0.0015\ \text{m}^2$ when expressed in square meters. Hence the area of the triangle with base 7.5 cm and altitude 4 cm is $0.0015\ \text{m}^2$. Note: If the height of the triangle is not given and only the sides of the triangle are given, then we cannot use this formula. We use Heron's formula instead: first calculate the semi-perimeter $s = \dfrac{a + b + c}{2}$, where $a$, $b$ and $c$ are the three sides of the triangle, and then the area is $\Delta = \sqrt{s(s - a)(s - b)(s - c)}$. This is Heron's formula for finding the area of a triangle whose three sides are given.
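A quick script to check the arithmetic and the unit conversion above (illustrative only):

```python
# Area of the triangle and conversion from cm^2 to m^2.
base_cm, height_cm = 7.5, 4.0
area_cm2 = 0.5 * base_cm * height_cm      # 15.0 cm^2
area_m2 = area_cm2 / (100 * 100)          # 100 cm = 1 m, so divide by 100^2
print(area_cm2, area_m2)                  # 15.0 0.0015
```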
2022-01-19 11:47:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9169459939002991, "perplexity": 192.83606843039178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00119.warc.gz"}
http://mathhelpforum.com/calculus/71965-ricci-curvature-tensor-stuoid-question-print.html
# Ricci Curvature Tensor - Stupid Question [equation image not preserved in extraction] What's the purpose of $\lambda$ and $\beta$? What values can they take?
2014-09-18 09:44:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7663602828979492, "perplexity": 13231.4313176359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657127222.34/warc/CC-MAIN-20140914011207-00245-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://physics.stackexchange.com/tags/brownian-motion/hot
Tag Info 12 So, whenever I want to find a nice introduction to a concept in physics, I check the American Journal of Physics, as it is full of articles with clever descriptions of phenomenon appropriate for presentation in university courses. In this case, this yields many results. In particular, I found the following three articles very helpful: The mathematics of ... 10 I would say that one experiment that demonstrates the atomic nature of things is the observation of Brownian motion. But it is not the experiment itself that convinces that things are made of atoms, rather its theoretical explanation given by Einstein in one of his 1905 papers (actually Einsteins work for his PhD was on the subject of atomic theory and there ... 6 I once heard Uhlenbeck give a lecture on this to high school students over the Christmas break at the Rockefeller Univ. years ago. He recounted a published argument he attributed to Einstein around 1905 (I think), which was that atoms were real if you could count the number of them/mole (Avogadro's number) many different independent ways, and you always ... 5 The article makes no sense. Einstein realized that matter was composed out of atoms, so the number of collisions of a Brownian particle with the surrounding molecule is finite in a finite period of time. However, for times $t$ much longer than the typical scale between the collisions, the particle moves by a distance scaling like $\sqrt{t}$. It follows that ... 5 For Brownian motion, Langevin equation, Fokker-Planck equations, Stochastic process.. from the viewpoint of physicists, the following are standard references: Brownian Motion: Fluctuations, Dynamics, and Applications The Fokker-Planck Equation: Methods of Solutions and Applications Handbook of Stochastic Methods: for Physics, Chemistry and the Natural ... 5 A thought experiment: after N steps, each of which create a change in angle $\Delta \theta$, we should end up with a normal distribution of angles with a standard deviation of $\frac{\sigma}{\sqrt{N}}$. When you change the step length, you therefore need to scale the standard deviation by the square root of that change, so that after moving the same distance ... 4 The Brownian motion $x(t)$ is non-differentiable, so a particular trajectory $x(t)$ can't extremize an action $S$ which would be a functional of $x(t)$ and its derivative, $\dot x(t)$, because the derivative isn't even well-defined and any expression of the type $\int [\dot x(t)]^2 dt$, the usual kinetic term in the action, diverges. (See e.g. middle of page ... 4 The differential is used to specify that the number is for a "differential range", which is a way to remind you that the notions involved are somewhat fuzzy. Let me give a purely mathematical example. Suppose I tell you that I am going to pick an arbitrary real number between 0 and 10, with the likely hood of a number being picked being proportional to the ... 4 No experiments prove any theory. Experiments can only refute theories. 4 There are tons of papers on the connection between quantum processes and probability theory (though I don't understand why you single out coherent states - they don't play a special role in this connection). The theory of stochastic processes and the theory of quantum processes are the commutative and noncommutative side of the same coin, with many ... 4 Many different types of connection can be made between stochastic states over commutative algebras of observables and quantum states over noncommutative algebras of observables. 
As Arnold says, there is a substantial literature. One approach is to construct both classical and quantum models in a formalism that accommodates both; within the structured ... 4 Since the comment answered your question I'll just go ahead and set out a more generalised version. It's straightforward to simplify things back down to your case. Consider the following continuity equation: $$\dot{N}(x,t) = -\nabla\cdot\vec{\Gamma}(x,t) + S(x,t),$$ where $\vec{\Gamma}(x,t)$ is the flux (in your case $\vec{\Gamma}(x,t)= -D\nabla N(x,t)$, ... 3 $R(t)$ is a function of time that represents complicated time-dependence of forces due to other molecules on the studied molecule. Since only correlation function is assumed, there is no single unique function $R(t)$ assumed; although not all, many functions would be appropriate. You can generate many of them in computer using Cholesky decomposition of ... 3 Here's how I would come to some intuition for it. I would think about the rate of "probability flow" into a region by integrating the equation over a region in space. For now, let's suppose that no diffusion occurs at all, since that is more complicated (although directly doable and understandable). Then $$\int_a^b dx\frac{\partial p(x,t|x_0)}{\partial ... 2 Here is another reason why a ratchet should not work: it would define a directional arrow of time even in thermal equilibrium. To see this look at the modified ratchet in the graph (the triangular piece is attached to a spring). the brownian particle would move easier to the right as it can push the triangular piece down, but if it is at the right and ... 2 Here is the intuitive explanation: When a particle is moving, it will "run into" things. Thus, the "random force" from impacting another particle is not completely random: it is in part correlated to the motion of the particle before the collision - the force of the impact is more likely to be in a direction opposite to the current motion than any other ... 2 Let us use the definition \langle\eta(s)\eta(t)\rangle=\Gamma\gamma^2\delta(t-s). First of all, C(s,t) depends on t because$$C(s,t)=\Gamma\min(s,t).$$It is clear, from causality, that \frac{\delta x(t)}{\delta \eta(s)}=0 if t<s. If t>s, compute the difference \delta x(t) caused by two realisations of noise that differ only at time s ... 2 The OP is correct in stating that the Fourier transform$$\xi(\omega) = \int\mathrm{d}t\, \mathrm{e}^{\mathrm{i}\omega t} \xi(t), $$vanishes upon averaging over realisations, \langle \xi(\omega)\rangle = 0, so long as we assume that the noise is also zero on average in the time domain, \langle \xi(t)\rangle = 0 . However, the noise is not only ... 2 Some kinds of mutation provide an example of this kind of indeterminacy. UV light can be bad for our health. One of the reasons is that, when we are exposed to sunlight, UVB photons are absorbed by double bonds in pyrimidines, which break open, become reactive, and dimerize (photo-dimerization). This damages the DNA in the same way that it would damage a ... 2 Apparently the search term I was missing was "Brownian motion". With that, I found several leads. They contradict each other somewhat, but I can at least post a partial answer: Geisler - Sound to Synapse: Physiology of the Mammalian Ear: Estimates for the first of these sources, the pressure fluctuations due to the Brownian motion of air molecules ... 2 This is not a answer to your question but a close cousin to it, perhaps you will find it of interest. 
The Schrodinger equation can be analytically continued to give the Heat Diffusion equation. t->-i*t Google can point you to further elaborations and references. 2 The history of atoms is definitely intertwined with quantum mechanics. There are many features of the quantum theory that make the atomic nature of our world apparent. But here I'd like to state an earlier result. Thomson's 1897 discovery of the electron not only showed that atoms exist but also that they have substructure. 1 I think that the points made about Einstein's theoretical explanation for the observed Brownian motion and the observed Perrin experiments on it are quite valid. But perhaps one could quibble that actually the forces on the pollen were produced by molecules...not by atoms... and perhaps one could resist the point by what is more than a quibble: it proved ... 1 The basic idea is illustrated on page 64 of the paper you cited. In this case, "directed transport" refers to a bias in the direction the wheel can turn, which could then be used to lift the weight and do work. A full explanation of this example can be found in section 2.1, beginning on page 64. This can be generalised to other situations. Let's ... 1 This is a bit strange. The Langevin equation $$\frac{dv}{dt}~+~\beta v~=~\frac{F}{m}$$ for the motion of a free particle under a stochastic force $F$ evaluates the velocity as an average or in an interval. The stochastic force has a Gaussian probability distribution $\langle F(t)F(t')\rangle~=~2\beta kT\delta(t~-~t')$, which is also a Markov ... 1 The autocorrelation function is actually some kind of measure for memory. The $C_X(t')= \langle X_s(t)X_s(t+t')\rangle$ should be compared with the statistical correlation (Wikipedia: http://en.wikipedia.org/wiki/Correlation_and_dependence). With the statistical correlation one measures the dependency between two variables. If two variables are independent ... 1 If you describe the combined system of the molecules of the liquid and the Brownian particle and you know the mechanism of the collisions and all initial conditions, then it is deterministic. If you want to describe only the Brownian particle, then you would do so by a stochastic process (called Brownian motion or the Wiener process) and it would be ... 1 A possibility is maybe to consider the probability flow. At the origin of the modelling of Brownian motion or the heat equation, you have a conservation equation $\frac{\partial \rho}{\partial t} + div \vec j = 0$. Here $\rho(x,t)$ must be considered as a probability density and $\vec j(x,t)$ as a probability flow. Then, supposing a simple relation $\vec j = ... 1 No - thermal equilibrium means no heat transfer (essentially when the temperature of the gas inside the box remains constant but not necessarily zero). The atoms will continue to move, vibrate, rotate, etc... but do so now at a constant temperature. Also, just to be clear, an individual atom does not have a temperature - you need a large collection of ...
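To make the $\sqrt{t}$ scaling quoted in the first excerpt above concrete, here is a minimal random-walk simulation; the walker count, step count and seed are arbitrary choices of mine:

```python
import numpy as np

# RMS displacement of a symmetric random walk grows like sqrt(n),
# the discrete analogue of the sqrt(t) law for Brownian motion.
rng = np.random.default_rng(1)
walkers, n_steps = 2000, 2500
steps = rng.choice([-1.0, 1.0], size=(walkers, n_steps))
paths = np.cumsum(steps, axis=1)

for n in (25, 250, 2500):
    rms = np.sqrt(np.mean(paths[:, n - 1] ** 2))
    print(n, round(rms, 1), round(np.sqrt(n), 1))   # the two numbers track each other
```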
2016-02-14 21:33:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7898501753807068, "perplexity": 333.23892563120376}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454702032759.79/warc/CC-MAIN-20160205195352-00037-ip-10-236-182-209.ec2.internal.warc.gz"}
http://archive.numdam.org/item/M2AN_2009__43_5_973_0/
Finite volume scheme for two-phase flows in heterogeneous porous media involving capillary pressure discontinuities ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 43 (2009) no. 5, p. 973-1001 We study a one-dimensional model for two-phase flows in heterogeneous media, in which the capillary pressure functions can be discontinuous with respect to space. We first give a model, leading to a system of degenerated nonlinear parabolic equations spatially coupled by nonlinear transmission conditions. We approximate the solution of our problem thanks to a monotonous finite volume scheme. The convergence of the underlying discrete solution to a weak solution when the discretization step tends to $0$ is then proven. We also show, under assumptions on the initial data, a uniform estimate on the flux, which is then used during the uniqueness proof. A density argument allows us to relax the assumptions on the initial data and to extend the existence-uniqueness frame to a family of solution obtained as limit of approximations. A numerical example is then given to illustrate the behavior of the model. DOI : https://doi.org/10.1051/m2an/2009032 Classification:  35R05,  65M12 Keywords: capillarity discontinuities, degenerate parabolic equation, finite volume scheme @article{M2AN_2009__43_5_973_0, author = {Canc\es, Cl\'ement}, title = {Finite volume scheme for two-phase flows in heterogeneous porous media involving capillary pressure discontinuities}, journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique}, publisher = {EDP-Sciences}, volume = {43}, number = {5}, year = {2009}, pages = {973-1001}, doi = {10.1051/m2an/2009032}, zbl = {1171.76035}, mrnumber = {2559741}, language = {en}, url = {http://www.numdam.org/item/M2AN_2009__43_5_973_0} } Cancès, Clément. Finite volume scheme for two-phase flows in heterogeneous porous media involving capillary pressure discontinuities. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 43 (2009) no. 5, pp. 973-1001. doi : 10.1051/m2an/2009032. http://www.numdam.org/item/M2AN_2009__43_5_973_0/` [1] Adimurthi and G.D. Veerappa Gowda, Conservation law with discontinuous flux. J. Math. Kyoto Univ. 43 (2003) 27-70. | MR 2028700 | Zbl 1063.35114 [2] Adimurthi, J. Jaffré and G.D. Veerappa Gowda, Godunov-type methods for conservation laws with a flux function discontinuous in space. SIAM J. Numer. Anal. 42 (2004) 179-208 (electronic). | MR 2051062 | Zbl 1081.65082 [3] Adimurthi, S. Mishra and G.D. Veerappa Gowda, Optimal entropy solutions for conservation laws with discontinuous flux-functions. J. Hyperbolic Differ. Equ. 2 (2005) 783-837. | MR 2195983 | Zbl 1093.35045 [4] Adimurthi, S. Mishra and G.D. Veerappa Gowda, , Existence and stability of entropy solutions for a conservation law with discontinuous non-convex fluxes. Netw. Heterog. Media 2 (2007) 127-157 (electronic). | MR 2291815 | Zbl 1142.35508 [5] H.W. Alt and S. Luckhaus, Quasilinear elliptic-parabolic differential equations. Math. Z. 183 (1983) 311-341. | MR 706391 | Zbl 0497.35049 [6] F. Bachmann, Analysis of a scalar conservation law with a flux function with discontinuous coefficients. Adv. Differ. Equ. 9 (2004) 1317-1338. | MR 2099558 | Zbl 1102.35063 [7] F. Bachmann, Equations hyperboliques scalaires à flux discontinu. Ph.D. Thesis, Université Aix-Marseille I, France (2005). [8] F. 
Bachmann, Finite volume schemes for a non linear hyperbolic conservation law with a flux function involving discontinuous coefficients. Int. J. Finite Volumes 3 (2006) (electronic). | MR 2465463 [9] F. Bachmann and J. Vovelle, Existence and uniqueness of entropy solution of scalar conservation laws with a flux function involving discontinuous coefficients. Comm. Partial Differ. Equ. 31 (2006) 371-395. | MR 2209759 | Zbl 1102.35064 [10] M. Bertsch, R. Dal Passo and C.J. Van Duijn, Analysis of oil trapping in porous media flow. SIAM J. Math. Anal. 35 (2003) 245-267 (electronic). | MR 2001474 | Zbl 1049.35108 [11] D. Blanchard and A. Porretta, Stefan problems with nonlinear diffusion and convection. J. Differ. Equ. 210 (2005) 383-428. | MR 2119989 | Zbl 1075.35112 [12] H. Brézis, Analyse Fonctionnelle : Théorie et applications. Masson (1983). | MR 697382 | Zbl 0511.46001 [13] C. Cancès, Écoulements diphasiques en milieux poreux hétérogènes : modélisation et analyse des effets liés aux discontinuités de la pression capillaire. Ph.D. Thesis, Université de Provence, France (2008). [14] C. Cancès, Nonlinear parabolic equations with spatial discontinuities. Nonlinear Differ. Equ. Appl. 15 (2008) 427-456. | MR 2465972 | Zbl pre05617589 [15] C. Cancès, Asymptotic behavior of two-phase flows in heterogeneous porous media for capillarity depending only of the space. I. Convergence to an entropy solution. arXiv:0902.1877 (submitted). [16] C. Cancès, Asymptotic behavior of two-phase flows in heterogeneous porous media for capillarity depending only of the space. II. Occurrence of non-classical shocks to model oil-trapping. arXiv:0902.1872 (submitted). [17] C. Cancès and T. Gallouët, On the time continuity of entropy solutions. arXiv:0812.4765v1 (2008). [18] C. Cancès, T. Gallouët and A. Porretta, Two-phase flows involving capillary barriers in heterogeneous porous media. Interfaces Free Bound. 11 (2009) 239-258. | MR 2511641 | Zbl 1178.35196 [19] J. Carrillo, Entropy solutions for nonlinear degenerate problems. Arch. Ration. Mech. Anal. 147 (1999) 269-361. | MR 1709116 | Zbl 0935.35056 [20] C. Chainais-Hillairet, Finite volume schemes for a nonlinear hyperbolic equation. Convergence towards the entropy solution and error estimate. ESAIM: M2AN 33 (1999) 129-156. | Numdam | MR 1685749 | Zbl 0921.65071 [21] J. Droniou, A density result in Sobolev spaces. J. Math. Pures Appl. (9) 81 (2002) 697-714. | MR 1968338 | Zbl 1033.46029 [22] G. Enchéry, R. Eymard and A. Michel, Numerical approximation of a two-phase flow in a porous medium with discontinuous capillary forces. SIAM J. Numer. Anal. 43 (2006) 2402-2422. | MR 2206441 | Zbl 1145.76046 [23] R. Eymard, T. Gallouët, M. Ghilani and R. Herbin, Error estimates for the approximate solutions of a nonlinear hyperbolic equation given by finite volume schemes. IMA J. Numer. Anal. 18 (1998) 563-594. | MR 1681074 | Zbl 0973.65078 [24] R. Eymard, T. Gallouët and R. Herbin, Finite volume methods, in Handbook of numerical analysis, P.G. Ciarlet and J.-L. Lions Eds., North-Holland, Amsterdam (2000) 713-1020. | MR 1804748 | Zbl 0981.65095 [25] G. Gagneux and M. Madaune-Tort, Unicité des solutions faibles d'équations de diffusion-convection. C. R. Acad. Sci. Paris Sér. I Math. 318 (1994) 919-924. | MR 1278152 | Zbl 0826.35057 [26] J. Jimenez, Some scalar conservation laws with discontinuous flux. Int. J. Evol. Equ. 2 (2007) 297-315. | MR 2337529 | Zbl 1133.35405 [27] K.H. Karlsen, N.H. Risebro and J.D. 
Towers, On a nonlinear degenerate parabolic transport-diffusion equation with a discontinuous coefficient. Electron. J. Differ. Equ. 2002 (2002) n${}^{\circ }$ 93, 1-23 (electronic). | MR 1938389 | Zbl 1015.35049 [28] K.H. Karlsen, N.H. Risebro and J.D. Towers, Upwind difference approximations for degenerate parabolic convection-diffusion equations with a discontinuous coefficient. IMA J. Numer. Anal. 22 (2002) 623-664. | MR 1937244 | Zbl 1014.65073 [29] K.H. Karlsen, N.H. Risebro and J.D. Towers, ${L}^{1}$ stability for entropy solutions of nonlinear degenerate parabolic convection-diffusion equations with discontinuous coefficients. Skr. K. Nor. Vidensk. Selsk. 3 (2003) 1-49. | MR 2024741 | Zbl 1036.35104 [30] C. Mascia, A. Porretta and A. Terracina, Nonhomogeneous Dirichlet problems for degenerate parabolic-hyperbolic equations. Arch. Ration. Mech. Anal. 163 (2002) 87-124. | MR 1911095 | Zbl 1027.35081 [31] A. Michel and J. Vovelle, Entropy formulation for parabolic degenerate equations with general Dirichlet boundary conditions and application to the convergence of FV methods. SIAM J. Numer. Anal. 41 (2003) 2262-2293 (electronic). | MR 2034615 | Zbl 1058.35127 [32] A. Michel, C. Cancès, T. Gallouët and S. Pegaz, Numerical comparison of invasion percolation models and finite volume methods for buoyancy driven migration of oil in discontinuous capillary pressure fields. (In preparation). [33] F. Otto, ${L}^{1}$-contraction and uniqueness for quasilinear elliptic-parabolic equations. J. Differ. Equ. 131 (1996) 20-38. | MR 1415045 | Zbl 0862.35078 [34] N. Seguin and J. Vovelle, Analysis and approximation of a scalar conservation law with a flux function with discontinuous coefficients. Math. Models Methods Appl. Sci. 13 (2003) 221-257. | MR 1961002 | Zbl 1078.35011 [35] J.D. Towers, Convergence of a difference scheme for conservation laws with a discontinuous flux. SIAM J. Numer. Anal. 38 (2000) 681-698 (electronic). | MR 1770068 | Zbl 0972.65060 [36] J.D. Towers, A difference scheme for conservation laws with a discontinuous flux: the nonconvex case. SIAM J. Numer. Anal. 39 (2001) 1197-1218 (electronic). | MR 1870839 | Zbl 1055.65104 [37] C.J. Van Duijn, J. Molenaar and M.J. De Neef, The effect of capillary forces on immiscible two-phase flows in heterogeneous porous media. Transport Porous Med. 21 (1995) 71-93. [38] J. Vovelle, Convergence of finite volume monotone schemes for scalar conservation laws on bounded domains. Numer. Math. 90 (2002) 563-596. | MR 1884231 | Zbl 1007.65066
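The sketch below is not the scheme analyzed in the paper above (which couples two media through nonlinear transmission conditions at capillary-pressure discontinuities); it is only a minimal illustration of an explicit, monotone ("monotonous" in the abstract's wording) finite volume discretization for a model degenerate parabolic equation $u_t = (u^2)_{xx}$, to show the kind of update such schemes perform. All parameter values are arbitrary.

```python
import numpy as np

# Explicit two-point-flux finite volume scheme for the model degenerate
# parabolic equation u_t = (phi(u))_xx with phi(u) = u^2 on (0,1),
# homogeneous Neumann (no-flux) boundaries. Illustrative only.
J, T = 200, 0.02
dx = 1.0 / J
x = (np.arange(J) + 0.5) * dx
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)    # initial blob of height 1

phi = lambda v: v ** 2
dphi_max = 2.0 * u.max()                          # Lipschitz bound of phi on [0, max u]
dt = 0.5 * dx ** 2 / (2.0 * dphi_max)             # CFL-type restriction keeps the scheme monotone

t = 0.0
while t < T:
    p = phi(u)
    flux = np.zeros(J + 1)                        # fluxes at cell interfaces, zero at both ends
    flux[1:-1] = -(p[1:] - p[:-1]) / dx           # two-point flux approximation
    u = u - dt / dx * (flux[1:] - flux[:-1])
    t += dt

# positivity, boundedness and discrete mass conservation are preserved
print(round(u.min(), 6), round(u.max(), 6), round(u.sum() * dx, 6))
```

Handling the capillary-discontinuity interface of the paper would additionally require enforcing the nonlinear transmission condition at the interface, which is exactly the part the convergence and uniqueness analysis above addresses.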
2020-01-27 08:57:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3008613884449005, "perplexity": 3035.4948439999366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00121.warc.gz"}
https://www.semanticscholar.org/paper/Quadratic-relations-of-the-deformed-W-superalgebra-Kojima/3b7ffe71a6ae5054fb25c4f695f8b1d2ea5ab211
# Quadratic relations of the deformed W-superalgebra Wq,t(sl(2|1)) @article{Kojima2019QuadraticRO, title={Quadratic relations of the deformed W-superalgebra Wq,t(sl(2|1))}, author={Takeo Kojima}, journal={arXiv: Quantum Algebra}, year={2019} } • T. Kojima • Published 5 December 2019 • Mathematics • arXiv: Quantum Algebra This paper is a continuation of study by J.Ding and B.Feigin. We find a bosonization of the deformed $W$-superalgebras ${\cal W}_{q t}(\mathfrak{sl}(2|1))$ that commutes up-to total difference with deformed screening currents. Using our bosonization, we derive a set of quadratic relations of generators of the deformed $W$-superalgebra ${\cal W}_{q t}(\mathfrak{sl}(2|1))$. The deformed $W$-superalgebra is independent of the choice of a Dynkin-diagram for the superalgebra $\mathfrak{sl}(2|1… 7 Citations The quantum toroidal algebra of$gl_1$provides many deformed W-algebras associated with (super) Lie algebras of type A. The recent work by Gaiotto and Rapcak suggests that a wider class of deformed • Go Noshita • Mathematics Journal of High Energy Physics • 2022 Abstract We discuss the 5d AGT correspondence of supergroup gauge theories with A-type supergroups. We introduce two intertwiners called positive and negative intertwiners to compute the instanton • Mathematics Journal of High Energy Physics • 2022 Recently, new classes of infinite-dimensional algebras, quiver Yangian (QY) and shifted QY, were introduced, and they act on BPS states for non-compact toric Calabi-Yau threefolds. In particular, • T. Kojima • Mathematics Symmetry, Integrability and Geometry: Methods and Applications • 2022 . We revisit the free field construction of the deformed W -algebra by Frenkel and Reshetikhin, Commun. Math. Phys. 197 , 1-31 (1998), where the basic W -current has been identified. Herein, we • Mathematics Journal of High Energy Physics • 2022 Recently, Li and Yamazaki proposed a new class of infinite-dimensional algebras, quiver Yangian, which generalizes the affine Yangian gl\documentclass[12pt]{minimal} \usepackage{amsmath} • Mathematics • 2021 We revisit the free field construction of the deformed W -algebra by Frenkel and Reshetikhin, Commun. Math. Phys. 197, 1-31 (1998), where the basic W -current has been identified. Herein, we We find the free field construction of the basic W-current and screening currents for the deformed W-superalgebra Wq,tA(M,N) associated with Lie superalgebra of type A(M, N). Using this free field ## References SHOWING 1-10 OF 49 REFERENCES We obtain an explicit expression for the defining relation of the deformed WN algebra,${\rm DWA}(\widehat{\mathfrak{sl}}_N)_{q,t}$. Using this expression we can show that, in the q→1 limit,${\rm • Mathematics • 2019 We introduce and study the quantum toroidal algebra $\mathcal{E}_{m|n}(q_1,q_2,q_3)$ associated with the superalgebra $\mathfrak{gl}_{m|n}$ with $m\neq n$, where the parameters satisfy $q_1q_2q_3=1$. • Mathematics • 2006 Abstract.From the defining exchange relations of the $$\mathcal{A}_{{q,p}} {\left( {\widehat{{gl}}_{N} } \right)}$$ elliptic quantum algebra, we construct subalgebras which can be characterized as • Mathematics • 1995 AbstractWe define a quantum-algebra associated to $$\mathfrak{s}\mathfrak{l}_N$$ as an associative algebra depending on two parameters. 
For special values of the parameters, this algebra becomes • Mathematics • 2003 After reviewing the recent results on the Drinfeld realization of the face type elliptic quantum group B_{q,lambda}(sl_N^) by the elliptic algebra U_{q,p}(sl_N^), we investigate a fusion of the • Mathematics • 1998 Starting from bosonization, we study the operator that commute or commute up-to a total difference with of any quantized screen operator of a free field. We show that if there exists a operator in • Mathematics • 2006 We review the deformed W -algebra Wq,t(ŝlN ) and its screening currents. We explicitly construct the Local Integrals of Motion In (n = 1, 2, · · ·) for this deformed W -algebra. We explicitly In this paper, we give defining relations of the affine Lie superalgebras an and defining relations of a super-version of the Drinfeld[D]-Jimbo[J] affine quantized universal enveloping algebras. As a • Mathematics • 1998 Abstract:We propose a q-difference version of the Drinfeld-Sokolov reduction scheme, which gives us q-deformations of the classical -algebras by reduction from Poisson-Lie loop groups. We consider in • Mathematics, Art Journal of Algebra and Its Applications • 2018 Let [Formula: see text] be a finite-dimensional vector space over a field [Formula: see text] of characteristic zero, [Formula: see text] an anti-commutative product on [Formula: see text] and
2023-01-30 21:52:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8177567720413208, "perplexity": 3014.9471785382793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499829.29/warc/CC-MAIN-20230130201044-20230130231044-00464.warc.gz"}
http://mymathforum.com/pre-calculus/342386-why-moving-2-comes-first-then-horizontal-reflection.html
My Math Forum Why moving by 2 comes first then horizontal reflection? October 22nd, 2017, 02:21 AM #1 Newbie   Joined: Oct 2017 From: canada Posts: 3 Thanks: 0 Why moving by 2 comes first then horizontal reflection? Untitled.png Why moving by 2 comes first, then horizontal reflection? Thanks. Last edited by skipjack; November 8th, 2017 at 08:21 AM. October 22nd, 2017, 09:43 AM #2 Senior Member     Joined: Sep 2015 From: USA Posts: 2,009 Thanks: 1042 if you moved by 2 first and then reflected you would end up with $-(x-2) = -x + 2 \neq -x - 2$ November 8th, 2017, 04:45 AM #3 Math Team   Joined: Jan 2015 From: Alabama Posts: 3,244 Thanks: 887 "Move by 2" in which direction? -x - 2 = -(x + 2), so you can "move by 2" to the right on the number line, then "reflect" across the x-axis. For example, if x = 3, moving to the right makes it 5 and reflecting across the x-axis gives -5. But you don't have to do it that way. You could, first, reflect across the x-axis so that x becomes -x, then "move by 2" to the left. If x = 3, reflecting across the x-axis gives -3 and then moving to the left by 2, again -5. Last edited by skipjack; November 8th, 2017 at 08:20 AM. November 8th, 2017, 08:28 AM   #4 Global Moderator Joined: Dec 2006 Posts: 19,183 Thanks: 1647 Quote: Originally Posted by ninjaapple Why moving by 2 . . . Translating what by 2 in what direction? Quote: Originally Posted by ninjaapple . . . then horizontal reflection? Reflection of what in what line?
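A compact way to summarise the thread (this worked check is added here, not quoted from any poster), using the function $f(x) = -x - 2$ that the replies are discussing:

```latex
% shift the input by +2 first, then reflect (negate):
x \longmapsto x + 2 \longmapsto -(x + 2) = -x - 2
% reflect first, then shift the graph 2 to the right (replace x by x - 2):
x \longmapsto -x \longmapsto -(x - 2) = -x + 2 \neq -x - 2
% reflect first and shift 2 to the left instead, and the two orders agree:
x \longmapsto -x \longmapsto -(x + 2) = -x - 2
```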
2018-07-20 10:47:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5960350632667542, "perplexity": 7001.753348695554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591578.1/warc/CC-MAIN-20180720100114-20180720120114-00439.warc.gz"}
https://mathhelpboards.com/threads/met-office-mean-average-temperatures-and-margin-of-error.8336/#post-38863
# Met Office mean average temperatures and margin of error #### UncleBucket ##### New member I have been debating with a few other people on another board, regarding the correct way to calculate the mean average of a list of values, where those values are recorded temperatures. A great many people seem to believe, for example, that if each temperature reading is accurate to +/- 0.5°, the reading should be rounded to the nearest integer. I don't have a problem with that, but when it comes to the calculation of the mean average, we have a difference of opinion. Many people seem to believe that if the list of values, for which the mean average is to be calculated, are all integers, then the mean average itself must also be an integer! This makes no sense at all to me. Another person argues that because the list of values contains "measured approximations", it is not appropriate to express the mean average to 1 or more decimal places, and that any decimal digits must be truncated! I welcome your views on these matters. Just recently, I was browsing the Met Office website, and found a document which gives details of the accuracy of each instrument, and the margin of error when calculating the mean average. I will quote it: The random error in a single thermometer reading is about 0.2°C (1 σ) [Folland et al., 2001]; the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/√60 = 0.03°C and this will be uncorrelated with the value for any other station or the value for any other month. I am not an expert in this field, so could someone please explain to me the validity of that formula and why it is correct, assuming it is. Clearly, the accuracy of the mean average appears to be greater than that of any of the individual values in the original list. Would this be because with a large sample of data, the errors within each instrument reading tend to cancel each other out? #### Klaas van Aarsen ##### MHB Seeker Staff member Welcome to MHB, UncleBucket! I have been debating with a few other people on another board, regarding the correct way to calculate the mean average of a list of values, where those values are recorded temperatures. A great many people seem to believe, for example, that if each temperature reading is accurate to +/- 0.5°, the reading should be rounded to the nearest integer. Not true. A reading should always be made to a digit more than the measurement markers can indicate. The last digit will be somewhat of a guess, but it still improves precision. If the precision is for instance $\pm 0.7°$, it is customary to register the measurement in as many digits as this precison. So you would have for instance $20.1 \pm 0.7°$. On the other hand, when a measurement is given as $21°$ without any precision, it is customary to assume a precision of $\pm 0.5°$. The measurement could then be written as $21.0 \pm 0.5°$. I don't have a problem with that, but when it comes to the calculation of the mean average, we have a difference of opinion. Many people seem to believe that if the list of values, for which the mean average is to be calculated, are all integers, then the mean average itself must also be an integer! This makes no sense at all to me. Agreed. That makes no sense at all. An average of integers will typically be a fractional number. Moreover, a temperature is not an integer to begin with. 
Furthermore, when you take the average of, say, 100 temperatures that are all supposed to measure the same temperature, the effective precision is 10 times more accurate. So if the original measurements have a precision of $\pm 0.5°$, then the mean measurement will have a precision of $\pm 0.05°$. This means the result should be written down with 2 digits after the decimal point. Another person argues that because the list of values contains "measured approximations", it is not appropriate to express the mean average to 1 or more decimal places, and that any decimal digits must be truncated! Not true. For starters, any intermediary results should always have a couple of digits more than the final result to avoid unnecessary rounding errors. The final result should have as many digits as is appropriate for the final precision. A precision is usually specified in 1 significant digit, although in cutting-edge research 2 digits might be used. I welcome your views on these matters. Just recently, I was browsing the Met Office website, and found a document which gives details of the accuracy of each instrument, and the margin of error when calculating the mean average. I will quote it: I am not an expert in this field, so could someone please explain to me the validity of that formula and why it is correct, assuming it is. Clearly, the accuracy of the mean average appears to be greater than that of any of the individual values in the original list. Would this be because with a large sample of data, the errors within each instrument reading tend to cancel each other out? Yes. The errors in the measurements will partially cancel each other out. The precision of an average of $n$ measurements is $\sqrt n$ times more accurate than each individual measurement under a couple of assumptions. The most important assumptions are that the precisions of all measurements are the same and that all measurements have been executed independently from each other. #### Klaas van Aarsen ##### MHB Seeker Staff member All this assumes of course that people care about the precision. Suppose you have a digital thermometer that measures the temperature up to 3 decimal digits, but people only want to know if it is warm or cold, it makes little sense to accurately specify the temperature like a fusspot. #### HallsofIvy ##### Well-known member MHB Math Helper All this assumes of course that people care about the precision. Suppose you have a digital thermometer that measures the temperature up to 3 decimal digits, but people only want to know if it is warm or cold, it makes little sense to accurately specify the temperature like a fusspot. We fusspots object! String that temperature out to as many decimal digits as possible! #### UncleBucket ##### New member And on the other board, still the arguing goes on! It is now being said... As SF (significant figures) says you cannot gain precision better than you least accurate instrument, any global average that includes NOAA data cannot be (following the rules of science taught at all universities in freshman science class) more accurate than 1 degree Fahrenheit. I think that's tosh, for all the reasons we've discussed in this thread so far. #### Klaas van Aarsen ##### MHB Seeker Staff member And on the other board, still the arguing goes on! It is now being said... I think that's tosh, for all the reasons we've discussed in this thread so far. I don't know what this other board is, but can I assume it is not a dedicated math or physics forum? 
There is only so much to discuss before reaching a resolution about something that is as fundamental as this. #### UncleBucket ##### New member I don't know what this other board is, but can I assume it is not a dedicated math or physics forum? There is only so much to discuss before reaching a resolution about something that is as fundamental as this. No, it isn't. It's a board dedicated to global warming denialism, where the findings of organisations like NOAA and NASA are dismissed as a "liberal agenda" or "communist conspiracy". Even the basic rules of maths are distorted to help these people believe what they want to believe.
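The 0.2/√60 ≈ 0.03 °C figure quoted from the Met Office document is just the standard error of the mean. Below is a small self-contained Python sketch that checks it numerically; the 10.0 °C "true" monthly temperature, the random seed and the trial count are made-up simulation choices, not values from the thread.

```python
import math
import random

sigma_single = 0.2   # 1-sigma random error of a single thermometer reading, deg C
n_readings = 60      # two or more readings a day over a month

# Analytic standard error of the monthly mean: sigma / sqrt(n)
se_mean = sigma_single / math.sqrt(n_readings)
print(f"standard error of the monthly mean: {se_mean:.3f} deg C")   # ~0.026

# Monte Carlo check: repeatedly average 60 noisy readings of a fixed 10.0 deg C value
random.seed(1)
true_temp, trials = 10.0, 5000
monthly_means = [
    sum(random.gauss(true_temp, sigma_single) for _ in range(n_readings)) / n_readings
    for _ in range(trials)
]
mean_of_means = sum(monthly_means) / trials
spread = math.sqrt(sum((m - mean_of_means) ** 2 for m in monthly_means) / (trials - 1))
print(f"simulated spread of the monthly means: {spread:.3f} deg C")
```

The simulated spread of the monthly means comes out close to the analytic 0.026 °C, which is the "errors partially cancel" effect described in the replies.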
2022-07-07 03:52:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6730616092681885, "perplexity": 492.7387761129392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00162.warc.gz"}
https://www.xaprb.com/blog/2011/05/16/how-to-make-software-testable/
# How to make software testable I’m going to try to explain how to make software testable in really practical terms. I won’t use words like “dependency injection.” Those things annoy smart programmers and make them stop listening. Here is a pseudocode snippet. There is some function that parses some IP address out of the output of the “ifconfig” command, and some other code that uses this to get an IP address and do something with it. parse_ip_address() { hostname = system.execute("hostname"); ifconfig = system.execute("/sbin/ifconfig"); ip = regex.capture(ifconfig, "/some/regex/" + hostname + "/some/other/regex/"); return ip; } // ... some other code ... ip = parse_ip_address(); // do something with the ip address. This code is extremely hard to test. If someone says “it doesn’t work on my computer,” you can only respond “it works on mine and I can’t reproduce it.” The code relies on the server’s hostname and the output of the ifconfig command, so the only way you can reproduce it is if you get access to your reporter’s computer and run the code there. (Imagine if it relied on the time of day or the date!) Let’s rewrite the code. parse_ip_address(hostname, ifconfig) { ip = regex.capture(ifconfig, "/some/regex/" + hostname + "/some/other/regex/"); return ip; } // ... some other code ... hostname = system.execute("hostname"); ifconfig = system.execute("/sbin/ifconfig"); ip = parse_ip_address(hostname, ifconfig); // do something with the ip address. Now you can write back to the person who reported the issue and say “please send me the output of /sbin/ifconfig, and your hostname.” You can write a test case, verify that it breaks, fix the code, and verify that all of your other tests still pass. That is the absolutely essential core practice in testing: write code in units (be it functions, modules, programs, or something else) that accept input, cause no side effects, and return output. Then write test suites that begin with known input, execute the code, capture the output, and compare it to what is expected. Now you’ve learned in ten minutes what took me many years to learn. When they taught me about software engineering in my Computer Science classes, they didn’t teach me how to test. They said “you must test rigorously!” and moved on to the next topic. They left me with the vague understanding that testing was an advanced practice that takes enormous time and effort. It turns out to be simple – if you start out right. And it saves enormous time and effort. Testing has enabled me to avoid becoming a good programmer. I can’t write good code, but I can write good tests, and with good tests, you can see clearly how broken your code is. I'm Baron Schwartz, the founder and CEO of VividCortex. I am the author of High Performance MySQL and lots of open-source software for performance analysis, monitoring, and system administration. I contribute to various database communities such as Oracle, PostgreSQL, Redis and MongoDB. More about me.
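The post's snippet is deliberately pseudocode; here is one possible concrete rendering in Python (the regex, hostnames and canned ifconfig text are made-up placeholders, not anything from the original post) showing how the refactored function becomes trivially testable with fixed input:

```python
import re
import subprocess
import unittest

def parse_ip_address(hostname: str, ifconfig_output: str) -> str:
    """Pure function: given the hostname and raw ifconfig text, return the IP.

    The regex is only illustrative; a real one would depend on the actual
    ifconfig format being parsed.
    """
    match = re.search(r"inet (?:addr:)?(\d+\.\d+\.\d+\.\d+)", ifconfig_output)
    if not match:
        raise ValueError(f"no IP address found for host {hostname!r}")
    return match.group(1)

def current_ip_address() -> str:
    """Thin, side-effecting wrapper that gathers the inputs and delegates."""
    hostname = subprocess.run(["hostname"], capture_output=True, text=True).stdout.strip()
    ifconfig = subprocess.run(["/sbin/ifconfig"], capture_output=True, text=True).stdout
    return parse_ip_address(hostname, ifconfig)

class ParseIpTest(unittest.TestCase):
    def test_parses_ip_from_canned_output(self):
        canned = "eth0: flags=4163  mtu 1500\n        inet 192.0.2.10  netmask 255.255.255.0\n"
        self.assertEqual(parse_ip_address("example-host", canned), "192.0.2.10")

if __name__ == "__main__":
    unittest.main()
```

The test begins with known input, executes the pure function, and compares the output to what is expected, exactly as the post describes; only the thin wrapper still touches the system.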
2017-02-27 20:22:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19289909303188324, "perplexity": 1878.6788193414154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00243-ip-10-171-10-108.ec2.internal.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=2003f:37068
MathSciNet bibliographic data MR1900702 37E05 (37A25 37A35 37B40) Ruette, Sylvie Mixing $C^r$ maps of the interval without maximal measure. Israel J. Math. 127 (2002), 253–277.
2016-08-25 10:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983792901039124, "perplexity": 12358.95527670811}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293150.24/warc/CC-MAIN-20160823195813-00006-ip-10-153-172-175.ec2.internal.warc.gz"}
https://socratic.org/questions/what-is-the-oxidation-number-for-nitrogen
# What is the oxidation number for nitrogen? Mar 13, 2014 Nitrogen has several oxidation numbers. However, the most common oxidation state is -3 or ${N}^{- 3}$ Nitrogen has as an electron configuration of $1 {s}^{2} 2 {s}^{2} 2 {p}^{3}$ In order to satisfy the rule of octet the nitrogen atom will take on three electrons to become $1 {s}^{2} 2 {s}^{2} 2 {p}^{6}$ In doing so Nitrogen becomes a -3 ion of ${N}^{- 3}$ I hope this was helpful. SMARTERTEACHER
2020-01-19 04:22:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6059795022010803, "perplexity": 2652.842064475267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594209.12/warc/CC-MAIN-20200119035851-20200119063851-00141.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-10-exponents-and-radicals-10-4-dividing-radical-expressions-10-4-exercise-set-page-654/64
# Chapter 10 - Exponents and Radicals - 10.4 Dividing Radical Expressions - 10.4 Exercise Set: 64 $\dfrac{15}{\sqrt{30}}$ #### Work Step by Step Rationalizing the numerator, the given expression, $\dfrac{3\sqrt{10}}{2\sqrt{3}} ,$ is equivalent to \begin{array}{l} \dfrac{3\sqrt{10}}{2\sqrt{3}}\cdot\dfrac{\sqrt{10}}{\sqrt{10}} \\\\= \dfrac{3\sqrt{10^2}}{2\sqrt{30}} \\\\= \dfrac{3(10)}{2\sqrt{30}} \\\\= \dfrac{30}{2\sqrt{30}} \\\\= \dfrac{15}{\sqrt{30}} .\end{array}
2018-08-20 13:59:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9701521992683411, "perplexity": 8411.176511581878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216453.52/warc/CC-MAIN-20180820121228-20180820141228-00692.warc.gz"}
https://socratic.org/questions/what-is-the-leading-term-leading-coefficient-and-degree-of-this-polynomial-f-x-3-2
# What is the leading term, leading coefficient, and degree of this polynomial f(x) = -3x^3 + 4x^2 + 3x - 1? The leading term is $- 3 {x}^{3}$ , the leading coefficient is $- 3$ and the degree is $3$
2022-05-29 00:08:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3513195216655731, "perplexity": 141.07616744681133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00727.warc.gz"}
http://www.fightfinance.com/?t=tailing_the_hedge
# Fight Finance Question 598: future, tailing the hedge, cross hedging The standard deviation of monthly changes in the spot price of lamb is $0.015 per pound. The standard deviation of monthly changes in the futures price of live cattle is $0.012 per pound. The correlation between the spot price of lamb and the futures price of cattle is 0.4. It is now January. A lamb producer is committed to selling 1,000,000 pounds of lamb in May. The spot price of live cattle is $0.30 per pound and the June futures price is $0.32 per pound. The spot price of lamb is $0.60 per pound. The producer wants to use the June live cattle futures contracts to hedge his risk. Each futures contract is for the delivery of 50,000 pounds of cattle. How many live cattle futures should the lamb farmer sell to hedge his risk? Round your answer to the nearest whole number of contracts. An equity index fund manager controls a USD500 million diversified equity portfolio with a beta of 0.9. The equity manager expects a significant rally in equity prices next year. The market does not think that this will happen. If the fund manager wishes to increase his portfolio beta to 1.5, how many S&P500 futures should he buy? The US market equity index is the S&P500. One year CME futures on the S&P500 currently trade at 2,155 points and the spot price is 2,180 points. Each point is worth $250. The number of one year S&P500 futures contracts that the fund manager should buy is:
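The page does not show its answers here; the sketch below simply applies the standard minimum-variance cross-hedge ratio h* = ρ·σ_S/σ_F and the usual beta-adjustment formula for index futures to the stated inputs, so treat the printed numbers as one reading of the problem rather than the site's official solutions.

```python
# Q598: cross-hedging lamb with live-cattle futures
rho = 0.4            # correlation between lamb spot and cattle futures price changes
sigma_spot = 0.015   # std dev of monthly changes in lamb spot price ($/lb)
sigma_fut = 0.012    # std dev of monthly changes in cattle futures price ($/lb)
exposure_lbs = 1_000_000
contract_size_lbs = 50_000

h_star = rho * sigma_spot / sigma_fut                    # minimum-variance hedge ratio = 0.5
n_contracts = h_star * exposure_lbs / contract_size_lbs  # = 10
print(f"hedge ratio h* = {h_star:.2f}, sell about {round(n_contracts)} cattle contracts")

# Second question: raising portfolio beta with S&P500 futures
portfolio_value = 500e6
beta_now, beta_target = 0.9, 1.5
futures_price_points = 2155
point_value = 250

n_index_futures = (beta_target - beta_now) * portfolio_value / (futures_price_points * point_value)
print(f"buy about {round(n_index_futures)} S&P500 futures")   # roughly 557
```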
2019-06-27 06:18:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18582327663898468, "perplexity": 6646.160129846765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000894.72/warc/CC-MAIN-20190627055431-20190627081431-00156.warc.gz"}
https://mrblog.nl/2013/11/working-with-jekyll/
# Working with Jekyll and Org-Mode

Most of my notetaking and task management is in org-mode so it makes sense to use that as the basic format of my postings too. This is usually done by publishing a project from org-mode into the location where jekyll keeps its files and letting jekyll convert that into something publishable. I'm using a slightly different setup which is more effective for blog posting. Publishing with org-mode turned out harder than I thought. The publish process is pretty demanding in org-mode and you only end up with raw documents that still need to be processed by jekyll. It occurred to me that http://github.com has the ability to render org-mode documents directly and perhaps that library could be used to turn it into a jekyll plugin so I could use the org-mode format directly. Turns out there was a recent commit which mentioned an org-mode converter plugin; long story short: installed it and never looked back. Having jekyll convert org-mode files directly saves the whole publishing configuration step in Emacs, which would otherwise be needed from within org-mode. The converter is simple; it uses an org-ruby call to convert the org-file to html and that's it really:

```ruby
module Jekyll
  # Convert org-mode files.
  require 'org-ruby'
  class OrgConverter < Converter
    safe true

    def setup
      # No-op
    end

    def matches(ext)
      ext =~ /org/i
    end

    def output_ext(ext)
      ".html"
    end

    def convert(content)
      setup
      Orgmode::Parser.new(content).to_html
    end
  end
end
```

I can now just write org-mode files with a frontmatter and they'll end up automatically as blog postings. As yaml frontmatter needs to come first in the file to make jekyll happy this can't be hidden in an org-mode construct like a comment block or something else that org-mode itself ignores. This makes it harder to use the blog postings for anything else than jekyll because the frontmatter will get in the way; exporting the file to PDF for example. There is obviously room for improvement, but this simple plugin directly gives a workable system. To have a reference document for writing I created a test org-mode file, with the rendered result here. This file helps to check what org-mode constructs render into something useful and to verify their visual layout. Not everything worked as I had hoped, but given the amount of complexity that got eliminated I'm quite happy with it. Issues that I found in the rendering:

- the headers start at level 1 which is probably 1 or 2 levels too high for my purpose; I haven't found a way to correct this yet. I probably should file a feature request for this;
- footnotes do not work, which I would use to keep links nicely at the bottom of an article;
- some rendering is ugly (blockquotes for example), but that's probably not a direct consequence of the org-mode converter;
- there are only a couple of org-mode environments supported;
- the use of liquid tags that jekyll uses is somewhat cumbersome.

I was pleasantly surprised by the code highlighting though, which worked out of the box for me. The next step is finding or making some helper functions in emacs lisp to support working with drafts and publishing.
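For reference, a post written against this setup might look roughly like the following; the file would live under something like `_posts/` with Jekyll's usual date-prefixed name, and the title, tags and body here are invented. The YAML block at the top is what jekyll reads, and everything below it is plain org-mode for the converter plugin.

```org
---
layout: post
title: "Example post written in org-mode"
tags: [jekyll, org-mode]
---

* A headline rendered by org-ruby

Some body text with /emphasis/, =inline code= and a short list:

- first item
- second item
```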
2021-08-05 23:12:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3731600344181061, "perplexity": 3585.7160310406825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152085.13/warc/CC-MAIN-20210805224801-20210806014801-00005.warc.gz"}
https://stats.stackexchange.com/questions/139690/marginal-joint-and-conditional-distributions-of-a-multivariate-normal
# Marginal, joint, and conditional distributions of a multivariate normal Let $Y$ ~ $MVN_3(\mu, \Sigma)$ where $\mu = (5,6,7)$ and $\Sigma = \begin{bmatrix}2 & 0 & 1\\0 & 3 & 2\\1&2&4\end{bmatrix}$ Find (a) The marginal distribution of $Y_1$ (b) The joint distribution of $Y_3$ and $Y_1$ (c) The conditional distribution of $Y_3$ given $Y_1$. Now, my experience with probability prior to this class is limited to Sheldon Ross's intro book and Wackerley, Mendenhall, and Schaeffer, which I'm currently using for my Into Math Stat class. I've never before done conditional, joint, or marginal densities with matrices, nor have I covered them for variables that are dependent.... So please forgive the embarassing attempts I have made to solve this. It also does not help that the class this problem is for does not have a textbook, just some notes frantically scrambled onto a whiteboard twice a week while our prof reminds us that he hates teaching and would rather be researching... My attempts (a) I can write $Y_1 = AY + b$, where $b=(0,0,0)^T$ and $A = (1,0,0)^T$. Thus: $Y_1$~$MVN(\cdot,\cdot)$ because it is a linear transformation of a multivariate normal. Thus: $E[Y_1]=E[AY+b]=AE[Y]+b=(1,0,0)^T(5,6,7)^T=5$ $Cov(AY) = ACov(Y)A^T = A\begin{bmatrix}2 & 0 & 1\\0 & 3 & 2\\1&2&4\end{bmatrix}A^T$ This becomes: $(2,0,1)\begin{bmatrix}1\\0\\0\end{bmatrix}=2$ However, knowing that $Y_1$ and $Y_3$ are not independent makes me question this result... (b) The joint distribution of $Y_1$ and $Y_3$ I basically did the same thing as above but with $A = \begin{bmatrix}1 & 0 & 0\\0&0&1\end{bmatrix}$. This yielded $W = (Y_1, Y_3)^T$ ~ $MVN(\mu=(5,7)^T, \Sigma = \begin{bmatrix}2 & 1\\1&4\end{bmatrix})$ (c) The conditional distribution of $Y_3$ given $Y_1$ This part I'm honestly at a loss for...unless I'm just supposed to take the integral of the density function for $(Y_1, Y_3)$ divided by the density function for $(Y_1)$, i.e., conditional = $\int f_{Y_1,Y_3}/f_{Y_1} dy_1$. If I am supposed to do this, how do I get the actual joint density function and perform the integration and division? The only formulas I've been provided for multivariate normal distributions involve matrices, and I have never seen matrix integration. I apologize if my comprehension is lacking, my classmates and I have been thrown into this material with no assistance from our professor. • Forget about integration for a moment. You know all these distributions are normal or bivariate normal, so it remains only to find their parameters. For (a), what are the mean and variance of $Y_1$? For (b), what are the means, variances, and covariances of $(Y_1, Y_3)$? (c) is a little harder but can be addressed the same way, understanding that its parameters will be functions of $Y_1$--and they can be figured out from your answer to (b). Or, if you know about ordinary least squares regression, you already know the answer to (c) (perhaps in disguise). – whuber Feb 28 '15 at 0:31 • Hey whuber, thanks for the speedy response! I think I follow you, but I'm not quite sure. $\mu_{Y_1} = 5$, $Var(Y_1) = 2$, $\mu_{Y_1,Y_3} = (5,7)$, $Var(Y_1,Y_3) = \begin{bmatrix}2 & 1\\1&4\end{bmatrix}$, where Cov($Y_1,Y_3$) = 1. – Patrick Feb 28 '15 at 0:33 • That's correct. You've done the easy part :-). Presumably you have already seen how to compute conditional distributions with bivariate Normals, which is what you have now reduced (c) to. – whuber Feb 28 '15 at 0:38 • I have. 
I believe I would use $E[Y_3|Y_1]=\mu_{Y_3} + Cov(Y_1,Y_3)*(Y_1-\mu_{Y_1}) / Var(Y_1) = \frac{9+Y_1}{2}$ and $V(Y_3 | Y_1) = Var(Y_3) - Cov(Y_1,Y_3)^2 / Var(Y_1) = 7/2$ – Patrick Feb 28 '15 at 0:43 • Now that you have figured out the answer all by yourself (following the hints from @whuber), please write up your work neatly and post it as an answer to your own question. Otherwise, your question will pop up month after month as an unanswered question that needs some attention, and nobody will be inclined to write an answer. – Dilip Sarwate Feb 28 '15 at 14:29 First we need to find the joint distribution of $(Y_1, Y_3)$. Since $Y\sim MVN( \mu, \Sigma)$ we know that any subset of the components of $Y$ is also $MVN$. Thus we use $$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$ And see that $$AY = (Y_1, Y_3)^T$$ $$\Sigma_{(Y_1,Y_3)} = \begin{pmatrix} 2 & 1 \\ 1 & 4 \\ \end{pmatrix}$$ $$\mu_{(Y_1,Y_3)} = (5,7)^T$$ Therefore, using the theorem for conditional distributions of a multivariate normal yields: \begin{align} \operatorname{Var}(Y_3|Y_1) &= \operatorname{Var}(Y_3) - \frac{\operatorname{Cov}(Y_1,Y_3)^2}{\operatorname{Var}(Y_1)} \\ &= 4 - \frac{1}{2} = \frac{7}{2} \end{align}
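A quick numerical cross-check of these results is sketched below; the means and covariance entries come straight from the problem statement, while the sample size, seed and the 0.05 conditioning tolerance are arbitrary simulation choices.

```python
import numpy as np

mu = np.array([5.0, 6.0, 7.0])
Sigma = np.array([[2.0, 0.0, 1.0],
                  [0.0, 3.0, 2.0],
                  [1.0, 2.0, 4.0]])

# (a)/(b): sub-vectors of a MVN are MVN with the corresponding sub-blocks
idx = [0, 2]                          # Y1 and Y3
print(mu[idx])                        # [5. 7.]
print(Sigma[np.ix_(idx, idx)])        # [[2. 1.] [1. 4.]]

# (c): conditional distribution Y3 | Y1 = y1 from the bivariate-normal formulas
def y3_given_y1(y1):
    cond_mean = mu[2] + Sigma[0, 2] / Sigma[0, 0] * (y1 - mu[0])   # = (9 + y1) / 2
    cond_var = Sigma[2, 2] - Sigma[0, 2] ** 2 / Sigma[0, 0]        # = 7 / 2
    return cond_mean, cond_var

print(y3_given_y1(5.0))               # (7.0, 3.5)

# Simulation check: condition on Y1 falling in a thin slab around 5
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(mu, Sigma, size=400_000)
near = draws[np.abs(draws[:, 0] - 5.0) < 0.05]
print(near[:, 2].mean(), near[:, 2].var())   # close to 7 and 3.5
```

The empirical conditional mean and variance of $Y_3$ among draws with $Y_1 \approx 5$ land near 7 and 3.5, matching $(9+Y_1)/2$ and $7/2$.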
2019-06-20 21:33:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9273566603660583, "perplexity": 397.94147768100095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999273.79/warc/CC-MAIN-20190620210153-20190620232153-00406.warc.gz"}
https://www.physicsforums.com/threads/basic-thermo-confusion.331782/
Basic Thermo confusion 1. Aug 20, 2009 VillageIdiot Hello... first post here so take it easy on me. From the Book: "Aquatic Systems Engineering: Devices and How They Function, Selection Installation Operation" by P.R. Escobal In the section about heat loss (section 12.9) the author gives the basic conductivity equation in the form: Q = (A * 2.54) * k * T * time/d Where: Q = calories A = Area in square inches (using 2.54 to convert to centimeters) k = cal cm/sec in Celcius T = Temperature differential between Hot and Cold side of panel in degrees Celcius d = thickness in inches So using an example of A = 3505 square inches k = .002 (thermal conductivity of glass) T = 4.44 degress celcious (8 degrees F) t = 1 second d = .375 inches Q = 3505 * 2.54 * .002 * 4.44 * (1/.375) Q = 251.85 So far so good. The result is consistent with any other form of the conductivity equation I find. I tried a few different variation and got the same result. So here is where I am confused. The equation outputs 251.85 calories. I presume that because I chose 1 second as the TIME, then thermal loss to the aquarium is 251.84 calories/sec? 1 Watt = 4.184 calories/sec So 251.84 calories/sec = ~1054 Watts HOWEVER: The author indicates that to convert the result to Watts, you must multiply Q by 0.23889, giving a result of 60.16 Watts! If I use the conductivity equation in the form found here: http://hyperphysics.phy-astr.gsu.edu/Hbase/thermo/heatcond.html I get the result of 1055.5 Watts or 3601.62 btu/h There is no "time" input. I would assume that it is an HOUR based on the BTU/h form fields. That exactly matches the results from the authors equation before his conversion to Watts. What am I missing here? Where is the authors multiplier to get Watts coming from? Oddly, if I divide the BTU/h result by 60 I get the authors Wattage number. But I don't understand where the factor of 60 is coming from. The authors equation is in calories/sec and the reference equation from the website is in btu/h both give the same answer that is a factor of 60 from the authors Wattage. Last edited: Aug 20, 2009 2. Aug 20, 2009 sylas one calorie is 4.184 Joules. Therefore 1 Watt is 1/4.184 = 0.239 cal/sec Added in edit: but I note you've gone back to a correct form in the second line. I'll look at it all again... Added again: I note you are using 2.54 to go from square inches to square centimeters. That should be 2.542 Cheers -- sylas 3. Aug 21, 2009 sylas Starting over. Let's be real scientists and put everything in SI units. One thing is a bit odd. The units for thermal conductivity are energy per time per distance per temperature. Glass is typically around 1 W/(m.K), which is 0.00239 cal/(s.cm.K); not far off your value. But note that the the units are an inverse distance, where you've written cal cm/sec per K. Anyhow, moving on: A = 3505*0.02542 m2 = 2.261 m2 k = 0.002 * 4.184 * 100 = 0.837 Js-1m-1K-1 T = 4.44 K t = 1 s d = 0.375 * 0.0254 = 0.009525 m Hence Q = A k T t / d = 2.261 * 0.837 * 4.44 * 1 / 0.009525 = 882 J which is 210.8 cal You've already got the t = 1 in there, so remove that to get the same number with units cal/sec Cheers -- sylas PS. There also seems to be an error in your multiplication. Last edited: Aug 21, 2009 4. Aug 21, 2009 VillageIdiot Forgive the stupidity, but I am still lost. The author lists Q as calories. The time entered into the equation is 1 second. The result is Q = 251.85 calories in one second. That equates to 1053.698 Watts (unless I am not understanding something). 
0.239 cal/sec = 1 Watt 251.85 cal/sec = 1053.698 Watts The result is exactly confirmed by the calculator found here: http://www.unitconversion.org/power/watts-to-calories-th--per-second-conversion.html Again, this also EXACTLY matches the output from another form of the conductivity equation found here: Note that the outout is in both Watts and BTU/h http://hyperphysics.phy-astr.gsu.edu/Hbase/thermo/heatcond.html What am I missing? 5. Aug 21, 2009 sylas What result is given in the book? Where did you get 251.85? In the original post you wrote: Q = 3505 * 2.54 * .002 * 4.44 * (1/.375)​ When I do that multiplication, I get 210.816. That has units of cal/sec, given units for the five numbers, which is 882 Watts. Cheers -- sylas 6. Aug 21, 2009 VillageIdiot Sorry, I rounded in my posts... I used more decimal places in the spreadsheet for k (0.00238933) . Small changes in k equate to fairly large swings in heat flow :) The book does not give a result, only the equation and the additional instructions that: His outcome for a 3/8" thick 12" x 24 " 60" (WHL) tank with an insulated bottom and 1/8" glass top and a 15F delta T is 169.25 Watts. Using the equation he posted, the result is fairly close to that (he takes into account the insulative properties of the air gap between the top of the water and the lid for the example) So the 169.25 Watts makes sense with regard to his text and that unexplaned Watts conversion. My Calcs Using K = .002 for the sides and top and k= .0005 for the bottom (wood stand) with the straight thermal conductivity equation, I get a little over 3000 Watts (per hour?) of thermal loss to the room. Not even in the ballpark of his answer. Again, the results from his base equation match that of the .edu link I posted above. This author is a well respected engineer who has authored several physics texts. SORRY for all of the edits... it is 2:30 AM and the brain is a bit slow! Last edited: Aug 21, 2009 7. Aug 21, 2009 VillageIdiot I have made serious edits to the post above. I at first used the 4.44 DeltaT instead of the new 8.3 DeltaT So with the listed glass tank and a 15 Degree F (8.3C) temperature differential between the tank and room, he predicts a 169 Watt thermal loss to the room and the standard conductivity equation predicts 3000 Watt thermal loss to the room. Somebody has to be WAAAAY off. 8. Aug 21, 2009 VillageIdiot please move this back to the physic forum! I am not a student, nor is this coursework or homework. I am an adult seeking help understanding a physics concept, not a student looking for homework help 9. Aug 21, 2009 sylas I found an article online which uses numbers similar to your example. See Thermodynamics for the Reef Aquarist. This example uses: A = 3505 in2 (same as you have used) d = 0.375 in (same as you have used) T = 8 °F = 4.44 °C (same as you gave originally) k = 0.578 BTU/hr /ft /°F Converting units. 1 BTU/hr = 0.293 W 1 ft = 0.3048 m 1 °F = 0.555... °C (for a temperature difference) Hence k = 0.578 * 0.293 / 0.048 * 1.8 = 1 W m-1 K-1 which is 1/4.184/100 = 0.00239 cal s-1 cm-1 K-1 So they are using a value of k that is the same as yours -- though in your first post you used 0.002 which was 20% too small. I am thinking this must be a standard example. Mathematically, you should be aware that in fact there ISN'T a big dependence on k. The problem is that you rounded it by 20%, which makes a 20% error. When multiplying, the important thing is to have the same number of significant figures; not the same number of decimal places. 
So rounding 0.0024 to 0.002 is just as bad as rounding 240 to 200. The article calculates a heat loss as 3595.79 BTU/hr, which is 1053 W. Same as you and I obtained. So your calculation is correct. I do not know what the book is doing. Perhaps that are looking at some other terms for heat loss, such as an evaporation loss; or allowing for the heat input from a pump. I am just guessing. Cheers -- sylas PS. It doesn't make much difference which sub-forum this is in. A question like this is well suited to the homework forums, and you get useful input here. It's all still physicsforums. PPS. I suddenly get why you had 0.002. You transcribed numbers from your spreadsheet, which was set to display 3 decimal places. You did explain this, but I was slow to see what you mean. So your calculation is correct. Last edited: Aug 21, 2009 10. Aug 21, 2009 VillageIdiot Lets take a step back here because I think the very basic question and reasoning is being lost. I don't wish to sound ungrateful, but I am struggling to understand this information and this exchange has only confused the issue. My question is not being answered and you are isntead distracted by rather insignificant details. With all due respect, if you could answer my technical question, I could learn. If you are unable to, Then I will start a new thread, as this one is somewhat sidetracked and tossed in the homework forum. In hopes of getting back to the question at hand, can we please forget about significant digits, rounding and improper order of written units. Please carefully consider the points here. 1) The author of the book is a well respected engineer. One of us is clearly wrong. I would imagine it is me, but many sources on the web would indicated otherwise. 2) The author has not taken any heat gain into consideration and has also not considered other forms of heat loss such as evaporation, radiation. So please, lets ignore that they exist. We are talking about CONDUCTION from a GLASS tank full of water into an air filled room and the Wattage needed to keep it at a constant temperatre in that room. No more, no less. 2) The equation he uses is the Fourier equation in the form (Q = (A * 2.54) * k * T * time/d). He uses it in a form that outputs cal/sec. The links posted above use a form of the equation that outputs btu/hr and Watts. They all provide the EXACT same result. 3) The author claims that the RESULT of the equation Q = (A * 2.54) * k * T * time/d needs to be multiplied by 0.23889 to give an output in WATTS. That is to size a heater to keep that tank at the target temperature, it would need to be W watts where W = 0.23889 (Q = A k T t / d ) So let me ask very clearly WHY IS THE AUTHOR MULTIPLYING BY 0.23889? Please note: I have not used color to denote anything more attention to my actual question. I am not attempting to be rude or flip, just clear. 11. Aug 21, 2009 sylas He's just converting units, from cal/sec to Watts. 0.23889 is 1/4.186 Have a look at my very first reply, number 2. This was the point I singled out for you there. I have seen both 4.184 and 4.186 used as the conversion factor; I think 4.186 is correct. (see postscript) Cheers -- sylas PS. There are different notions of "calorie" in use. 4.184 is a "thermochemical calorie" 4.1868 is an "Internation Table" calorie. Definitions described at NIST http://physics.nist.gov/Pubs/SP811/footnotes.html#f09". Last edited by a moderator: Apr 24, 2017 12. Aug 21, 2009 VillageIdiot So again, what I am missing here? 
You would multiply cal/sec by 4.186 to get watts, not by 0.23889 1 Watt = 0.23889 cal/sec = 3.41 btu/h 1050 Watt = 251 cal/sec = 3583 btu/h That matches every calculator and example I can find online. Yet, the author multiplies by 0.23990 instead of 4.186 and gets an estremely small number that does not match any of the online calculators. Is the author wrong or are the online calculators wrong, or am i simply missing something very obvious? P.S. I was aware that there are "thermochemical (TH)" and "International (IT)" constants differ sligtly... and also that when we talk food we talk kcal even though we call it "Calories" :) 13. Aug 22, 2009 sylas Quite right... to get from cal/s to Watts you should divide by 0.23899, or else multiply by 4.184. That's actually what I did in my units conversion, and what is done in the article I quoted above. The number they get is correct. If you calculate a value in cal/s, you multiply by 4.184 to get Watts. I don't think you are missing anything. If your book takes a value that is in cal/s, and multiplies by 0.23899, then it is wrong. Cheers -- sylas 14. Aug 22, 2009 VillageIdiot That leads me to question the entire validity of the information in the book. How could the author make such a blatant error and use that error to derive an entire formula for sizing heaters and other equipment? I am simply stunned. 15. Aug 22, 2009 sylas It's not that unusual for a genius to slip up in a problem. Most textbooks will usually have a couple of errors in them that slip past review; I don't think there is such a thing as a perfect text book free of simple errors. Doing a division rather than a multiplication might be a basic error; but it is not because of a lack of understanding, so much as a slip of focus. We all make basic mistakes from time to time, and my guess is that this is what happened here. Cheers -- sylas 16. Aug 22, 2009 VillageIdiot Fully understood... but he used the basic premise to develop a rather complex integration to calculate heat loss and heater sizing, devoting an entire chapeter to the subject and using the results as the basis for fundamentals in other chapeters. There is a second edition to the book and I am curious to know if any of the errors have been corrected. Both the 1st and 2nd edition are long out of print and cost upwards of $100 at used book outlets on the internet. Nedless to say, I am not spending$100 to find out :) Thank you for taking the time to respond to this thread. I thought my understanding of these matters was fairly informed and asked the question here to ensure that I was not fully ignorant of the basic physics. It would appear that I have stumbled upon an error by a well regarded name in the hobby. His work is referenced any time "physics" with regard to aquatic systems is the subect. I shudder to think that so many people have been misled due to a basic error in converting units.
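For anyone wanting to reproduce the number the thread keeps coming back to, here is a small stand-alone sketch of the bare conduction calculation in SI units (glass panel only, k taken as 1.0 W/(m·K) ≈ 0.00239 cal/(s·cm·K); no evaporation, radiation, lid or air-gap corrections, so this is not the book's heater-sizing method, just the Fourier term everyone is comparing):

```python
# Bare Fourier conduction through the glass, in SI units
IN_TO_M = 0.0254
CAL_TO_J = 4.184

area_m2 = 3505 * IN_TO_M ** 2      # 3505 in^2  -> ~2.261 m^2
thickness_m = 0.375 * IN_TO_M      # 3/8 in     -> ~0.009525 m
k_glass = 1.0                      # W / (m K)
delta_T = 8 * 5.0 / 9.0            # 8 F difference -> ~4.44 C

heat_flow_W = k_glass * area_m2 * delta_T / thickness_m
print(f"conductive loss: {heat_flow_W:.0f} W")                   # ~1055 W
print(f"               = {heat_flow_W / CAL_TO_J:.0f} cal/s")    # ~252 cal/s
print(f"               = {heat_flow_W * 3.412:.0f} BTU/hr")      # ~3600 BTU/hr
```

These are the same ~1055 W / ~252 cal/s / ~3600 BTU/hr figures obtained from the hyperphysics calculator earlier in the thread; dividing by 4.184 a second time (instead of multiplying) is exactly what produces the much smaller wattage attributed to the book.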
2018-03-24 22:39:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6772196292877197, "perplexity": 1168.5398374793765}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651007.67/warc/CC-MAIN-20180324210433-20180324230433-00684.warc.gz"}
http://chemwiki.ucdavis.edu/Physical_Chemistry/Quantum_Mechanics/Quantum_States_of_Atoms_and_Molecules/9._The_Electronic_States_of_the_Multielectron_Atoms/The_Schr%C3%B6dinger_Equation_For_Multi-Electron_Atoms
# The Schrödinger Equation For Multi-Electron Atoms In this chapter, we will use the helium atom as a specific example of a multi-electron atom. Figure 9.2 shows a schematic representation of a helium atom with two electrons whose coordinates are given by the vectors $$r_1$$ and $$r_2$$. The electrons are separated by a distance $$r_{12} = |r_1-r_2|$$. The origin of the coordinate system is fixed at the nucleus. As with the hydrogen atom, the nuclei for multi-electron atoms are so much heavier than an electron that the nucleus is assumed to be the center of mass. Fixing the origin of the coordinate system at the nucleus allows us to exclude translational motion of the center of mass from our quantum mechanical treatment. Figure 9.2 a) The nucleus (++) and electrons (e-) of the helium atom. b) Equivalent reduced particles with the center of mass (approximately located at the nucleus) at the origin of the coordinate system. Note that $\mu_1$ and $\mu_2 \approx m_e$. The Hamiltonian operator for the hydrogen atom serves as a reference point for writing the Hamiltonian operator for atoms with more than one electron. Start with the same general form we used for the hydrogen atom Hamiltonian $\hat {H} = \hat {T} + \hat {V} \tag {9-1}$ Include a kinetic energy term for each electron, a potential energy term for the attraction of each negatively charged electron to the positively charged nucleus, and a potential energy term for the mutual repulsion of each pair of negatively charged electrons. The He atom Hamiltonian is $\hat {H} = -\frac {\hbar ^2}{2m_e} (\nabla ^2_1 + \nabla ^2_2) + V (r_1) + V (r_2) + V (r_{12}) \tag {9-2}$ where $V(r_1) = -\frac {2e^2}{4 \pi \epsilon _0 r_1} \tag {9-3}$ $V(r_2) = -\frac {2e^2}{4 \pi \epsilon _0 r_2} \tag {9-4}$ $V(r_{12}) = +\frac {e^2}{4 \pi \epsilon _0 r_{12}} \tag {9-5}$ Note that the electron-electron term enters with a positive sign because like charges repel. Equation (9-2) can be extended to any atom or ion by including terms for the additional electrons and replacing the He nuclear charge +2 with a general charge Z; e.g. $V(r_1) = -\frac {Ze^2}{4 \pi \epsilon _0 r_1} \tag {9-6}$ Equation (9-2) then becomes $\hat {H} = -\frac {\hbar ^2}{2m_e} \sum _i \nabla ^2_i + \sum _i V (r_i) + \sum _{i \ne j} V (r_{ij}) \tag {9-7}$ Exercise 9.1 Referring to Equation (9-7), explain the meaning of the three summations and write expressions for V(ri) and V(rij). Exercise 9.2 Write the equivalent of Equation (9-2) for a boron atom. Each electron has its own kinetic energy term in Equations (9-2) and (9-7). For an atom like sodium there would be $$\nabla ^2_1 , \nabla ^2_2 , \cdots , \nabla ^2_{11}$$. The other big difference between single electron systems and multi-electron systems is the presence of the V(rij) terms which contain 1/rij, where rij is the distance between electrons i and j. These terms account for the electron-electron repulsion that we expect between like-charged particles. Given what we have learned from the previous quantum mechanical systems we've studied, we predict that exact solutions to the multi-electron Schrödinger equation in Equation (9-7) would consist of a family of multi-electron wavefunctions, each with an associated energy eigenvalue. These wavefunctions and energies would describe the ground and excited states of the multi-electron atom, just as the hydrogen wavefunctions and their associated energies describe the ground and excited states of the hydrogen atom. We would predict quantum numbers to be involved, as well.
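As a concrete illustration of Equation (9-7) (added here, not part of the original chapter text), writing the sums out for the three-electron lithium atom (Z = 3), with the electron-electron repulsion counted once per distinct pair, gives:

```latex
\hat{H}_{\mathrm{Li}} =
  -\frac{\hbar^2}{2m_e}\left(\nabla_1^2 + \nabla_2^2 + \nabla_3^2\right)
  \;-\; \frac{3e^2}{4\pi\epsilon_0}\left(\frac{1}{r_1} + \frac{1}{r_2} + \frac{1}{r_3}\right)
  \;+\; \frac{e^2}{4\pi\epsilon_0}\left(\frac{1}{r_{12}} + \frac{1}{r_{13}} + \frac{1}{r_{23}}\right)
```

Each additional electron contributes one kinetic term, one nuclear-attraction term, and a repulsion term for every pair it forms with the other electrons, which is why the number of $1/r_{ij}$ terms grows so quickly with atomic number.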
The fact that electrons interact through their Coulomb repulsion means that an exact wavefunction for a multi-electron system would be a single function that depends simultaneously upon the coordinates of all the electrons; i.e., a multi-electron wavefunction, $$\Psi (r_1, r_2, \cdots r_i)$$. The modulus squared of such a wavefunction would describe the probability of finding the electrons (though not specific ones) at a designated location in the atom. Alternatively, $$n_e |\Psi |^2$$ (with $$n_e$$ the number of electrons) would describe the total amount of electron density that would be present at a particular spot in the multi-electron atom. All of the electrons are described simultaneously by a multi-electron wavefunction, so the total amount of electron density represented by the wavefunction equals the number of electrons in the atom.

Unfortunately, the Coulomb repulsion terms make it impossible to find an exact solution to the Schrödinger equation for many-electron atoms and molecules, even when there are only two electrons. The most basic approximations to the exact solutions involve writing a multi-electron wavefunction as a simple product of single-electron wavefunctions, and obtaining the energy of the atom in the state described by that wavefunction as the sum of the energies of the one-electron components. $\psi (r_1, r_2, \cdots , r_i) = \varphi _1 (r_1) \varphi _2 (r_2) \cdots \varphi _i(r_i) \tag {9-8}$ By writing the multi-electron wavefunction as a product of single-electron functions, we conceptually transform a multi-electron atom into a collection of individual electrons located in individual orbitals whose spatial characteristics and energies can be separately identified. For atoms these single-electron wavefunctions are called atomic orbitals. For molecules, as we will see in the next chapter, they are called molecular orbitals. While a great deal can be learned from such an analysis, it is important to keep in mind that such a discrete, compartmentalized picture of the electrons is an approximation, albeit a powerful one.
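As a rough numerical illustration of the orbital-product idea (not from the original text; the hydrogen-like energy formula $$E_n = -Z^2 \cdot 13.6\ \text{eV}/n^2$$ and the experimental helium value of about $$-79.0$$ eV are standard reference numbers), summing two independent 1s electron energies for helium overestimates the binding, because electron-electron repulsion is ignored:

```python
# Independent-electron estimate of the helium ground-state energy.
# Each electron is treated as if it alone orbited the Z = 2 nucleus,
# so the total energy is just the sum of two hydrogen-like 1s energies.

RYDBERG_EV = 13.6057  # hydrogen-like ground-state energy scale, eV

def hydrogen_like_energy(Z, n):
    """Energy of a single electron in a hydrogen-like ion, in eV."""
    return -Z**2 * RYDBERG_EV / n**2

# Two electrons, both in the 1s orbital of He (Z = 2):
independent_estimate = 2 * hydrogen_like_energy(Z=2, n=1)

experimental = -79.0  # eV, sum of the two ionization energies of helium

print(f"Independent-electron estimate: {independent_estimate:.1f} eV")  # about -108.8 eV
print(f"Experimental ground-state energy: {experimental:.1f} eV")
# The ~30 eV discrepancy is the neglected electron-electron repulsion term.
```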
2014-10-31 11:19:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5656189918518066, "perplexity": 397.65114952062237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899632.42/warc/CC-MAIN-20141030025819-00202-ip-10-16-133-185.ec2.internal.warc.gz"}
https://meridian.allenpress.com/radiation-research/article-abstract/71/2/338/46455/Intestinal-Proliferation-after-Multiple-Fractions
The histological appearance of intestinal mucosa in $\mathrm{BALB/c}^{+}$ mice was studied after fractionated irradiation. Daily doses of 200 and 350 rad were given to the abdomen to investigate recovery mechanisms. After 200 rad/day (1000 rad/week), the average number of nuclei per tangential section of crypts increased in two steps and reached 117, 112, 151, 140, and 145% of the control value after 2, 3, 4, 5, and 6 weeks. The average numbers of mitoses and of [$^{3}$H]thymidine-labeled nuclei per crypt also increased to 155, 155, 160, 207, and 221% and 190, 140, 131, 192, and 230% of the controls, respectively. These data suggest an increase in the stem cell compartment, which is estimated at 1.4 times its normal value after 4 to 6 weeks. The length of the mitotic cycle $T_{c}$ was measured using the usual [$^{3}$H]thymidine labeling technique after 5000 rad/5 weeks, given in daily fractions of 200 rad. $T_{c}$ was reduced from 14 to 9 hr, this reduction mainly concerning the G1 phase (from 5 to 1 hr) and to a smaller extent the S phase (from 7 to 6 hr). Intestinal mucosa tends to compensate for cell destruction by two mechanisms: a reduction in the duration of the mitotic cycle and an increase in the size of the stem cell compartment. At 200 rad/day, an important compensation is achieved and the animals are able to tolerate the irradiation for about 6 weeks. At 350 rad/day, however, increased cell production can no longer compensate for cell destruction and the animals die after the third week.
2021-03-01 03:44:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5368366241455078, "perplexity": 1811.0644345773726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00154.warc.gz"}
https://dbfin.com/logic/enderton/chapter-2/section-2-2-truth-and-models/problem-11-solution/
# Section 2.2: Problem 11 Solution

Working problems is a crucial part of learning mathematics. No one can learn... merely by poring over the definitions, theorems, and examples that are worked out in the text. One must work part of it out for oneself. To provide that opportunity is the purpose of the exercises.
James R. Munkres

For each of the following relations, give a formula which defines it in . (The language is assumed to have equality and the parameters , , and ). (a) . (b) . (c) . (d) . Digression: This is merely the tip of the iceberg. A relation on is said to be arithmetical if it is definable in this structure. All decidable relations are arithmetical, as are many others. The arithmetical relations can be arranged in a hierarchy; see Section 3.5.

The difference between this exercise and the examples on page 91 seems to lie in the language used: here we are missing two symbols that are used essentially in the examples. That is, the purpose of the exercise seems to be to show that those two symbols do not expand the set of what can be defined in the structure; in particular, we show that using our restricted language we can still define both 0 and the successor of any natural number.

(a) ( iff for every , ). (b) ( iff for every , ). (c) ( iff for some , and ). (d) ( iff for some , and ).
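As an illustration of the kind of formula involved (my own example, not necessarily the exercise's intended items): in the structure $(\mathbb{N}; +, \cdot)$, the set $\{0\}$ and the successor relation are definable without using the symbols $0$ and $S$:

$x = 0 \iff \forall y\, (x + y = y), \qquad n = S(m) \iff \exists z\, \big(\forall y\, (z \cdot y = y) \;\wedge\; n = m + z\big).$

The first formula picks out the additive identity, and in the second the subformula $\forall y\, (z \cdot y = y)$ forces $z = 1$, so the whole formula says $n = m + 1$.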
2021-06-23 12:29:13
{"extraction_info": {"found_math": true, "script_math_tex": 31, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9468526244163513, "perplexity": 512.0515735994516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488538041.86/warc/CC-MAIN-20210623103524-20210623133524-00373.warc.gz"}
https://www.physicsforums.com/threads/selection-rules-for-hydrogen.838042/
# Selection rules for Hydrogen

1. Oct 16, 2015

### temp050505

Good afternoon, Do the selection rules have a condition on $\Delta n$? I have not found a website or a book that shows transitions between $2S_{1/2}$ and $2P_{3/2}$, which is why I was wondering whether transitions with $\Delta n = 0$ that respect the other selection rules are allowed.

2. Oct 16, 2015

### blue_leaf77

$\Delta n = 0$ is not a restriction for a transition to occur; in fact, when one takes the spin-orbit effect into account, the levels of constant $n$ split further and transitions among them can happen.

3. Oct 16, 2015

Thanks !
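A small sketch (my own addition, not from the thread) that applies the standard electric-dipole selection rules $\Delta l = \pm 1$ and $\Delta j = 0, \pm 1$ (with no condition on $\Delta n$) to the transition discussed above; the function name and the second test case are illustrative:

```python
from fractions import Fraction

def e1_allowed(n1, l1, j1, n2, l2, j2):
    """Standard electric-dipole (E1) selection rules for one-electron atoms:
    delta l = +/-1, delta j = 0 or +/-1 (j=0 -> j=0 forbidden); no rule on n."""
    if abs(l1 - l2) != 1:
        return False
    if abs(j1 - j2) > 1:
        return False
    if j1 == 0 and j2 == 0:
        return False
    return True

# 2S_1/2 -> 2P_3/2 : n unchanged, l goes 0 -> 1, j goes 1/2 -> 3/2
print(e1_allowed(2, 0, Fraction(1, 2), 2, 1, Fraction(3, 2)))   # True: allowed
# 2S_1/2 -> 3S_1/2 : l unchanged, so forbidden regardless of delta n
print(e1_allowed(2, 0, Fraction(1, 2), 3, 0, Fraction(1, 2)))   # False
```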
2017-08-18 19:46:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3488830327987671, "perplexity": 799.204610201519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105086.81/warc/CC-MAIN-20170818175604-20170818195604-00059.warc.gz"}
https://latex-beamer.com/tutorials/title-page/
## 1. Create a simple title page The following code creates a simple title page in LaTeX using Beamer. It includes a title, author name and a talk date: % Creating a simple Title Page in Beamer \documentclass{beamer} % Theme choice: \usetheme{AnnArbor} % Title page details: \author{latex-beamer.com} \date{\today} \begin{document} % Title page frame \begin{frame} \titlepage \end{frame} \end{document} Compiling this code yields: • We have chosen a predefined theme in Beamer, known as AnnArbor which is loaded using the command: \usetheme{AnnArbor} • \title{}: is used to set a title to the presentation • \author{}: is used to add authors’ names to the talk • \date{}: is used to print the date of the talk, using \today will print the compilation day of the presentation. ## 2. Add a subtitle to the beamer title page This can be achieved by adding \subtitle{My-subtitle} to the document preamble. Updating the above code and compiling it, we get the following output: ## 3. Title page with multiple authors In the previous example, we used \author{} to add the presenter name to the title page. Using the same command, we can add more authors. Check the following code: % Multiple authors \author{First~Author \and Second~Author \and Third~Author \and Fourth~Author \and Fifth~Author} Using this line code in the above code, we get the following result: We have three points to highlight about the above line code: • Point 1: We used \and command between authors names. • Point 2: We added ~ to keep the first name and last name of each author together, otherwise a new line is automatically created to get a sufficient space. • Point 3: Authors’ names, presentation title and the date are printed at the bottom of the presentation (footer). These can be modified easily which is the purpose of the “Modify footer details” section. Here is an example with the affiliation “Online Beamer Tutorials“: % Add author affiliation \documentclass{beamer} % Theme choice: \usetheme{AnnArbor} % Title page details: \author{latex-beamer.com} \institute{Online Beamer Tutorials} \date{\today} \begin{document} % Title page frame \begin{frame} \titlepage \end{frame} \end{document} Compiling this code yields the following result: ## 5. Add several authors with different affiliations If there are several affiliations or more than one author with different affiliations, we add the command \inst{} inside \author{} and \institute{} commands. Here is an illustrative example of two authors with different affiliations: % Two authors with different affiliations \author{First Author\inst{1} \and Second Author\inst{2}} \institute{\inst{1} Affiliation of the 1st author \and \inst{2} Affiliation of the 2nd author} Here is the obtained result: ## 6. Modify footer details As we mentioned above, authors names and affiliations, presentation title and date are printed at the bottom of the presentation. If text is too long and doesn’t fit well with the footer length or If you would like to put something else, we can add brackets to the command in question with desired text. So we use: • \title[This one is printed in the footer]{This is original title of the talk} • \author[short text printed in the footer]{authors names of the talk} • \institute[another short text]{authors affiliation}: The text “another short text” will be added between pair of round brackets to the footer (author section). • \date[Anything else]{2021}: The text “Anything else” will be added at the bottom right corner of presentation. 
Here is an example: % Modify footer text: \subtitle{My-subtitle} \author[Left text]{latex-beamer.com} \date[Right text]{\today} Output: If you would like to remove details from the footer, we can use empty brackets, eg. \author[]{Authors name}, \date[]{2021}, etc. ## Summary • The commands \title{}, \subtitle{}, \author{}, \institute{} and \date{} allow us to add a title, subtitle, authors names and their affiliations, and the date of the talk, respectively. We should put these commands in the preamble of the document. • To create a title page, we need to put \titlepage command inside a frame environment. • Using \title[short title]{Presentation title} will print short title at the bottom of the presentation, depending on the used theme. • The line code \title[]{Presentation title} will remove the talk title from the footer. This applies also to \date{}, \author{} and \institute{} commands. Next Lesson: 02 Add and Position a Logo in Beamer
2022-10-05 02:33:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6662136316299438, "perplexity": 5225.394337353365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00508.warc.gz"}
https://physics.stackexchange.com/questions/88669/physics-of-the-inverted-bottle-dispenser
# Physics of the inverted bottle dispenser When you invert a water-bottle in a container, the water rises and then stops at a particular level --- as soon as it touches the hole of the inverted bottle. This will happen no matter how long your water-bottle is. I understand this happens, because once the water level touches the hole, air from outside cant go inside and therefore there is nothing to displace the water that falls out of the container. Now according to the laws of pressure ---- the pressure at the water level must be same everywhere --- whether it's inside the water bottle or outside. And that must be equal to the atmospheric pressure. Therefore the pressure of the water column + air column inside the inverted bottle must be equal to the atmospheric pressure. What I dont understand is, no matter how long a bottle you take, the water level will always stop at the hole. So that means that no matter how long a bottle you take, the pressure of the water column + air column inside the water bottle will be equal to the atmospheric pressure. How could this be possible? Also I'd like to let you know that, if you pierce the upper part of the bottle with a small pin, then the water level rises and overflows out of the container. I'm assuming air from outside rushes in and pushes the water out. It took me quite some time to clearly understand the experiment you're describing. Actually, pouring a full bottle in a container is a quite intriguing thing. Consider the following starting configuration : This of course is an unstable situation, as the pressure $P_0$ cannot be at the same time the pressure of the air in the bottle, and the atmospheric pressure since the height of water in the bottle is higher than the level in the container. So we should quickly get to this configuration instead : You'll agree that along the red line, the pressure is $P_0$, so what is the pressure $P_1$ ? Using simple hydrostatics, $P_1 = P_0 - \rho \, g \, H$ Notice that in the picture as well as in this calculation, we consider the height $H$ to not have changed, i.e. very little water has moved out of the bottle into the container. We'll see why now. What is now the volume of air in the bottle ? Using the law of perfect gases $P_0 * V_0 = P_1 * V_1$, hence $$\frac{V_1-V_0}{V_0} = \frac{1}{\frac{P_0}{\rho \, g \, H} - 1} = \frac{1}{\frac{10^5}{10^3 \, 10 \, 10^{-1}} - 1} \approx 1 \%$$ For this numerical estimation I took a water height in the bottle of $10 \, cm$. The variation in volume is so small, it will be hardly noticeable ! The reason why pouring the bottle is intriguing is that it empties itself in bursts. A bubble of air gets in, and water gets out at once. But if you do it in a controlled way, you will end up in the initial configuration I described, and from that point onwards, no air can get in. The variation of volume of the air in the bottle we just obtained obviously corresponds to a volume of water that gets out of the bottle, but again, it is small, and hardly noticeable. What if you take a longer bottle ? gigacyan is right, something will happen after a while. Recall that I did the calculation assuming the amount of water exiting the bottle is very small, this assumption is now false. If you have a significant height of water, the pressure will be enough to push out quite a bit of water out of the bottle, in which case the pressure of the air in the bottle will go down, and the level of water in the container go up. 
If you consider a very wide container, its level will stay roughly the same, but the level of water in your bottle will go down. A simple calculation leads to: $$P_0-\rho \, g \, h_{final}+\rho \, g \, (H-h_{final})=P_0$$ Hence $h_{final} = H/2$, which is the point when the low pressure in the air is able to lift the weight of the water underneath, down to the free surface. Several interesting remarks can now be made. To begin with, the pressure in the air keeps on dropping, $P_1 = P_0 - \rho \, g \, h_{final}$. Nothing prevents it from going to negative values, which happens when $h_{final} = \frac{P_0}{\rho \, g} = 10 \, m$. That's where this famous value of 10 meters comes from. Now, if you think about trees, at first you may imagine they rely on capillary action to carry sap to their leaves, but that can't be the case, as the pressure drops too much after 10 vertical meters against gravity. Any presence of air would make the wood crumple under its own applied pressure. Which means there is absolutely no air whatsoever in the sap canals of a tree (a.k.a. xylem). The trees rely principally on another mechanism to pump up sap, known as evaporation. This easily produces (highly) negative pressures in the sap, and the actual limit to the size of a tree is the point when this pressure is small enough that a cavity of water vapor spontaneously appears in its canals, through cavitation. Pull hard enough on water, and you will create two interfaces and evaporate some of the liquid ! This cavitation pressure is around $-120 \, MPa$. This catastrophic failure is know as embolism, and is also a bad health condition for humans (a gas bubble in a blood vessel). • To understand this intuitively, when the bottle surface touches the opening of the inverted bottle, the water will try to fall down (out of the inverted bottle). Since Air cannot get in, pressure inside the bottle will be decreased (due to vacuum). This pressure when reduced below P0, the water will start flowing back into the bottle. It will reach a stable state when the downward pressure will be equal to upward pressure at the contact point i.e. P1 + pgh = P0. For very large h, it would be difficult to match the pressure due to weight of the water inside the bottle and hence never stable. – Shishir Gupta Apr 13 at 9:24 Your mistake is to assume that the water will stop "no matter how long a bottle you take". It will not - you just need a longer bottle than you expect. To be precise, you need a column of water 10 meters high to counteract atmospheric pressure. • Could you provide a source and/or calculation of the 10 m high column of water? This might help explain the misconception the OP has. – Kyle Kanos Dec 2 '13 at 13:45 • @gigacyan Thats not my point. If you look at the diagram, you must agree that the pressure of the water column + the air column above it must be equal to the atmospheric pressure? Now reduce the size of the bottle. And repeat the experiment. The pressure of the water column + air column will still be equal to the atmospheric pressure. That is wrecking my nerves out. – Black Dagger Dec 7 '13 at 6:14 • @Kyle Kanos Tagging you as well. – Black Dagger Dec 7 '13 at 6:15 • @vardhanamdaga: But the air pressure inside the bottle is less that atmospheric pressure and it depends on the height of the water column! For "larger bottle" the air pressure will be lower, so the total pressure will add up to equal atmospheric pressure. 
And, as I said, atmospheric pressure is quite high and it can counteract the pressure of a water column that is 10 m high. – gigacyan Dec 7 '13 at 21:54 • @gigacyan So you are saying that, as long as the water column is not more than 10 m, the combination of air column + water column will adjust itself to be equal to the atmospheric pressure? Am I right? You decrease the water column, then the air pressure will increase, and vice versa? – Black Dagger Dec 8 '13 at 17:17

Water stops draining from the jar into the dispenser once it forms an interface, as draining any more water would result in the formation of a vacuum in the jar, because no air can rush into the jar to displace the water as it has an interfacial lock. Consider the water level above the interface $= h$ and the water level below the interface $= x$; now $$P_{surface}= P_{atm} + d\cdot g \cdot h$$ $$P_{dispenser~bottom} = P_{atm} + d\cdot g\cdot (h+x)$$ Now since $P_{bottom} > P_{surface}$, no further water drains (flow from lower to higher potential/pressure is not possible). Also note that the air rushes in through the tap when you operate the system to take out water, not through the interface. And the pressure at the downside of the tap when you open it is just $= P_{atm}$.

• Hello and welcome to Stack Exchange! Since these answers are supposed to be easily readable, it is nicer to use LaTeX for formulas and to refrain from using slang. I edited your answer accordingly, please have a look. – Martin Jun 19 '15 at 12:01 • To extend Martin's comment, a brief tutorial page on MathJax used on this site can be found here. – Kyle Kanos Jun 19 '15 at 12:07

Other answers are right, but let me put it without math: Water can't come out of the bottle if air can't go in. (Except: if the water in the bottle is so tall that water can come out even if air can't go in.)
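A quick numerical cross-check of the hydrostatic estimates in the first answer (a sketch; the 10 cm water height is the value used in that answer, and the rounded physical constants are my own choices):

```python
# Hydrostatic estimates for the inverted-bottle problem.
rho = 1.0e3      # water density, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2
P0 = 1.013e5     # atmospheric pressure, Pa
H = 0.10         # height of water remaining in the bottle, m (as in the answer)

# Pressure of the trapped air above the water column in the bottle:
P1 = P0 - rho * g * H
# Relative expansion of the trapped air (isothermal, P0 * V0 = P1 * V1):
dV_over_V = P0 / P1 - 1
# Height of water column that atmospheric pressure alone can support:
h_max = P0 / (rho * g)

print(f"P1 = {P1:.3e} Pa")
print(f"Relative volume change of trapped air: {dV_over_V:.1%}")  # about 1 %
print(f"Barometric water height: {h_max:.2f} m")                  # about 10 m
```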
2019-09-23 18:04:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5416426658630371, "perplexity": 402.75684208624745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577478.95/warc/CC-MAIN-20190923172009-20190923194009-00314.warc.gz"}
http://myelectrical.com/notes/entryid/209/introduction-to-current-transformers
Introduction to Current Transformers

February 8th, 2013

High Voltage SF6 Current Transformers. Image Source: Courtesy Siemens

Current transformers (CTs) are used to convert high level currents to a smaller, more reasonable level for use as inputs to protection relays and metering equipment. Within electrical systems, current transformers are essential to ensure the correct functioning and control of equipment and for providing operational data and information. This introductory note looks at the construction of current transformers and their specification.

There are two broad categories of current transformer:

Measuring CTs - provide signals to meters and instruments

Protection CTs - provide signals to protective relays to enable correct operation under steady state and transient conditions.

Current transformers work on a similar principle to normal voltage transformers. Two (or more) windings are wound round a magnetic core. Current flowing in one winding [the primary] creates a magnetic field which drives current in the other winding [the secondary]. The ratio of the primary turns to the secondary turns provides the current scaling.

Example: a 600:5 ratio CT has 120 turns on the secondary for every turn on the primary. A primary current of 600 A would cause 5 A to flow in the secondary.

The physical construction of a current transformer can be as simple as one primary winding and one secondary winding on a core. Quite often the construction is more complex, with several secondary windings providing different protection and instrumentation needs.

Specification of current transformers typically considers the following:

1. turns ratio - of the primary to secondary current (i.e. 1200/1)
2. burden - the normal load in VA that the CT can supply
3. accuracy factors - the accuracy limits of the CT (both steady state and transient)
4. physical configuration - number of primary or secondary windings, size, shape, etc.

Safety: if a CT secondary is not connected to any load, then it should be short circuited. If the secondary of the CT were left open during operation, you would effectively have a transformer with one turn on the primary and many turns on the secondary. Large and potentially dangerous voltages would be induced at the secondary terminals.

Specification of Current Transformers

Current Transformer Accuracy

Accuracy of a current transformer is measured by the composite error. This is defined as the difference between the ideal secondary RMS current and the actual secondary current. It takes into account current errors, phase errors and harmonic errors.

Current transformers intended for protection applications need to cover a wide range of current. The current value up to which they will maintain accuracy is the 'accuracy limit current'. The ratio of the accuracy limit current to the rated current is the 'accuracy limit factor'.

Measuring CT Accuracy Class

Accuracy for measuring current transformers is achieved by allocating an accuracy class to the CT. For each class the standards define a maximum allowable current error and phase displacement error for different load conditions.
Measuring CT accuracy classes (± current/ratio error in % at 5, 20, 50, 100 and 120% of rated current; ± phase displacement error in minutes at 5, 20, 100 and 120% of rated current):

Class 0.1: ratio error 0.4, 0.2, -, 0.1, 0.1; phase error 15, 8, 5, 5 (precision measurements)
Class 0.2: ratio error 0.75, 0.35, -, 0.2, 0.2; phase error 10, 15, 10, 10 (precision measurements)
Class 0.5: ratio error 1.5, 0.75, -, 0.5, 0.5; phase error 30, 45, 30, 30 (high grade kWhr meters)
Class 1: ratio error 3, 1.5, -, 1.0, 1.0; phase error 60, 90, 60, 60 (general measurements)
Class 3: ratio error 3 at 50% and 120% of rated current only; no phase error limit (general measurements)
Class 5: ratio error 5 at 50% and 120% of rated current only; no phase error limit (approximate measurements)

Protection CT Accuracy Class

Protection current transformers are defined as either 5P or 10P. For each of these the current error, phase displacement error and accuracy limit factor are defined:

Class 5P: current error ±1%, phase displacement error ±60 minutes, accuracy limit factor 5
Class 10P: current error ±3%, phase displacement error not specified, accuracy limit factor 10

Class 'P' current transformers are generally used for overcurrent protection applications. For more demanding applications, additional specification is required. In this instance the maximum useful emf is often used - specified as the 'knee-point' of the excitation curve (the point at which a further 10% rise in emf requires a 50% increase in excitation current).

In addition to the above, other current transformer specifications are also in widespread use:

• P - general purpose, with accuracy defined by composite error and steady state primary current
• TPS - low leakage, with performance defined by secondary excitation and turns ratio error
• TPX - defined by peak instantaneous error during specified transient duty
• TPY - as per TPX, but remanent flux limited to 10%
• TPZ - breaker failure application CT with large air gap

Name Plate Ratings

Typical Current Transformer Nameplate. Image: Courtesy Schneider Electric

All current transformers should have a name plate attached. The image shows an example of a typical nameplate for a current transformer with one primary and two secondary windings.

Sizing of Current Transformers

The correct sizing and specification of current transformers is essential to ensure trouble free operation of protection and instrumentation systems. There is a full electrical note dedicated to this at: How to Size Current Transformers

About the author: Steven has over twenty five years experience working on some of the largest construction projects. He has a deep technical understanding of electrical engineering and is keen to share this knowledge.
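A short sketch (my own illustration, not from the note) of how the ratio, burden and accuracy limit factor described above are used in practice; the 0.5 Ω burden impedance and the accuracy limit factor of 10 (e.g. a 5P10 CT) are assumed values, not taken from the text:

```python
# Basic current-transformer bookkeeping for a 600:5 CT.
primary_rated = 600.0     # A
secondary_rated = 5.0     # A
ratio = primary_rated / secondary_rated   # 120:1 current (and turns) ratio

def primary_current(i_secondary):
    """Primary current inferred from the measured secondary current."""
    return i_secondary * ratio

# Burden: apparent power the CT must drive into the relay/meter plus wiring.
burden_impedance = 0.5    # ohm, assumed total secondary burden
burden_va = secondary_rated**2 * burden_impedance   # VA at rated current

# Accuracy limit factor: multiple of rated current up to which a
# protection CT stays within its stated error limits.
accuracy_limit_factor = 10          # assumed, e.g. a 5P10 CT
accuracy_limit_current = accuracy_limit_factor * primary_rated

print(f"Measured 4.2 A secondary -> {primary_current(4.2):.0f} A primary")
print(f"Burden at rated current: {burden_va:.1f} VA")
print(f"Accuracy maintained up to {accuracy_limit_current:.0f} A primary")
```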
2015-11-30 06:10:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5069583058357239, "perplexity": 6362.541038021413}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461113.77/warc/CC-MAIN-20151124205421-00182-ip-10-71-132-137.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/316933/how-to-calculate-int-dfrac-et-1tdt
# How to calculate $\int \dfrac {e^{t}} {1+t}dt$

The original problem is to solve $\dfrac {df} {dt}+\dfrac {t} {1+t}f=at$ (i.e. $a \cdot t$), with IC: $f\left( 0\right) =f_{0}$. I used the method of variation of constants and got the indefinite integral $\int \dfrac {e^{t}} {1+t}dt$, which is similar to the indefinite integral $\int \dfrac {e^t} {t}dt$. I used integration by parts and got an infinite series $e^{t}(t^{-1}-t^{-2}+\ldots +\left( n-1\right) !\left( -1\right) ^{n+1}t^{-n}+\ldots)$, and then I got stuck. I don't know what I should do next.

- is that $e^t$ ?? – Santosh Linkha Feb 28 '13 at 15:37
- Your equation is incomplete. It has nothing on the other end of =. Unless that is a*t and not at. – Ishan Banerjee Feb 28 '13 at 15:37
- @experimentX Yes – Jebei Feb 28 '13 at 15:39
- Write $1+t = u$; you get $t = u - 1$, so $e^t = e^u/e$. Put the $e$ outside, and you get $\int \frac{e^u}{u} du$; unfortunately the integral is non-elementary. Check this out ... however with bounds, you can evaluate it numerically. – Santosh Linkha Feb 28 '13 at 15:48
- Taylor expansion? – user39843 Feb 28 '13 at 15:49

$\int \dfrac{e^t}{1+t}dt=\int \dfrac{e^v}{ev}dv=\dfrac{Ei(v)}{e}+C=\dfrac{Ei(t+1)}{e}+C$

For properties of this function see, http://mathworld.wolfram.com/ExponentialIntegral.html http://functions.wolfram.com/GammaBetaErf/ExpIntegralEi/
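A numerical sanity check of the closed form given in the answer (a sketch of my own; it assumes SciPy's `scipy.special.expi`, which implements the exponential integral Ei, and arbitrary bounds chosen so that $1+t>0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi   # exponential integral Ei

def antiderivative(t):
    """Ei(t + 1) / e, the antiderivative proposed in the answer."""
    return expi(t + 1.0) / np.e

a, b = 0.0, 2.0   # arbitrary integration bounds, keeping 1 + t > 0

numeric, _ = quad(lambda t: np.exp(t) / (1.0 + t), a, b)
closed_form = antiderivative(b) - antiderivative(a)

print(numeric, closed_form)   # the two values agree to quadrature accuracy
```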
2015-07-29 07:23:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.950638473033905, "perplexity": 2082.2326697979447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986148.56/warc/CC-MAIN-20150728002306-00241-ip-10-236-191-2.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/430455/infrared-divergencies-in-yang-mills-theory
# Infrared divergencies in Yang-Mills theory I'm trying to better understand the nature of infrared divergencies in YM theory; for now, I'm only interested in soft divergences. The usual explanation one is given about the origin of IR divergences is related to Landau equations and the Coleman-Norton picture, which are supposed to show the "physicality" of IR divergences. Indeed, most examples of IR divergences explicitly require on shell momenta in order to appear: more specifically, when one sets to zero certain momenta, these trigger other internal propagator to go on shell and these provide the extra powers in the denominator which produce the divergence. The typical example is the one loop correction to the vertex in QED, where the integrand, up to factors which are not relevant in determining the divergence, behaves as: $$\frac{1}{k^2 (p-k)^2 (p'+k)^2}=\frac{1}{k^2 k\cdot(p-k) k\cdot(p'+k)} \approx \frac{1}{k^4} \ \ \text{when} \ k\rightarrow 0.$$ (Here $$p$$ and $$p'$$ are the external fermion momentum and k the photon loop momentum). Then the fourth power of the momentum in the denominator cancels partially with the three powers provided by the measure in polar coordinates yielding a logarithmic divergence. This principle works perfectly as long as the number of loops is low. However, in general, one can think of subdiagrams of a general diagram which could produce a divergence without requiring any internal momentum to be on shell. An example could be: All propagators are here intended to be internal, and can be set to zero without triggering another momentum to go on shell. The diagram can be though as a subdiagram nested deep in a much larger diagram. Now power-counting for this diagram, when all momenta are sent to zero, yields a degree of divergence $$D=2 I - N_3 - 3L=1$$ where $$I$$ is the total number of propagators, $$N_3$$ the number of 3-vertices (which carry a power of momentum in the numerator) and $$L$$ the number of loops. Thus this configuration seems to yield an infrared soft divergence. Is this reasoning correct? Is there a way to eliminate the divergence? I have the impression that either I have made some embarrassing mistake or there is some argument based on the fact that this singularity when thought in complex space is not a pinch singularity and can thus be eliminated with an appropriate choice of contour. In this case, can someone direct me to appropriate literature? Thank you for any help.
2019-09-18 07:52:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387376308441162, "perplexity": 367.8789306132213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573258.74/warc/CC-MAIN-20190918065330-20190918091330-00524.warc.gz"}
https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/pixfmt-007.html
# 2.6. Detailed Colorspace Descriptions¶ ## 2.6.1. Colorspace SMPTE 170M (V4L2_COLORSPACE_SMPTE170M)¶ The SMPTE 170M standard defines the colorspace used by NTSC and PAL and by SDTV in general. The default transfer function is V4L2_XFER_FUNC_709. The default Y’CbCr encoding is V4L2_YCBCR_ENC_601. The default Y’CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: SMPTE 170M Chromaticities Color x y Red 0.630 0.340 Green 0.310 0.595 Blue 0.155 0.070 White Reference (D65) 0.3127 0.3290 The red, green and blue chromaticities are also often referred to as the SMPTE C set, so this colorspace is sometimes called SMPTE C as well. The transfer function defined for SMPTE 170M is the same as the one defined in Rec. 709. L' = -1.099(-L)^{0.45} + 0.099 \text{, for } L \le-0.018 L' = 4.5L \text{, for } -0.018 < L < 0.018 L' = 1.099L^{0.45} - 0.099 \text{, for } L \ge 0.018 Inverse Transfer function: L = -\left( \frac{L' - 0.099}{-1.099} \right) ^{\frac{1}{0.45}} \text{, for } L' \le -0.081 L = \frac{L'}{4.5} \text{, for } -0.081 < L' < 0.081 L = \left(\frac{L' + 0.099}{1.099}\right)^{\frac{1}{0.45} } \text{, for } L' \ge 0.081 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding: Y' = 0.2990R' + 0.5870G' + 0.1140B' Cb = -0.1687R' - 0.3313G' + 0.5B' Cr = 0.5R' - 0.4187G' - 0.0813B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. This conversion to Y’CbCr is identical to the one defined in the ITU BT.601 standard and this colorspace is sometimes called BT.601 as well, even though BT.601 does not mention any color primaries. The default quantization is limited range, but full range is possible although rarely seen. ## 2.6.2. Colorspace Rec. 709 (V4L2_COLORSPACE_REC709)¶ The ITU BT.709 standard defines the colorspace used by HDTV in general. The default transfer function is V4L2_XFER_FUNC_709. The default Y’CbCr encoding is V4L2_YCBCR_ENC_709. The default Y’CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: Rec. 709 Chromaticities Color x y Red 0.640 0.330 Green 0.300 0.600 Blue 0.150 0.060 White Reference (D65) 0.3127 0.3290 The full name of this standard is Rec. ITU-R BT.709-5. Transfer function. Normally L is in the range [0…1], but for the extended gamut xvYCC encoding values outside that range are allowed. L' = -1.099(-L)^{0.45} + 0.099 \text{, for } L \le -0.018 L' = 4.5L \text{, for } -0.018 < L < 0.018 L' = 1.099L^{0.45} - 0.099 \text{, for } L \ge 0.018 Inverse Transfer function: L = -\left( \frac{L' - 0.099}{-1.099} \right)^\frac{1}{0.45} \text{, for } L' \le -0.081 L = \frac{L'}{4.5}\text{, for } -0.081 < L' < 0.081 L = \left(\frac{L' + 0.099}{1.099}\right)^{\frac{1}{0.45} } \text{, for } L' \ge 0.081 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_709 encoding: Y' = 0.2126R' + 0.7152G' + 0.0722B' Cb = -0.1146R' - 0.3854G' + 0.5B' Cr = 0.5R' - 0.4542G' - 0.0458B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The default quantization is limited range, but full range is possible although rarely seen. The V4L2_YCBCR_ENC_709 encoding described above is the default for this colorspace, but it can be overridden with V4L2_YCBCR_ENC_601, in which case the BT.601 Y’CbCr encoding is used. 
Two additional extended gamut Y’CbCr encodings are also possible with this colorspace: The xvYCC 709 encoding (V4L2_YCBCR_ENC_XV709, xvYCC) is similar to the Rec. 709 encoding, but it allows for R’, G’ and B’ values that are outside the range [0…1]. The resulting Y’, Cb and Cr values are scaled and offset: Y' = \frac{219}{256} * (0.2126R' + 0.7152G' + 0.0722B') + \frac{16}{256} Cb = \frac{224}{256} * (-0.1146R' - 0.3854G' + 0.5B') Cr = \frac{224}{256} * (0.5R' - 0.4542G' - 0.0458B') The xvYCC 601 encoding (V4L2_YCBCR_ENC_XV601, xvYCC) is similar to the BT.601 encoding, but it allows for R’, G’ and B’ values that are outside the range [0…1]. The resulting Y’, Cb and Cr values are scaled and offset: Y' = \frac{219}{256} * (0.2990R' + 0.5870G' + 0.1140B') + \frac{16}{256} Cb = \frac{224}{256} * (-0.1687R' - 0.3313G' + 0.5B') Cr = \frac{224}{256} * (0.5R' - 0.4187G' - 0.0813B') Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The non-standard xvYCC 709 or xvYCC 601 encodings can be used by selecting V4L2_YCBCR_ENC_XV709 or V4L2_YCBCR_ENC_XV601. The xvYCC encodings always use full range quantization. ## 2.6.3. Colorspace sRGB (V4L2_COLORSPACE_SRGB)¶ The sRGB standard defines the colorspace used by most webcams and computer graphics. The default transfer function is V4L2_XFER_FUNC_SRGB. The default Y’CbCr encoding is V4L2_YCBCR_ENC_601. The default Y’CbCr quantization is full range. The chromaticities of the primary colors and the white reference are: sRGB Chromaticities Color x y Red 0.640 0.330 Green 0.300 0.600 Blue 0.150 0.060 White Reference (D65) 0.3127 0.3290 These chromaticities are identical to the Rec. 709 colorspace. Transfer function. Note that negative values for L are only used by the Y’CbCr conversion. L' = -1.055(-L)^{\frac{1}{2.4} } + 0.055\text{, for }L < -0.0031308 L' = 12.92L\text{, for }-0.0031308 \le L \le 0.0031308 L' = 1.055L ^{\frac{1}{2.4} } - 0.055\text{, for }0.0031308 < L \le 1 Inverse Transfer function: L = -((-L' + 0.055) / 1.055) ^{2.4}\text{, for }L' < -0.04045 L = L' / 12.92\text{, for }-0.04045 \le L' \le 0.04045 L = ((L' + 0.055) / 1.055) ^{2.4}\text{, for }L' > 0.04045 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding as defined by sYCC: Y' = 0.2990R' + 0.5870G' + 0.1140B' Cb = -0.1687R' - 0.3313G' + 0.5B' Cr = 0.5R' - 0.4187G' - 0.0813B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. This transform is identical to one defined in SMPTE 170M/BT.601. The Y’CbCr quantization is full range. The AdobeRGB standard defines the colorspace used by computer graphics that use the AdobeRGB colorspace. This is also known as the opRGB standard. The default transfer function is V4L2_XFER_FUNC_ADOBERGB. The default Y’CbCr encoding is V4L2_YCBCR_ENC_601. The default Y’CbCr quantization is full range. The chromaticities of the primary colors and the white reference are: Color x y Red 0.6400 0.3300 Green 0.2100 0.7100 Blue 0.1500 0.0600 White Reference (D65) 0.3127 0.3290 Transfer function: L' = L ^{\frac{1}{2.19921875}} Inverse Transfer function: L = L'^{(2.19921875)} The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding: Y' = 0.2990R' + 0.5870G' + 0.1140B' Cb = -0.1687R' - 0.3313G' + 0.5B' Cr = 0.5R' - 0.4187G' - 0.0813B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. This transform is identical to one defined in SMPTE 170M/BT.601. 
The Y’CbCr quantization is full range. ## 2.6.5. Colorspace BT.2020 (V4L2_COLORSPACE_BT2020)¶ The ITU BT.2020 standard defines the colorspace used by Ultra-high definition television (UHDTV). The default transfer function is V4L2_XFER_FUNC_709. The default Y’CbCr encoding is V4L2_YCBCR_ENC_BT2020. The default R’G’B’ quantization is limited range (!), and so is the default Y’CbCr quantization. The chromaticities of the primary colors and the white reference are: BT.2020 Chromaticities Color x y Red 0.708 0.292 Green 0.170 0.797 Blue 0.131 0.046 White Reference (D65) 0.3127 0.3290 Transfer function (same as Rec. 709): L' = 4.5L\text{, for }0 \le L < 0.018 L' = 1.099L ^{0.45} - 0.099\text{, for } 0.018 \le L \le 1 Inverse Transfer function: L = L' / 4.5\text{, for } L' < 0.081 L = \left( \frac{L' + 0.099}{1.099}\right) ^{\frac{1}{0.45} }\text{, for } L' \ge 0.081 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_BT2020 encoding: Y' = 0.2627R' + 0.6780G' + 0.0593B' Cb = -0.1396R' - 0.3604G' + 0.5B' Cr = 0.5R' - 0.4598G' - 0.0402B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y’CbCr quantization is limited range. There is also an alternate constant luminance R’G’B’ to Yc’CbcCrc (V4L2_YCBCR_ENC_BT2020_CONST_LUM) encoding: Luma: \begin{align*} Yc' = (0.2627R + 0.6780G + 0.0593B)'& \\ B' - Yc' \le 0:& \\ &Cbc = (B' - Yc') / 1.9404 \\ B' - Yc' > 0: & \\ &Cbc = (B' - Yc') / 1.5816 \\ R' - Yc' \le 0:& \\ &Crc = (R' - Y') / 1.7184 \\ R' - Yc' > 0:& \\ &Crc = (R' - Y') / 0.9936 \end{align*} Yc’ is clamped to the range [0…1] and Cbc and Crc are clamped to the range [-0.5…0.5]. The Yc’CbcCrc quantization is limited range. ## 2.6.6. Colorspace DCI-P3 (V4L2_COLORSPACE_DCI_P3)¶ The SMPTE RP 431-2 standard defines the colorspace used by cinema projectors that use the DCI-P3 colorspace. The default transfer function is V4L2_XFER_FUNC_DCI_P3. The default Y’CbCr encoding is V4L2_YCBCR_ENC_709. The default Y’CbCr quantization is limited range. Note Note that this colorspace standard does not specify a Y’CbCr encoding since it is not meant to be encoded to Y’CbCr. So this default Y’CbCr encoding was picked because it is the HDTV encoding. The chromaticities of the primary colors and the white reference are: DCI-P3 Chromaticities Color x y Red 0.6800 0.3200 Green 0.2650 0.6900 Blue 0.1500 0.0600 White Reference 0.3140 0.3510 Transfer function: L' = L^{\frac{1}{2.6}} Inverse Transfer function: L = L'^{(2.6)} Y’CbCr encoding is not specified. V4L2 defaults to Rec. 709. ## 2.6.7. Colorspace SMPTE 240M (V4L2_COLORSPACE_SMPTE240M)¶ The SMPTE 240M standard was an interim standard used during the early days of HDTV (1988-1998). It has been superseded by Rec. 709. The default transfer function is V4L2_XFER_FUNC_SMPTE240M. The default Y’CbCr encoding is V4L2_YCBCR_ENC_SMPTE240M. The default Y’CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: SMPTE 240M Chromaticities Color x y Red 0.630 0.340 Green 0.310 0.595 Blue 0.155 0.070 White Reference (D65) 0.3127 0.3290 These chromaticities are identical to the SMPTE 170M colorspace. 
Transfer function: L' = 4L\text{, for } 0 \le L < 0.0228 L' = 1.1115L ^{0.45} - 0.1115\text{, for } 0.0228 \le L \le 1 Inverse Transfer function: L = \frac{L'}{4}\text{, for } 0 \le L' < 0.0913 L = \left( \frac{L' + 0.1115}{1.1115}\right) ^{\frac{1}{0.45} }\text{, for } L' \ge 0.0913 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_SMPTE240M encoding: Y' = 0.2122R' + 0.7013G' + 0.0865B' Cb = -0.1161R' - 0.3839G' + 0.5B' Cr = 0.5R' - 0.4451G' - 0.0549B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y’CbCr quantization is limited range. ## 2.6.8. Colorspace NTSC 1953 (V4L2_COLORSPACE_470_SYSTEM_M)¶ This standard defines the colorspace used by NTSC in 1953. In practice this colorspace is obsolete and SMPTE 170M should be used instead. The default transfer function is V4L2_XFER_FUNC_709. The default Y’CbCr encoding is V4L2_YCBCR_ENC_601. The default Y’CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: NTSC 1953 Chromaticities Color x y Red 0.67 0.33 Green 0.21 0.71 Blue 0.14 0.08 White Reference (C) 0.310 0.316 Note This colorspace uses Illuminant C instead of D65 as the white reference. To correctly convert an image in this colorspace to another that uses D65 you need to apply a chromatic adaptation algorithm such as the Bradford method. The transfer function was never properly defined for NTSC 1953. The Rec. 709 transfer function is recommended in the literature: L' = 4.5L\text{, for } 0 \le L < 0.018 L' = 1.099L ^{0.45} - 0.099\text{, for } 0.018 \le L \le 1 Inverse Transfer function: L = \frac{L'}{4.5} \text{, for } L' < 0.081 L = \left( \frac{L' + 0.099}{1.099}\right) ^{\frac{1}{0.45} }\text{, for } L' \ge 0.081 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding: Y' = 0.2990R' + 0.5870G' + 0.1140B' Cb = -0.1687R' - 0.3313G' + 0.5B' Cr = 0.5R' - 0.4187G' - 0.0813B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y’CbCr quantization is limited range. This transform is identical to one defined in SMPTE 170M/BT.601. ## 2.6.9. Colorspace EBU Tech. 3213 (V4L2_COLORSPACE_470_SYSTEM_BG)¶ The EBU Tech 3213 standard defines the colorspace used by PAL/SECAM in 1975. In practice this colorspace is obsolete and SMPTE 170M should be used instead. The default transfer function is V4L2_XFER_FUNC_709. The default Y’CbCr encoding is V4L2_YCBCR_ENC_601. The default Y’CbCr quantization is limited range. The chromaticities of the primary colors and the white reference are: EBU Tech. 3213 Chromaticities Color x y Red 0.64 0.33 Green 0.29 0.60 Blue 0.15 0.06 White Reference (D65) 0.3127 0.3290 The transfer function was never properly defined for this colorspace. The Rec. 709 transfer function is recommended in the literature: L' = 4.5L\text{, for } 0 \le L < 0.018 L' = 1.099L ^{0.45} - 0.099\text{, for } 0.018 \le L \le 1 Inverse Transfer function: L = \frac{L'}{4.5} \text{, for } L' < 0.081 L = \left(\frac{L' + 0.099}{1.099} \right) ^{\frac{1}{0.45} }\text{, for } L' \ge 0.081 The luminance (Y’) and color difference (Cb and Cr) are obtained with the following V4L2_YCBCR_ENC_601 encoding: Y' = 0.2990R' + 0.5870G' + 0.1140B' Cb = -0.1687R' - 0.3313G' + 0.5B' Cr = 0.5R' - 0.4187G' - 0.0813B' Y’ is clamped to the range [0…1] and Cb and Cr are clamped to the range [-0.5…0.5]. The Y’CbCr quantization is limited range. 
This transform is identical to one defined in SMPTE 170M/BT.601. ## 2.6.10. Colorspace JPEG (V4L2_COLORSPACE_JPEG)¶ This colorspace defines the colorspace used by most (Motion-)JPEG formats. The chromaticities of the primary colors and the white reference are identical to sRGB. The transfer function use is V4L2_XFER_FUNC_SRGB. The Y’CbCr encoding is V4L2_YCBCR_ENC_601 with full range quantization where Y’ is scaled to [0…255] and Cb/Cr are scaled to [-128…128] and then clipped to [-128…127]. Note The JPEG standard does not actually store colorspace information. So if something other than sRGB is used, then the driver will have to set that information explicitly. Effectively V4L2_COLORSPACE_JPEG can be considered to be an abbreviation for V4L2_COLORSPACE_SRGB, V4L2_YCBCR_ENC_601 and V4L2_QUANTIZATION_FULL_RANGE.
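A small sketch (mine, not part of the kernel documentation) that implements the Rec. 709 transfer function and the BT.601 Y'CbCr encoding exactly as written above, restricted to normalized, non-negative R, G, B values in [0, 1]:

```python
def rec709_transfer(L):
    """Rec. 709 opto-electronic transfer function for 0 <= L <= 1."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L**0.45 - 0.099

def bt601_ycbcr(R, G, B):
    """BT.601 Y'CbCr encoding of gamma-corrected R', G', B' in [0, 1]."""
    Rp, Gp, Bp = (rec709_transfer(c) for c in (R, G, B))
    Y  =  0.2990 * Rp + 0.5870 * Gp + 0.1140 * Bp
    Cb = -0.1687 * Rp - 0.3313 * Gp + 0.5    * Bp
    Cr =  0.5    * Rp - 0.4187 * Gp - 0.0813 * Bp
    return Y, Cb, Cr

print(bt601_ycbcr(1.0, 1.0, 1.0))   # white -> Y' = 1, Cb = Cr = 0
print(bt601_ycbcr(0.0, 0.0, 0.0))   # black -> all zero
```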
2023-02-04 12:46:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.844929575920105, "perplexity": 14498.90849315959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00751.warc.gz"}
https://www.projecteuclid.org/euclid.pm/1248095661
## Publicacions Matemàtiques

### Reversibility in the Diffeomorphism Group of the Real Line

#### Abstract

An element of a group is said to be reversible if it is conjugate to its inverse. We characterise the reversible elements in the group of diffeomorphisms of the real line, and in the subgroup of order preserving diffeomorphisms.

#### Article information

Source
Publ. Mat., Volume 53, Number 2 (2009), 401–415.

Dates
First available in Project Euclid: 20 July 2009

https://projecteuclid.org/euclid.pm/1248095661

Mathematical Reviews number (MathSciNet)
MR2543857

Zentralblatt MATH identifier
1178.37022

#### Citation

O’Farrell, Anthony G.; Short, Ian. Reversibility in the Diffeomorphism Group of the Real Line. Publ. Mat. 53 (2009), no. 2, 401–415. https://projecteuclid.org/euclid.pm/1248095661

#### References

• A. B. Calica, Reversible homeomorphisms of the real line, Pacific J. Math. 39 (1971), 79–87.
• R. L. Devaney, Reversible diffeomorphisms and flows, Trans. Amer. Math. Soc. 218 (1976), 89–113.
• N. J. Fine and G. E. Schweigert, On the group of homeomorphisms of an arc, Ann. of Math. (2) 62 (1955), 237–253.
• W. Jarczyk, Reversible interval homeomorphisms, J. Math. Anal. Appl. 272(2) (2002), 473–479.
• N. Kopell, Commuting diffeomorphisms, in: "Global Analysis", Proc. Sympos. Pure Math. 14, Amer. Math. Soc., Providence, R.I., 1970, pp. 165–184.
• J. S. W. Lamb and J. A. G. Roberts, Time-reversal symmetry in dynamical systems: a survey, Time-reversal symmetry in dynamical systems (Coventry, 1996), Phys. D 112(1–2) (1998), 1–39.
• J. Lubin, Non-Archimedean dynamical systems, Compositio Math. 94(3) (1994), 321–346.
• A. G. O'Farrell, Conjugacy, involutions, and reversibility for real homeomorphisms, Irish Math. Soc. Bull. 54 (2004), 41–52.
• A. G. O'Farrell, Composition of involutive power series, and reversible series, Comput. Methods Funct. Theory 8(1–2) (2008), 173–193.
• A. G. O'Farrell and M. Roginskaya, Reducing conjugacy in the full diffeomorphism group of $\mathbb{R}$ to conjugacy in the subgroup of order preserving maps, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) (Russian) 360 (2008), 231–237; translation in: J. Math. Sci. (N.Y.) 158 (2009), 895–898.
• S. Sternberg, Local $C^{n}$ transformations of the real line, Duke Math. J. 24 (1957), 97–102.
• S. W. Young, The representation of homeomorphisms on the interval as finite compositions of involutions, Proc. Amer. Math. Soc. 121(2) (1994), 605–610.
2019-12-11 03:35:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6517860889434814, "perplexity": 2256.552555898138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00197.warc.gz"}
http://www.physicsforums.com/showthread.php?t=114062
# Two similar Bayesian problems. Did I get them right?

by Fredrik
Tags: bayesian, similar

Sci Advisor HW Helper P: 2,589

"Bayesian" comes from a guy named "Bayes", which would be pronounced like "Bays". So "Bayesian" is pronounced like "Bays-Ian". You can actually hear it here. Anyways, the idea is to use conditional probability: $$P(A | B) = \frac{P(A \cap B)}{P(B)}$$ So the probability of A, given that B has occurred, is equal to the probability that both A and B occur, divided by the probability that B occurs. In problem 1, you want to calculate the probability that the box chosen had 10 balls, given that the ball that was picked had 9 written on it.

Emeritus Sci Advisor PF Gold P: 8,995

Thanks AKG. At least now I know how to pronounce Bayesian. I'm not sure I understand the formula for the conditional probability though. A and B are obviously not independent in the formula. If they were, the formula would be kind of pointless, since we would have P(A|B)=P(A) and P(A and B)=P(A)*P(B). What I don't understand is what P(A and B) means when A and B are not independent. Anyway, that doesn't matter much right now, since I believe my solutions are correct. If I'm wrong, I hope someone will tell me. These are my solutions:

Problem 1

There was a 1/2 probability that you picked the box with 10 balls and a 1/2 probability that you picked the box with 100 balls. If you picked the box with 10 balls, it was certain that the ball your friend picked would have a number less than 11. If you picked the box with 100 balls, there was only a 1/10 chance that he would pick a ball with a number less than 11, and a 9/10 chance that he would not. From this we get the probabilities for each possibility:

| | Small number | Large number |
|---|---|---|
| Box with 10 | 1/2 * 1 = 1/2 | 1/2 * 0 = 0 |
| Box with 100 | 1/2 * 1/10 = 1/20 | 1/2 * 9/10 = 9/20 |

If we do this a large number of times, we will get a ball with a small number from the box with 10 balls 1/2 the times and we will get a small number 1/2 + 1/20 = 11/20 times. The probability we seek is the first of those numbers divided by the second: (1/2)/(11/20) = 10/11.

Problem 2

You were assigned a room that was chosen at random from a set of 110 rooms, 10 of which are in building A, so there was a 1/11 probability that you ended up in building A and a 10/11 probability that you ended up in building B. If you ended up in building A, it was certain that you would get a low room number. If you ended up in building B, there was a 1/10 probability that you would get a low room number and a 9/10 probability that you would get a high room number. From this we get the probabilities for each possibility:

| | Small number | Large number |
|---|---|---|
| Building A | 1/11 * 1 = 1/11 | 1/11 * 0 = 0 |
| Building B | 10/11 * 1/10 = 1/11 | 10/11 * 9/10 = 9/11 |

If you do this a large number of times, you will find yourself in a room with a small number in building A 1/11 times, and you will find yourself in a room with a small number 2/11 times. The probability we seek is the first of those numbers divided by the second: (1/11)/(2/11) = 1/2.

P: 31

## Two similar Bayesian problems. Did I get them right?

I would suggest that you use the formula: $$P(A|B)= \frac{P(A)P(B|A)}{P(B)}$$ instead.
In the first example $$A$$ is "the box contains 10 balls" and $$B$$ is "you pick ball number 9". We get $$P(B)= \frac{1}{2}\left(\frac{1}{10}+\frac{1}{100}\right)$$, $$P(B|A)= \frac{1}{10}$$, and $$P(A)=\frac{1}{2}$$. Hence $$P(A|B)= \frac{10}{11}$$.

Quote by DavidK: I would suggest that you use the formula: $$P(A|B)= \frac{P(A)P(B|A)}{P(B)}$$
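Both answers are easy to confirm numerically with exactly this rule. The sketch below is illustrative only and is not part of the thread.

```python
# Illustrative numerical check of both problems (not part of the thread).

# Problem 1: A = "the box with 10 balls was chosen", B = "ball number 9 is drawn".
p_A = 1 / 2                                   # either box is equally likely
p_B_given_A = 1 / 10                          # ball 9 out of 10
p_B_given_notA = 1 / 100                      # ball 9 out of 100
p_B = p_A * p_B_given_A + (1 - p_A) * p_B_given_notA
print(p_A * p_B_given_A / p_B)                # 0.909090... = 10/11

# Problem 2: 110 rooms, 10 of them in building A; "low" means room number 1-10.
p_A = 10 / 110
p_low_given_A = 1.0                           # every room in A has a low number
p_low_given_B = 1 / 10                        # 10 low rooms out of 100 in B
p_low = p_A * p_low_given_A + (1 - p_A) * p_low_given_B
print(p_A * p_low_given_A / p_low)            # 0.5
```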
2014-04-20 03:19:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8256726861000061, "perplexity": 172.30710077892638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
https://papers.nips.cc/paper/2020/hash/13ec9935e17e00bed6ec8f06230e33a9-Abstract.html
#### Authors Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia #### Abstract <p>We study the problem of outlier robust high-dimensional mean estimation under a bounded covariance assumption, and more broadly under bounded low-degree moment assumptions. We consider a standard stability condition from the recent robust statistics literature and prove that, except with exponentially small failure probability, there exists a large fraction of the inliers satisfying this condition. As a corollary, it follows that a number of recently developed algorithms for robust mean estimation, including iterative filtering and non-convex gradient descent, give optimal error estimators with (near-)subgaussian rates. Previous analyses of these algorithms gave significantly suboptimal rates. As a corollary of our approach, we obtain the first computationally efficient algorithm for outlier robust mean estimation with subgaussian rates under a bounded covariance assumption.</p>
2021-03-01 05:03:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.931260883808136, "perplexity": 832.0616873872718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00418.warc.gz"}
http://mathoverflow.net/questions/87719/splitting-principle-for-holomorphic-vector-bundles/87724
# Splitting principle for holomorphic vector bundles Let $E \to X$ be a vector bundle over a decent space $X$. Then there is a space $Z$ together with a map $p: Z \to X$ which induces a split injection on cohomology and such that $p^* E$ splits as a direct sum of line bundles (take e.g. the flag bundle of $E$). Is the analog true for holomorphic vector bundles (if we stay purely in the category of complex manifolds)? That is, if $X$ is a complex manifold and $E$ a holomorphic vector bundle, can we get a holomorphic map $p: Z \to X$ (with $Z$ a complex manifold) with the same properties: the map on cohomology is a split injection, and $p^*E$ splits in the holomorphic category as a sum of line bundles? (As a side question, I'm curious what additional invariants one can construct for holomorphic vector bundles, which don't make sense for an ordinary complex vector bundle. I'm vaguely aware of the Atiyah class, but are there other examples?) - This isn't a direct answer to your question, but what one usually proves in the holomorphic category is that you can find $p:Z \to X$ giving a split injection on $H^{\ast}$ such that $p^{\ast} E$ has a filtration by vector bundles of ranks $0$, $1$, $2$, ..., $\dim E$. This is generally adequate for all the purposes for which the splitting principle is used. See, for example, Fulton's <i>Intersection Theory</i>. –  David Speyer Feb 6 '12 at 20:55 In general, if we pull the bundle back to the flag bundle, the result can be filtered so that the resulting $Gr$ is a sum of line bundles, but the pullback itself does not necessarily split. –  algori Feb 6 '12 at 20:56 The answer is positive. Let $P$ be the principal $\mathrm{GL}_n$-bundle associated with $E$; then the space of flags is the quotient $P/B$, where $B$ is the Borel subgroup of $\mathrm{GL}_n$ consisting of upper triangular matrices. Set $Z = P/T$, where $T$ is the maximal torus consisting of diagonal matrices. A point of $Z$ is a point of $X$, plus $n$ independent 1-dimensional linear subspaces of the fiber of $E$. The projection $Z \to P/B$ is a fibration with contractible fibers, hence the pullback from the cohomology of $P/B$ to that of $Z$ is an isomorphism. Since the cohomology of $X$ injects into the cohomology of $P/B$, it also injects into the cohomology of $Z$.
2015-05-22 13:05:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9822216629981995, "perplexity": 99.96092032858502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925201.39/warc/CC-MAIN-20150521113205-00317-ip-10-180-206-219.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-the-system-of-equations-y-x-1-and-y-2x-4-graphically
# How do you solve the system of equations y=-x+1 and y=2x+4 graphically?

May 4, 2018

x = -1, y = 2

#### Explanation:

Here's how you solve it graphically: Plug each equation into your calculator using the Y= function. Once you're done typing one equation, hit ENTER to enter ${Y}_{2}$, your second line. You should have the two lines (y = -x + 1 and y = 2x + 4, plotted on the window [-16.02, 16.02] × [-8.01, 8.01]) intersecting / overlapping each other once you hit the GRAPH function.

To figure out what x and y are, hit 2ND TRACE. Your options will be:

1: value
2: zero
3: minimum
4: maximum
5: intersect
6: dy/dx
7: f(x)dx

Select 5: intersect. Then, hit ENTER three times.
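If you want to double-check the intersection without a graphing calculator, the short sketch below (illustrative only, not part of the original answer) solves the same pair of equations algebraically.

```python
# Illustrative algebraic check (not part of the original answer).
# The lines y = m1*x + b1 and y = m2*x + b2 intersect where m1*x + b1 = m2*x + b2.
m1, b1 = -1, 1    # y = -x + 1
m2, b2 = 2, 4     # y = 2x + 4

x = (b2 - b1) / (m1 - m2)   # -x + 1 = 2x + 4  ->  x = -1
y = m1 * x + b1             # y = 2
print(x, y)                 # -1.0 2.0
```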
2022-10-02 10:55:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.208438441157341, "perplexity": 9608.305198359249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337307.81/warc/CC-MAIN-20221002083954-20221002113954-00343.warc.gz"}
https://socratic.org/questions/581f1edbb72cff411af31cf4
# For "barium carbonate", "barium oxide", "magnesium oxide", and "magnesium chloride", which salt is likely to exhibit the greatest lattice enthalpy? Nov 6, 2016 Quite probably $\text{magnesium oxide}$. #### Explanation: We assess the lattice enthalpy of each species by (i) the size of the ions involved, and (ii) the charges on the ions involved. For (i), the smaller the ions, the greater should be the resultant lattice enthalpy; for (ii), the more highly charged the individual ions, the greater should be the resultant lattice enthalpy. Size is minimized, and charge is maximized in the case of magnesium oxide. Note that as a chemist, as a physical scientist, you should look for the data that informs this argument. In other words, you have to interpret the experimental result.
2020-01-18 10:01:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6348791122436523, "perplexity": 2553.9820651717446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592394.9/warc/CC-MAIN-20200118081234-20200118105234-00301.warc.gz"}
https://qotd.hideoushumpbackfreak.com/2020/11/26/qotd.html
# 11/26/2020 In the seventeenth century, René Descartes opted for reason over a divine source of knowledge. This came to be known as putting Descartes before the source. — Thomas Cathcart and Daniel Klein, Plato and a Platypus Walk Into a Bar…
2021-07-24 04:15:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8470638394355774, "perplexity": 5781.046118634876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00231.warc.gz"}
https://www.semanticscholar.org/paper/The-Nondeterministic-Constraint-Logic-Model-of-and-Hearn-Demaine/058e054b567357e3e81594080118deca7fe4bcf8?p2df
# The Nondeterministic Constraint Logic Model of Computation: Reductions and Applications @inproceedings{Hearn2002TheNC, title={The Nondeterministic Constraint Logic Model of Computation: Reductions and Applications}, author={Robert A. Hearn and Erik D. Demaine}, booktitle={ICALP}, year={2002} } • Published in ICALP 4 May 2002 • Computer Science We present a nondeterministic model of computation based on reversing edge directions in weighted directed graphs with minimum in-flow constraints on vertices. Deciding whether this simple graph model can be manipulated in order to reverse the direction of a particular edge is shown to be PSPACE-complete by a reduction from Quantified Boolean Formulas. We prove this result in a variety of special cases including planar graphs and highly restrictedv ertex configurations, some of which correspond… 28 Citations Parameterized Complexity of Graph Constraint Logic It is shown that reconfiguration versions of several classical graph problems are PSPACE-complete on planar graphs of bounded bandwidth and that Rush Hour, generalized to $k\times n$ boards, is PSPace-complete even when $k$ is at most a constant. 284 Parameterized Complexity of Graph Constraint Logic It is shown that reconfiguration versions of several classical graph problems are PSPace-complete on planar graphs of bounded bandwidth and that Rush Hour, generalized to k × n boards, is PSPACE-complete even when k is at most a constant. Constraint Logic: A Uniform Framework for Modeling Computation as Games • Computer Science 2008 23rd Annual IEEE Conference on Computational Complexity • 2008 A simple game family, called constraint logic, where players reverse edges in a directed graph while satisfying vertex in-flow constraints is introduced, which makes it substantially easier to prove completeness of such games in their appropriate complexity classes. Push-2-f is pspace-complete • Mathematics CCCG • 2002 It is proved that Push-k-F and Push-*-F are PSPACEcomplete for k ≥ 2 using a reduction from Nondeterministic Constraint Logic (NCL) [8]. Games, puzzles and computation • Philosophy • 2006 This thesis develops the idea of game as computation to a greater degree than has been done previously, and presents a general family of games, called Constraint Logic, which is both mathematically simple and ideally suited for reductions to many actual board games. The Connectivity of Boolean Satisfiability: Computational and Structural Dichotomies • Mathematics SIAM J. Comput. • 2009 The results assert that the intractable side of the computational dichotomies is PSPACE-complete, while the tractable side—which includes but is not limited to all problems with polynomial-time algorithms for satisfiability—is in P for the $st$-connectivity question, and in coNP for the connectivity question. Connectivity of Boolean satisfiability A computational dichotomy is proved for the st-connectivity problem, asserting that it is either solvable in polynomial time or PSPACE-complete, and an aligned structural dichotomy for the connectivity problem is proved, asserting the maximal diameter of connected components is either linear in the number of variables, or can be exponential. Limits of Rush Hour Logic Complexity • Computer Science ArXiv • 2005 The authors show how the Rush Hour model supports polynomial space computation, using certain car configurations as building blocks to construct boolean circuits for a cpu and memory. 
Belief propagation algorithms for constraint satisfaction problems • Computer Science • 2006 This thesis shows that survey propagation, which is the most effective heuristic for random 3-SAT problems with density of clauses close to the conjectured satisfiability threshold, is in fact a belief propagation algorithm, and defines a parameterized distribution on partial assignments, and shows that applying belief propagation to this distribution recovers a known family of algorithms. Reconfiguring Undirected Paths • Mathematics, Computer Science • 2019 We consider problems in which a simple path of fixed length, in an undirected graph, is to be shifted from a start position to a goal position by moves that add an edge to either end of the path and ## References SHOWING 1-10 OF 15 REFERENCES Computers and Intractability: A Guide to the Theory of NP-Completeness • Computer Science • 1978 Horn formulae play a prominent role in artificial intelligence and logic programming. In this paper we investigate the problem of optimal compression of propositional Horn production rule knowledge Sokoban is PSPACE-complete It is shown that the popular puzzle Sokoban can be used to emulate a linear bounded automata and shows that the puzzles are PSPACE-complete, solving the open problem stated in 1. On the Complexity of Motion Planning for Multiple Independent Objects; PSPACE- Hardness of the "Warehouseman's Problem" • Computer Science • 1984 This paper shows that even the restricted two-dimensional problem for arbitrarily many rectangles in a rectangular region is PSPACE-hard, which should be viewed as a guide to the difficulty, of the general problem. Sliding Piece Puzzles Puzzle specialist and collector Edward Hordern has selected 270 of the best puzzles from his collection of over 8,000 and systematically presents them in this book with full solutions. Interlocking Decoupling of a Two-Axis Robotic Manipulator Using Nonlinear State Feedback: A Case Study • Engineering • 1984 A case study that illustrates the use of nonlinear state feed back to decouple the control of a two-axis polar-coordinate robotic manipulator is presented. One type of position cou pling and two VorlesungenVorlesungen¨Vorlesungenüber die Theorie der Polyeder • VorlesungenVorlesungen¨Vorlesungenüber die Theorie der Polyeder • 1934
2022-06-27 12:25:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5566062927246094, "perplexity": 1780.869266821777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103331729.20/warc/CC-MAIN-20220627103810-20220627133810-00220.warc.gz"}
https://www.physicsforums.com/threads/rate-of-spring-extension.914071/
# I Rate of Spring Extension Tags: 1. May 8, 2017 ### bartekac Hi, So I am currently working on a rather simple problem of a projectile being launched by a spring at a certain angle. Ignoring friction, from conservation of energy we know that the velocity of the launched projectile would be $v = \sqrt{\frac{kx^2}{m}}$ (with $m$ being the mass of the projectile, $k$ the spring constant and $x$ the initial displacement of the spring). Then I applied simple kinematics to get that $$\text{Range} = \frac{v\cos{\alpha}}{g}\left(\sqrt{(v\sin{\alpha})^2+2gY} + v\sin{\alpha}\right )$$ ($Y$ is the initial height and $\alpha$ is the angle of the launch relaive to the horizontal). What I was slightly confused about was to what extent this model would be applicable to a real world scenario in terms of just the spring extension process. That is, if we still assume that the losses due to external friction are negligible can we be sure that the extending spring itself is going to reach the computed velocity $v$ even if there is no projectile mass launched (i.e. the only mass the spring has to accelerate is the mass of itself)? For instance, let's say that we have a spring with a very large $k = 10^6 ~\text{N/m}$ and we compress it by $10 ~\text{cm}$. If we take the effective mass that the spring has to push to be about $m = 0.1 ~\text{kg}$ the final velocity should be about $316~\text{m/s}$ according to the theoretical model The question I'm asking here is whether, if we neglect external friction, can we be sure that the endpoint of the spring is going to approximately reach the computed theoretical velocity for a given mass $m$ and a given value of $x$? Even though it strictly follows from conservation of energy that it would be the case, I'm not sure if that's what would actually happen in a real world scenario. Would the internal non-conservative work done in the system due to internal friction be highly significant and thus make the final velocity calculation invalid? I would appreciate any sort of feedback on this question :) 2. May 8, 2017 ### scottdave I don't think it is accelerating the entire mass of the spring at the same rate. Here is a video from Smarter Every Day, that you may find interesting. It talks about rubber bands and slingshots, but some of the same principals can apply. and 3. May 8, 2017 ### bartekac Thank you for the response. Yes, I watched those videos back in the day. What I'm more confused about is how the increase of the kinetic energy of the whole spring is represented by the motion of a given point in the spring in the case when we consider the spring just extending by itself. I tried first using the fact that that for an arbitrarily shaped body the kinetic energy is: $$\iiint_\mathcal{B} \frac{1}{2}\rho v^2 \,dV$$ Using this relation (unless there are some other formulas we could use) we need to find the velocity as a function of space coordinates $v(x,y,z)$, so that ultimately we can plug in the coordinates of the spring endpoint. For the sake of simplicity, I parametrically defined the shape of the spring as a helix: $$\boldsymbol r(t)=\left [\frac{t}{2\pi N}, \cos{t},\sin{t} \right ]$$ (where $N$ is the number of "twists" in the spring wiring per meter of the spring length) I'm kind of stuck here, because the curve parametrization should be generalized to include the thickness of the spring, so that the integral can be evaluated over an explicit 3-D space. But even then I'm not sure how I would go about finding the velocity distribution on the spring. 4. 
May 8, 2017 ### Staff: Mentor The question you are asking seems to be "what is the effect on the dynamics of a spring response if, rather than assuming that the spring has no mass, it has a uniformly distributed mass along the helix in its undeformed state?" Is that correct? 5. May 8, 2017 ### bartekac Yes, that would basically be my question and I'm seeking to find an analytical solution to the velocity field of the spring. Sorry if my initial statement was unclear. 6. May 8, 2017 ### Staff: Mentor In that case, would you be willing to make the approximation the spring behaves mechanically as a sequence of small infinitesimal springs? In this case, the local mechanical behavior of the spring would be described by $$T(x)=kL_0\frac{d u}{d x}\tag{1}$$ where T is the tension and u is the axial displacement of a material element in the deformed configuration of the spring that was at location x in the undeformed configuration, k is the spring constant of a spring that is of length $L_0$. So, if T were uniform along the length of the spring, for example, the spring would experience a homogeneous elongation, and, if we integrated between $x = 0$ and $x = L_0$, we would obtain: $$T=kL_0\frac{u(L_0)}{L_0}$$where $$u(L_0)=(L-L_0)$$So, combining these two equations, we would have (for a uniform tension along the length): $$T=k(L-L_0)$$ So, Eqn. 1 is predictive of the correct tension in the spring for a uniform deformation of the spring. And it will predict the local tension in the spring if the axial deformation of the spring is not uniform. Is this satisfactory to you so far? 7. May 8, 2017 ### bartekac Thanks for the answer. This all certainly makes sense, but what I am aiming for is incorporating this result into the dynamics of the spring itself. So assuming that the mass and the geometry of the spring is not to be neglected, how could we find the magnitude of the velocity of a certain point on the spring (with position $x'$) at a certain time $t$ (while the spring is extending)? Edit: Since in the problem we defined $x$ as the displacement of the spring (you expressed it as $L-L_{0})$ let's define the position of the point along the spring relative to its attachment point to be $x'$. 8. May 8, 2017 ### Staff: Mentor Well, suppose we let $\rho$ represent the mass per unit initial undeformed length of the spring. Then the mass of the section of the spring between initial axial locations x and $x+\Delta x$ is $\rho \Delta x$. If we do a force balance on this section of the spring, we can write: $$\rho \Delta x\frac{\partial^2 u}{\partial t^2}=T(x+\Delta x)-T(x)$$where $\partial^2u/\partial t^2$ is the local axial acceleration. Taking the limit of this equation as $\Delta x$ approaches zero yields: $$\rho \frac{\partial^2 u}{\partial t^2}=\frac{\partial T}{\partial x}$$ If we substitute our equation for the local tension into this equation, we obtain: $$\frac{\partial^2 u}{\partial t^2}=\left(\frac{kL_0}{\rho}\right)\frac{\partial ^2u}{\partial x^2}$$ This is the wave equation for the spring (in terms of the local displacement), where $kL_0/\rho$ is the square of the wave velocity. So the spring can experience axial wave-like behavior like a Slinky. So, for a spring with mass, all we need to do is solve this PDE, subject to appropriate boundary conditions, for the axial displacement u as a function of axial position and time. 9. May 8, 2017 ### bartekac Thank you, this the bit I was missing. 
Completely forgot that the wave equation can be actually derived from the Hooke's law :) So the solution to the PDE would be obtained as Fourier series with Fourier coefficients computed from eigenfunction orthogonality, but I'm sort of confused about the physical interpretation of the function variables for the spring. For instance, if I want to find the velocity of the point 2 cm below the endpoint of the spring at the time when the spring is maximally elongated, how would I do so using the PDE solution? 10. May 8, 2017 ### Staff: Mentor The velocity is du/dt. The solution you get is for u(x,t). I would recommend solving this equation numerically, unless the imposed transient loading is pretty simple. Don't forget possible solutions to the wave equation based on the d'Alembert form of the solution. 11. May 8, 2017 ### bartekac Oh I see. And yes, in the end I would probably solve the PDE numerically. What I'm still confused about is applying the obtained solution to the problem. The way you derived the given equation, is $u$ the change in position between the initial location $x$ of a material element in the undeformed configuration, or does function $u$ describe the position of the material element relative to the attachment point of the spring (since the spring is attached to an inertially fixed point in the problem)? 12. May 8, 2017 ### Staff: Mentor It is the change in position between the initial location in the undeformed configuration. 13. May 9, 2017 ### bartekac Okay, thanks. So I have also attempted to numerically solve the PDE, but I'm not sure about the boundary conditions that should be imposed with this problem statement. For the initial conditions, assuming that the geometry of the whole spring is compressed linearly, at $t=0$ we can say that $u(x,0)=u_{0}$ ($u_{0}$ is a constant) and that $\frac{\partial u}{\partial t}(x,0)=0$. Now for the boundary condition at $x=0$, since the spring is just attached to a fixed point $u(0,t)=0$. But then what would be the other boundary condition (perhaps for the moving endpoint of the spring)? 14. May 9, 2017 ### Nidum When you release the spring you go from an end condition where there is a force applied to one where there is no force applied . The change is notionally instantaneous . You are familiar with use of step functions ? This analysis is very similar to that used for determining the output response of electronic circuits and control systems to step function inputs . Just for interest : There is an upper limit to the velocity that springs can expand at when released . This is set by the speed at which a stress wave can travel through the material of the spring . Damping effects in real springs can be significant but are very hard to quantify except by experiment . Last edited: May 9, 2017 15. May 9, 2017 ### Staff: Mentor If you take the partial derivative of wave equation in u with respect to x, you get the same wave equation in $\epsilon=\partial u/\partial x$, where $\epsilon$ is the strain. This also means that the same wave equation is likewise satisfied in terms of the tension T. This might be easier to work with than the equation in u. For example, if you suddenly release the far end of the spring, the tension T suddenly becomes zero at x = Lo. Such a boundary condition would be ideally handled analytically by a d'Alembert form of solution.
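The boundary-value discussion above can also be explored numerically. Below is a rough illustrative sketch (not from the thread) of an explicit finite-difference solver for the wave equation $u_{tt} = c^2 u_{xx}$ derived earlier, with a fixed end at $x=0$, a tension-free end ($\partial u/\partial x = 0$) at $x=L_0$ after release, and a uniformly compressed initial state taken here as $u(x,0) = -s_0 x$ (uniform strain). All parameter values and that choice of initial condition are assumptions made only for illustration.

```python
# Rough illustrative sketch (not from the thread): explicit finite differences
# for u_tt = c^2 u_xx on 0 <= x <= L0, with u(0,t) = 0 (fixed end), zero
# tension du/dx = 0 at x = L0 (free end after release), and a uniformly
# compressed initial state u(x,0) = -s0*x with zero initial velocity.
# All parameter values below are invented for illustration.
import numpy as np

L0, k, rho = 0.2, 1.0e3, 0.5          # length [m], spring constant [N/m], mass per unit length [kg/m]
c = np.sqrt(k * L0 / rho)             # wave speed from the derivation above
s0 = 0.1                              # initial uniform compressive strain

nx = 200
x = np.linspace(0.0, L0, nx)
dx = x[1] - x[0]
dt = 0.9 * dx / c                     # CFL-stable time step

u_prev = -s0 * x                      # u(x, 0)
u = u_prev.copy()                     # first step, using zero initial velocity
u[1:-1] += 0.5 * (c*dt/dx)**2 * (u_prev[2:] - 2*u_prev[1:-1] + u_prev[:-2])
u[0] = 0.0                            # fixed end
u[-1] = u[-2]                         # first-order free (tension-free) end

for _ in range(2000):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                    + (c*dt/dx)**2 * (u[2:] - 2*u[1:-1] + u[:-2]))
    u_next[0] = 0.0                   # fixed end
    u_next[-1] = u_next[-2]           # du/dx = 0 at x = L0
    u_prev, u = u, u_next

tip_velocity = (u[-1] - u_prev[-1]) / dt
print(tip_velocity)                   # velocity of the free end at the final step
```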
2017-12-15 10:33:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7773942351341248, "perplexity": 250.42745334671653}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948568283.66/warc/CC-MAIN-20171215095015-20171215115015-00555.warc.gz"}
https://codereview.stackexchange.com/questions/170992/find-the-greatest-gcd-among-the-pairs-of-two-equal-sized-arrays-and-prints-the-s
# Find the greatest gcd among the pairs of two equal sized arrays and prints the sum of that pair You are given two arrays AA and BB containing elements each. Choose a pair of elements x,y such that: x belongs to array AA. y belongs to array BB. gcd(x,y) is the maximum for all pairs x,y. If there is more than one such pair x,y having maximum gcd, then choose the one with maximum sum. Print the sum of elements of this maximum-sum pair. NOTE: returns gcd(x,y) the largest integer that divides both x and y. ### Constraints: $$1≤N≤5\cdot 10^5$$ $$1≤A_i≤10^6$$ $$1≤B_i≤10^6$$ ### Input format: The first line of the input contains n denoting the total number of elements of arrays AA and BB. Next line contains n space separated positive integers denoting the elements of array AA. Next line contains n space separated positive integers denoting the elements of array BB. ### Output format: From all the pairs having maximum gcd , print the sum of one that has the maximum sum. public class Solution2 { static int gcdcalc(int x, int y){ if(y == 0) { return x; } return gcdcalc(y, x%y); } static int maximumGcdAndSum(int[] A, int[] B) { int gcd,sum,maxgcd=0,maxsum=0; for(int A_i = 0; A_i < A.length; A_i++) { for(int B_i = 0; B_i < B.length; B_i++) { sum = A[A_i] + B[B_i]; gcd = gcdcalc(A[A_i], B[B_i]); if(maxgcd < gcd) { maxgcd = gcd; maxsum = sum; } if(maxgcd == gcd) { if(maxsum < sum) maxsum = sum; } } } return maxsum; } public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); int[] A = new int[n]; for(int A_i = 0; A_i < n; A_i++){ A[A_i] = in.nextInt(); } int[] B = new int[n]; for(int B_i = 0; B_i < n; B_i++){ B[B_i] = in.nextInt(); } int res = maximumGcdAndSum(A, B); System.out.println(res); } } • Possible duplicate of Maximizing GCD of two arrays of numbers – justjofe Jul 23 '17 at 21:26 • What if in gcdcalc y > x? – Joop Eggen Jul 23 '17 at 21:43 • @JoopEggen That occurred to me to, but this is not a problem, because if y > x, the method will just invoke gcdcalc(y, x), so it's just one extra recursion. – Stingy Jul 23 '17 at 21:46 • @Stingy now that you are saying it, I even saw that trick at an other gcd. – Joop Eggen Jul 23 '17 at 21:56 int gcd,sum,maxgcd=0,maxsum=0; You don't need to declare gcd and sum yet. int maxGcd = 0; int maxSum = 0; It's also generally better not to initialize more than one variable per line. It's not a functional difference. It's a readability thing. The Java standard is to capitalize the second and later words. sum = A[A_i] + B[B_i]; gcd = gcdcalc(A[A_i], B[B_i]); This could be int sum = A[A_i] + B[B_i]; int gcd = gcdcalc(A[A_i], B[B_i]); You don't use these outside the current iteration. So you can declare and initialize them at the same time. if(maxgcd == gcd) { if(maxsum < sum) maxsum = sum; This could be else if (maxgcd == gcd && maxsum < sum) { maxsum = sum; If the maxgcd is less than gcd, then they won't be equal. So we don't need to check both. The else makes it so that we only check for equality if not less than. You don't need two if statements. One statement with two clauses is sufficient. Due to the way that short circuit comparisons work, this will be the same efficiency as the original code. I prefer to have a space between the if and the (. I find it easier to see that it is an if and not a method call that way. I would encourage you to add more vertical whitespace. Again, I find breaking up the statements into logical groups makes it easier to read. 
if(y == 0) { return x; } return gcdcalc(y, x%y); This could be just return (y == 0) ? x : gcdcalc(y, x%y); Same efficiency, just a bit terser. If you find yourself trying to wring out every bit of efficiency, there is an iterative version of this that is probably faster. Instead of using two int[]s, you could use two HashSet<Integer> instances instead, because duplicates in any of the two arrays can be ignored. Also, the declaration of gcd and sum in maximumGcdAndSum(int[], int[]) could be moved into the innermost for loop to reduce their scope to the smallest extent necessary, which would make the code a bit clearer in my opinion. Other than that, your code seems to be succinct and working.
2020-01-24 00:33:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44301676750183105, "perplexity": 2430.41999385167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00020.warc.gz"}
https://physics.stackexchange.com/questions/709149/conducting-shell-surronded-by-dielectric-shell-and-an-outside-q-charge
Conducting shell surrounded by a dielectric shell, and an outside $q$ charge

I would like some help with my solution attempt. I have a conducting shell with radius $$R_1$$, surrounded by a dielectric shell with $$\varepsilon_1$$ and radius $$R_2$$, and on the outside I have a point charge $$q$$ at a distance $$L$$, in vacuum, so $$\varepsilon_0$$. So it is $$R_1 < R_2 < L$$. The question is the potentials $$\Phi_1,\Phi_2,\Phi_3$$.

Edit: My question is how I should choose the coefficients to satisfy the following things:

1. Inside the inner shell the potential should be zero because it is a conductor, so there is only a potential on its surface where $$r=R_1$$, and it is a constant. Usually we choose the $$B_l r^{-(l+1)}$$ part to be zero, but I don't know which one should be zero in the case of a conductor with no charge inside.

2. The second shell is dielectric, but it doesn't contain the origin; it covers $$[R_1,R_2]$$, so I also don't know which coefficients to keep there. If there were just a dielectric sphere with an outside charge, inside it there would be the $$C_l r^{l}$$ part, and outside there would be the $$D_l r^{-(l+1)}$$ part together with the point charge $$\dfrac{q}{4\pi \varepsilon_0}\dfrac{1}{|\mathbf{r}-\mathbf{L}|}$$.

3. On the outside we have the point charge $$\dfrac{q}{4\pi \varepsilon_0}\dfrac{1}{|\mathbf{r}-\mathbf{L}|}=\dfrac{q}{4\pi \varepsilon_0}\sum_{l=0}^\infty \dfrac{r^l}{L^{l+1}}P_l(\cos\vartheta)$$ plus the outer terms from the two shells.

4. I also have the two types of boundary conditions, which should be satisfied.

And this is how I attempted it. I write up the general (axially symmetric) solution of Laplace's equation in spherical coordinates for each region:

$$\Phi_1(r,\vartheta)=\sum_{l=0}^\infty (A_l r^l+B_lr^{-(l+1)})P_l(\cos \vartheta) \\ \Phi_2(r,\vartheta)=\sum_{l=0}^\infty (C_l r^l+D_lr^{-(l+1)})P_l(\cos \vartheta) \\ \Phi_3(r,\vartheta)=\sum_{l=0}^\infty (E_l r^l+F_lr^{-(l+1)})P_l(\cos \vartheta)$$

$$B_l$$ is zero because $$r^{-(l+1)}$$ diverges there. I'm not sure if $$A_l$$ should be zero or not: because $$\Phi_1(r<R_1)=0$$, this should give me that $$A_l=0$$, but on the surface, $$\Phi_1(r=R_1)$$ should be a constant value, $$\Phi_1(r=R_1)=\Phi_2(r=R_1)=V_1$$ (a constant). And there should also be another boundary condition here:

$$\varepsilon_0 \dfrac{\partial \Phi_1}{\partial r} \bigg|_{r=R_1}=\varepsilon_1 \dfrac{\partial \Phi_2}{\partial r} \bigg|_{r=R_1}$$

When $$r=R_1$$, the potential is a constant $$V_1$$. So I have these:

$$\Phi_2(r=R_1,\vartheta)=\sum_{l=0}^\infty (C_l R_1^l+D_lR_1^{-(l+1)})P_l(\cos \vartheta)=V_1$$

$$\sum_{l=0}^\infty (A_l l r^{l-1})P_l(\cos \vartheta)=\sum_{l=0}^\infty (C_l l r^{l-1}+D_l(-l-1)r^{-(l+2)})P_l(\cos \vartheta)$$

I'm not sure about the part where $$r\in [R_1,R_2]$$. There, $$\Phi_2$$ is $$\Phi_2(r,\vartheta)=\sum_{l=0}^\infty (C_l r^l+D_lr^{-(l+1)})P_l(\cos \vartheta)$$. We keep both of the coefficients because there is no divergent part, since $$r$$ doesn't go to zero here. We have the boundary at $$r=R_2$$ and the conditions are:

$$\Phi_2(r=R_2)=\Phi_3(r=R_2)\\ \varepsilon_1 \dfrac{\partial \Phi_2}{\partial r} \bigg|_{r=R_2}=\varepsilon_0 \dfrac{\partial \Phi_3}{\partial r} \bigg|_{r=R_2}$$

$$\sum_{l=0}^\infty (C_l R_2^l+D_lR_2^{-(l+1)})P_l(\cos \vartheta) =\sum_{l=0}^\infty (E_l R_2^l+F_lR_2^{-(l+1)})P_l(\cos \vartheta)$$

$$\sum_{l=0}^\infty (C_l l R_2^{l-1}+D_l(-l-1)R_2^{-(l+2)})P_l(\cos \vartheta) =\sum_{l=0}^\infty (E_l l R_2^{l-1}+F_l(-l-1)R_2^{-(l+2)})P_l(\cos \vartheta)$$

• I've read your description of the setup twice now and I can't figure out where the charge $q$ is. Is it a point charge? A spherical shell? A diagram would be really helpful.
May 17 at 18:12

• Thanks for the responses! There is a point charge, and it is outside of both shells, at a distance L from the origin which is greater than both radii. The second shell sits directly on the inner one, so they touch. I added my questions about the coefficients; I hope it is clearer now what my problem is. May 18 at 5:59

You will need four potentials:

• $$\Phi_1$$, for the range $$0 < r < R_1$$;
• $$\Phi_2$$, for the range $$R_1 < r < R_2$$;
• $$\Phi_3$$, for the range $$R_2 < r < L$$; and
• $$\Phi_4$$, for the range $$L < r < \infty$$.

This is because $$\Phi$$ does not satisfy Laplace's equation anywhere that there is charge, and in your setup there is a non-zero charge density at $$r = R_1$$, $$R_2$$, and $$L$$. The relationship between the potentials $$\Phi_3$$ and $$\Phi_4$$ would be given by the usual relation: $$\epsilon_0 \left[ \frac{\partial \Phi_3}{\partial r} - \frac{\partial \Phi_4}{\partial r} \right] = \sigma(\theta)$$ where $$\sigma(\theta)$$ is the charge density on the sphere $$r = L$$. For a point charge on the $$z$$-axis, it can be shown that in spherical coordinates $$\sigma(\theta) = \frac{q}{L^2} \frac{\delta(\theta)}{\sin \theta}$$ (note that this must be viewed as a distribution, and the factor of $$\sin \theta$$ will cancel out with the $$\sin \theta$$ factor in the volume element when this is integrated in spherical coordinates.) Beyond that, this will be a whole lot of algebra. One piece that you might want to keep in mind is that the potential must remain finite as $$r \to \infty$$. This means that the coefficients of the $$r^l$$ terms in $$\Phi_4$$ must all be zero.
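One way to gain confidence in the Legendre expansion used in the question and answer is to check numerically that $$\dfrac{1}{|\mathbf{r}-\mathbf{L}|}=\sum_{l}\dfrac{r^{l}}{L^{l+1}}P_{l}(\cos\vartheta)$$ for $$r<L$$. The sketch below is purely illustrative and not part of the thread; it assumes SciPy is available for the Legendre polynomials, and all numerical values are arbitrary.

```python
# Illustrative check (not part of the thread): the Legendre expansion of
# 1/|r - L| for a point source on the z-axis at distance L, evaluated at r < L.
import numpy as np
from scipy.special import eval_legendre

L = 2.0                        # source distance on the z-axis (arbitrary value)
r, theta = 0.7, 0.9            # field point with r < L (arbitrary values)

# Direct evaluation of 1/|r - L| using the law of cosines.
direct = 1.0 / np.sqrt(r**2 + L**2 - 2*r*L*np.cos(theta))

# Truncated multipole series sum_l r^l / L^(l+1) * P_l(cos(theta)).
ls = np.arange(0, 40)
series = np.sum(r**ls / L**(ls + 1) * eval_legendre(ls, np.cos(theta)))

print(direct, series)          # the two numbers agree to many digits
```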
2022-08-15 15:32:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 62, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9581792950630188, "perplexity": 1057.882104404826}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00744.warc.gz"}
http://mscroggs.co.uk/puzzles/?lang=FRA
# Puzzles

## Archive

## Cryptic crossnumber #2

In this puzzle, the clues are written like clues from a cryptic crossword, but the answers are all numbers. You can download a printable pdf of this puzzle here.

## Cryptic crossnumber #1

In this puzzle, the clues are written like clues from a cryptic crossword, but the answers are all numbers. You can download a printable pdf of this puzzle here.

## Breaking Chocolate

You are given a bar of chocolate made up of 15 small blocks arranged in a 3×5 grid. You want to snap the chocolate bar into 15 individual pieces. What is the fewest number of snaps that you need to break the bar? (One snap consists of picking up one piece of chocolate and snapping it into two pieces.)

## Square and cube endings

Source: UKMT 2011 Senior Kangaroo

How many positive two-digit numbers are there whose square and cube both end in the same digit?

## Equal lengths

The picture below shows two copies of the same rectangle with red and blue lines. The blue line visits the midpoint of the opposite side. The lengths shown in red and blue are of equal length. What is the ratio of the sides of the rectangle?

## Digitless factor

Ted thinks of a three-digit number. He removes one of its digits to make a two-digit number. Ted notices that his three-digit number is exactly 37 times his two-digit number. What was Ted's three-digit number?

## Backwards fours

If A, B, C, D and E are all unique digits, what values would work with the following equation?

$$ABCCDE\times 4 = EDCCBA$$

## Is it equilateral?

In the diagram below, $$ABDC$$ is a square. Angles $$ACE$$ and $$BDE$$ are both 75°. Is triangle $$ABE$$ equilateral? Why/why not?
2018-07-19 19:18:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36524587869644165, "perplexity": 1088.3646462066524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00272.warc.gz"}
https://calendar.math.illinois.edu/?year=2020&month=01&day=22&interval=day
Department of # Mathematics Seminar Calendar for events the day of Wednesday, January 22, 2020. . events for the events containing More information on this calendar program is available. Questions regarding events or the calendar should be directed to Tori Corkery. December 2019 January 2020 February 2020 Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa 1 2 3 4 5 6 7 1 2 3 4 1 8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8 15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15 22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22 29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29 Wednesday, January 22, 2020 12:00 pm in 141 Altgeld Hall,Wednesday, January 22, 2020 #### Predictive Actuarial Analystics Using Tree-Based Models ###### Zhiyu Quan (University of Connecticut) Abstract: Because of its many advantages, the use of tree-based models has become an increasingly popular alternative predictive tool for building classification and regression models. Innovations to the original methods, such as random forests and gradient boosting, have further improved the capabilities of using tree-based models as a predictive model. Quan et al. (2018) examined the performance of tree-based models for the valuation of the guarantees embedded in variable annuities. We found that tree-based models are generally very efficient in producing more accurate predictions and the gradient boosting ensemble method is considered the most superior. Quan and Valdez (2018) applied multivariate tree-based models to multi-line insurance claims data with correlated responses drawn from the Wisconsin Local Government Property Insurance Fund (LGPIF). We were able to capture the inherent relationship among the response variables and improved marginal predictive accuracy. Quan et al. (2019) propose to use tree-based models with a hybrid structure as an alternative approach to the Tweedie Generalized Linear Model (GLM). This hybrid structure captures the benefits of tuning hyperparameters at each step of the algorithm thereby allowing for an improved prediction accuracy. We examined the performance of this model vis-\`a-vis the Tweedie GLM using the LGPIF and simulated datasets. Our empirical results indicate that this hybrid tree-based model produces more accurate predictions without loss of intuitive interpretation. 2:00 pm in 447 Altgeld Hall,Wednesday, January 22, 2020 #### Organizational Meeting ###### Sungwoo Nam (Illinois Math) Abstract: We will have an organizational meeting for this semester. This involves making a plan for this semester and possibly choose a topic for a reading seminar. If you want to speak this semester, or are interested in a reading seminar, please join us and make a suggestion. 3:00 pm in 243 Altgeld Hall,Wednesday, January 22, 2020 #### How do mathematicians believe? ###### Brian P Katz (Smith College) Abstract: Love it or hate it, many people believe that mathematics gives humans access to a kind of truth that is more absolute and universal than other disciplines. If this claim is true, we must ask: what makes the origins and processes of mathematics special and how can our messy, biological brains connect to the absolute? If the claim is false, then what becomes of truth in mathematics? In this session, we will consider beliefs about truth and how they play out in the mathematics classroom, trying to understand a little about identity, authority, and the Liberal Arts. 
3:30 pm in 341 Altgeld Hall,Wednesday, January 22, 2020 #### Organizational meeting 4:00 pm in 245 Altgeld Hall,Wednesday, January 22, 2020 #### Statistical reduced models and rigorous analysis for uncertainty quantification of turbulent dynamical systems ###### Di Qi   [email] (Courant Institute of Mathematical Sciences) Abstract: The capability of using imperfect statistical reduced-order models to capture crucial statistics in turbulent flows is investigated. Much simpler and more tractable block-diagonal models are proposed to approximate the complex and high-dimensional turbulent flow equations. A rigorous statistical bound for the total statistical uncertainty is derived based on a statistical energy conservation principle. The systematic framework of correcting model errors is introduced using statistical response and empirical information theory, and optimal model parameters under this unbiased information measure are achieved in a training phase before the prediction. It is demonstrated that crucial principal statistical quantities in the most important large scales can be captured efficiently with accuracy using the reduced-order model in various dynamical regimes with distinct statistical structures.
2020-06-06 18:20:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36484983563423157, "perplexity": 1135.1156458001547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348517506.81/warc/CC-MAIN-20200606155701-20200606185701-00028.warc.gz"}
https://mathoverflow.net/questions/159650/poincar%C3%A9-bundle-and-weil-pairing-for-abelian-schemes
# Poincaré bundle and Weil pairing for Abelian schemes

In which situations is there a Poincaré bundle for Abelian schemes? In [Mumford, Abelian varieties] only the case of Abelian varieties is treated. The same question for the Weil pairing $\mathscr{A}[n] \times \mathscr{A}^\vee[n] \to \mu_n$. (Why is it a perfect pairing?)

• For the perfectness of the Weil pairing, see Oda "The first de Rham cohomology and Dieudonné modules", esp. Thm. 1.1. – Kestutis Cesnavicius Mar 7 '14 at 13:22
• Oort's book "Commutative group schemes" has a very nice discussion of both the representability of the functor $T \mapsto {\rm{Ext}}^1_T(A_T, {\mathbf{G}}_m)$ by the dual abelian scheme when the latter exists (which is always the case, by the result of Raynaud) and not only the relation of its $n$-torsion with the Cartier dual of that of $A$ but also the more subtle issue of relating double-duality on both sides. Oda's paper addresses the double-duality aspect (and much more) in its first section. – user76758 Mar 7 '14 at 13:47

Always, because the dual abelian scheme/space can be defined as the connected component of the (fine) moduli space of invertible sheaves trivialized at $0$. The Poincaré bundle is the universal object.

PS: Defined as above, it is clear from general theory that the dual abelian something is an algebraic space. It was shown I think by Raynaud that it is a scheme in most cases of interest (for example if the abelian scheme is projective over the base, so for example over a normal base scheme), but I think later that fact was established in general (not 100% sure). This is related to the question of whether any abelian algebraic space over a base is representable.

• Thank you. Do you have a reference for the existence of the Poincaré bundle? – TKe Mar 7 '14 at 12:50
• I think I've found it in [FGA explained, Kleiman, The Picard scheme], p. 262, Exercise 9.4.3. – TKe Mar 7 '14 at 13:02
• The representability of the dual abelian scheme (by a scheme) is discussed in Chai, Faltings "Degeneration of abelian varieties", section I.1 (esp. Thm. 1.9). – Kestutis Cesnavicius Mar 7 '14 at 13:20
• The Poincaré bundle is tautologically part of the very meaning of "dual abelian scheme" or representability of the (rigidified) Picard functor (say as an algebraic space, which in turn is a special case of Artin's theorem on Picard functors). – user76758 Mar 7 '14 at 13:50

I've found it in [FGA explained, Kleiman, The Picard scheme], p. 262, Exercise 9.4.3. A universal sheaf/Poincaré sheaf exists iff $\mathbf{Pic}_{X/S}$ represents $\mathrm{Pic}_{X/S}$ or if $f: \mathscr{A} \to S$ has a section.

Edit: This gives us a Poincaré bundle on $\mathscr{A} \times \mathbf{Pic}_{\mathscr{A}/S}$, but I need it on $\mathscr{A} \times \mathbf{Pic}^0_{\mathscr{A}/S}$! Perhaps [FGA explained, Kleiman, The Picard scheme], p. 289, Remark 9.5.24 does help?

• The functor ${\rm{Pic}}^0_{A/S}$ is a subfunctor of ${\rm{Pic}}_{A/S}$ (defined by a condition on geometric fibers), so what is the meaning of the question in the "Edit" that isn't a tautology (via pullback)? – user76758 Mar 7 '14 at 13:44
• The question is if the universal property still holds (for the modified Poincaré bundle as in [FGA explained, Kleiman, The Picard scheme], p. 289, Remark 9.5.24). – TKe Mar 7 '14 at 13:47
• If you have a given abelian scheme $B$ with line bundle $L$ on $A \times B$ equipped with a trivialization $i$ of its pullback to $A \times \{0\}$, then to check if the resulting map $B \rightarrow A^{\vee}$ is an isomorphism (thereby giving the universal property to $(B, L, i)$) it suffices to check on geometric fibers, where various results in Mumford's book are applicable. I don't know what "modified Poincaré bundle" means (FGA Explained not nearby at the moment), but would that address whatever is concerning you? – user76758 Mar 7 '14 at 13:55
• The normalised Poincaré bundle is $\mathscr{P} \otimes f_{A^\vee}^*g_{A^\vee}^*\mathscr{P}$ with $f: A \to X$ and $g$ the zero section. – TKe Mar 7 '14 at 14:01
• OK, this is what I think is usually called the "rigidified" Poincaré bundle (typo: you missed an inversion on the 2nd tensor factor). But encoding such rigidification is part of the very content of building a Poincaré sheaf on the entire Picard scheme (or algebraic space), so I remain puzzled as to where the point of confusion is arising for Pic$^0$ versus Pic in terms of Poincaré bundles (i.e., if you are happy for Pic then why not for Pic$^0$?). – user76758 Mar 7 '14 at 14:41
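Putting the last two comments together, that is, TKe's formula with the inversion on the second tensor factor that user76758 points out is missing, the normalised bundle would read

$$\mathscr{P}_{\mathrm{rig}} \;=\; \mathscr{P} \otimes \bigl(f_{A^\vee}^{*}\,g_{A^\vee}^{*}\,\mathscr{P}\bigr)^{-1},$$

with $f$ and $g$ as in TKe's comment; the subscript "rig" is only a label for this corrected expression and is not notation used in the thread.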
2019-04-19 23:16:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072088599205017, "perplexity": 760.8128736697752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528430.9/warc/CC-MAIN-20190419220958-20190420002958-00171.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-2-solving-equations-2-9-percents-practice-and-problem-solving-exercises-page-143/56
## Algebra 1: Common Core (15th Edition)

We first add 15 to both sides to get that $\frac{x}{2}+\frac{x}{3}=15$. We need a common denominator (bottom of the fraction) to add the numerators (tops of the fractions). Thus, we multiply $\frac{x}{3}$ by $\frac{2}{2}$ to get $\frac{2x}{6}$, and we multiply $\frac{x}{2}$ by $\frac{3}{3}$ to get $\frac{3x}{6}$. We add the two fractions to get that $\frac{5x}{6}=15$. We multiply both sides by 6 to get that $5x=90$. We finally divide both sides by 5 to get that $x=18$.
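As a quick verification of the answer (added as a check, not part of the textbook solution): substituting $x=18$ gives $\frac{18}{2}+\frac{18}{3}=9+6=15$, so $x=18$ indeed satisfies $\frac{x}{2}+\frac{x}{3}=15$.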
2023-01-29 23:03:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870570063591003, "perplexity": 231.90989842238503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499768.15/warc/CC-MAIN-20230129211612-20230130001612-00623.warc.gz"}
https://physics.aps.org/articles/v13/s119
Synopsis

# An Intrinsically Magnetic Topological Insulator

Physics 13, s119

Experiments show that the ferromagnetism naturally possessed by the topological insulator manganese bismuth telluride extends right to the material’s surface.

Topological insulators that are intrinsically magnetic—rather than made to have a field by the incorporation of magnetic impurities—may provide opportunities to study how magnetism affects these materials’ distinctive electron-transport properties. Researchers have recently shown that the topological insulator manganese bismuth telluride (${\text{MnBi}}_{2}{\text{Te}}_{4}$) is naturally magnetically ordered in its interior, but they had not determined whether this bulk ferromagnetism extends to the surface. Now, Dan Nevola of Brookhaven National Laboratory, New York, and colleagues have experimentally confirmed that ${\text{MnBi}}_{2}{\text{Te}}_{4}$ is magnetically ordered at its surface as well [1].

Nevola and colleagues used photoemission spectroscopy to try to spot the effects of magnetism on the energy bands of electrons at the surface of an ${\text{MnBi}}_{2}{\text{Te}}_{4}$ crystal. The researchers looked at two particular features of the surface band structure—a Dirac cone and a Rashba state. They found that an energy gap was present in the Rashba state but, contrary to some predictions, no such gap existed in the Dirac cone. Incorporating ferromagnetic order at the surface of the crystal into theoretical models reproduced the same energy gap in the Rashba state that the researchers observed in the experiments. Thus the crystal must have a magnetically ordered surface. The fact that the Dirac cone didn’t behave as expected indicates that there is further complexity to be understood in this strange material.

The finding reveals a more complex view of intrinsic magnetic topological insulators in general, and of ${\text{MnBi}}_{2}{\text{Te}}_{4}$ specifically. In particular, the researchers say that it may provide insights into some of this material’s mysterious properties, such as the especially low temperature at which it exhibits the quantum anomalous Hall effect.

–Erika K. Carlson

Erika K. Carlson is a Corresponding Editor for Physics based in New York City.

## References

1. D. Nevola et al., “Coexistence of surface ferromagnetism and a gapless topological state in ${\text{MnBi}}_{2}{\text{Te}}_{4}$,” Phys. Rev. Lett. 125, 117205 (2020).
2022-05-20 09:49:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5536080598831177, "perplexity": 1930.1018941509474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531779.10/warc/CC-MAIN-20220520093441-20220520123441-00099.warc.gz"}
https://www.groundai.com/project/new-algorithms-for-unordered-tree-inclusion/
New Algorithms for Unordered Tree Inclusion

# New Algorithms for Unordered Tree Inclusion

Tatsuya Akutsu, Jesper Jansson, Ruiming Li, Atsuhiro Takasu, and Takeyuki Tamura

###### Abstract

The tree inclusion problem is, given two node-labeled trees $P$ and $T$ (the “pattern tree” and the “text tree”), to locate every minimal subtree in $T$ (if any) that can be obtained by applying a sequence of node insertion operations to $P$. The ordered tree inclusion problem is known to be solvable in polynomial time while the unordered tree inclusion problem is NP-hard. The currently fastest algorithm for the latter is from 1995 and runs in $O(\mathrm{poly}(m,n) \cdot 2^{2d})$ time, where $m$ and $n$ are the sizes of the pattern and text trees, respectively, and $d$ is the degree of the pattern tree. Here, we develop a new algorithm that improves the exponent $2d$ to $d$ by considering a particular type of ancestor-descendant relationships and applying dynamic programming, thus reducing the time complexity to $O(d 2^{d} m n^{3})$. We then study restricted variants of the unordered tree inclusion problem where the number of occurrences of different node labels and/or the input trees’ heights are bounded and show that although the problem remains NP-hard in many such cases, if the leaves of $P$ are distinctly labeled and each label occurs at most $c$ times in $T$ then it can be solved in polynomial time for $c=2$ and in $O(1.8^{d} \cdot \mathrm{poly}(m,n))$ time for $c=3$.

1. Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji, Kyoto, 6110011, Japan. {takutsu,rmli,tamura}@kuicr.kyoto-u.ac.jp
2. Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China. jesper.jansson@polyu.edu.hk
3. Content and Media Science Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, 101-8430, Japan. takasu@nii.ac.jp

keywords: algorithm, tree inclusion, unordered tree, dynamic programming, ancestor-descendant relationship

## 1 Introduction

Tree pattern matching and measuring the similarity of trees are classic problem areas in theoretical computer science. One intuitive and extensively studied measure of the similarity between two rooted, node-labeled trees is the tree edit distance, defined as the length of a shortest sequence of node insertion, node deletion, and node relabeling operations that transforms one tree into the other. When the two trees are ordered, the tree edit distance can be computed in polynomial time. The first algorithm to achieve this bound ran in $O(n^{6})$ time [17], where $n$ is the total number of nodes in the two trees, and it was gradually improved upon until Demaine et al. [8] presented an $O(n^{3})$-time algorithm thirty years later which was proved to be worst-case optimal among a reasonable class of algorithms. On the other hand, the tree edit distance problem is NP-hard for unordered trees [21]. It is in fact MAX SNP-hard even for binary trees in the unordered case [20], which implies that it is unlikely to admit a polynomial-time approximation scheme. Akutsu et al. [1, 3] have developed efficient exponential-time algorithms for this problem variant. As for parameterized algorithms, Shasha et al. [16] developed an algorithm for the problem whose running time is parameterized by the numbers of leaves in the two trees, and an algorithm for the unit-cost edit operation model, parameterized by the edit distance, was given in [2]. See [4] for a survey of many other related results. An important special case of the tree edit distance problem known as the tree inclusion problem is obtained when only node insertion operations are allowed.
This problem has applications to structured text databases and natural language processing [5, 11, 18]. Here, we assume the following formulation of the problem: given a “text tree” $T$ and a “pattern tree” $P$, locate every minimal subtree in $T$ (if any) that can be obtained by applying a sequence of node insertion operations to $P$. (Equivalently, one may define the tree inclusion problem so that only node deletion operations on $T$ are allowed.) For unordered trees, Kilpeläinen and Mannila [11] proved the problem to be NP-hard in general but solvable in polynomial time when the degree of the pattern tree is bounded from above by a constant. More precisely, the running time of their algorithm is $O(\mathrm{poly}(m,n) \cdot 2^{2d})$, where $m$ and $n$ are the sizes of $P$ and $T$, respectively, and $d$ is the degree of $P$. Bille and Gørtz [5] gave a fast algorithm for the case of ordered trees, and Valiente [18] developed an efficient algorithm for a constrained version of the unordered case. Also note that the special case of the tree inclusion problem where node insertion operations are only allowed to insert new leaves corresponds to a subtree isomorphism problem, which can be solved in polynomial time for unordered trees [14]. The extended tree inclusion problem, proposed in [15], is an optimization problem designed to make the problem more useful for practical tree pattern matching applications, e.g., involving glycan data from the KEGG database [10], weblogs data [19], and bibliographical data from ACM, DBLP, and Google Scholar [12]. This problem asks for an optimal connected subgraph of $T$ (if any) that can be obtained by performing node insertion operations as well as node relabeling operations to $P$ while allowing non-uniform costs to be assigned to the different node operations; it was shown in [15] how to solve the unrooted version in exponential time and how a further extension of the problem that also allows a bounded number of node deletion operations can be solved by an algorithm whose running time depends on that bound.

### 1.1 Practical Applications

With the rapid advance of AI technology, matching methods for knowledge bases have become more important. As a fundamental technique for searching knowledge bases, researchers in the database community have been studying subtree similarity search. For example, Cohen and Or proposed a subtree similarity search algorithm for various distance functions [7], while Chang et al. proposed a top-k tree matching algorithm [6]. In the Natural Language Processing (NLP) field, researchers are incorporating deep learning techniques into NLP problems and studying the processing and matching of parsing/dependency trees [13]. Bibliographic matching is one of the most popular real-world matching applications [12]. In most cases, a single article has at most two or three versions, and it is very rare that a single article has two co-authors with the same name. Therefore, it may be reasonable to assume that the leaves of $P$ are distinctly labeled and each label occurs at most two or three times in $T$.

### 1.2 New Results and Organization of the Paper

We improve the exponential contribution to the time complexity of the fastest known algorithm for the unordered tree inclusion problem (Kilpeläinen and Mannila’s algorithm from 1995 [11]) from $2^{2d}$ to $2^{d}$, where $d$ is the maximum degree of the pattern tree, so that the time complexity becomes $O(d 2^{d} m n^{3})$. We then study the problem’s computational complexity for several restricted cases (see Table 1 for a summary) and give a polynomial-time algorithm for when the leaves in $P$ are distinctly labeled and every label appears at most twice in $T$.
Finally, we derive an -time algorithm for the NP-hard case where the leaves in  are distinctly labeled and each label appears at most three times in . The paper is organized as follows. Section 2 defines the unordered tree inclusion problem and the concept of minimality, and explains the basic ideas related to the ancestor-descendant relationship. In Section 3, we utilize the ancestor-descendant relationships and dynamic programming to obtain the exponential-factor speedup. Section 4 presents the NP-hardness results for the special cases listed in Table 1. Finally, the polynomial- and exponential-time algorithms for when the leaves in  are distinctly labeled and each label appears at most two or three times are developed in Sections 5 and 6, respectively. ## 2 Preliminaries From here on, all trees are rooted, unordered, and node-labeled. Let  be a tree. A node insertion operation on  is an operation that creates a new node  having any label and then: (i) attaches  as a child of some node  currently in  and makes become the parent of a (possibly empty) subset of the children of  instead of ; or (ii) makes the current root of  become a child of  and lets become the new root. For any two trees  and , we say that  is included in  if there exists a sequence  of node insertion operations such that applying  to  yields . The set of vertices in a tree  is denoted by . A mapping between two trees  and  is a subset such that for every , it holds that: (i)  if and only if ; and (ii)  is an ancestor of  if and only if is an ancestor of .  is included in  if and only if there is a mapping  between  and  such that and and have the same node label for every  [17]. In the tree inclusion problem, the input is two trees  and  (also referred to as the “pattern tree” and the “text tree”), and the objective is to determine if is included in . Define and , and denote the maximum outdegree of . For any node , let and denote its label and the set of its children. Also let and denote the sets of strict ancestors and strict descendants of , respectively, i.e., where itself is excluded from these sets. For a tree , and denote its root and the set of nodes in . For a node in a tree , is the subtree of induced by . We write if is included in under the condition that corresponds to . For two trees and , denotes that is isomorphic to . The following concept plays a key role in our algorithm. ###### Definition 1. We say that minimally includes (denoted as ) if holds and there is no such that . ###### Proposition 1. Let . holds if and only if the following conditions are satisfied. • . • has a set of descendants such that for all . • There exists a bijection from to such that holds for all . ###### Proof. Conditions (1) and (2) are obvious. To prove (3), suppose there exists a bijection from to such that holds for all and does not hold for some . Then, there must exist such that holds. Let be the bijection obtained by replacing a mapping from to with that from to . Clearly, gives an inclusion mapping. Repeatedly applying this procedure, we can obtain a bijection satisfying all conditions. ∎ Since is included in if and only if there exists such that , we focus on how to decide if assuming that whether holds is known for all with , , and . We have: ###### Proposition 2. Suppose that can be decided in time. Then the unordered tree inclusion problem can be solved in time by using a bottom-up dynamic programming procedure. 
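Before moving on to the faster algorithm, the mapping characterization of inclusion quoted in the Preliminaries (an injective, label-preserving map defined on all of $V(P)$ that preserves the ancestor relation in both directions) can be checked directly by brute force on small examples. The sketch below is only an illustration of that definition, not the algorithm developed in this paper: the Node class and the example trees are invented, and the enumeration of candidate maps is exponential in the size of $T$.

```python
from itertools import permutations

class Node:
    """Rooted, node-labeled, unordered tree node (illustrative only)."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def nodes(t):
    out = [t]
    for c in t.children:
        out.extend(nodes(c))
    return out

def ancestor_pairs(t):
    """All (strict ancestor, descendant) pairs of the tree rooted at t."""
    pairs = set()
    def walk(v, anc):
        for a in anc:
            pairs.add((a, v))
        for c in v.children:
            walk(c, anc + [v])
    walk(t, [])
    return pairs

def included(P, T):
    """True iff P is included in T, per the mapping characterization above."""
    vp, vt = nodes(P), nodes(T)
    if len(vp) > len(vt):
        return False
    anc_p, anc_t = ancestor_pairs(P), ancestor_pairs(T)
    for image in permutations(vt, len(vp)):         # injective candidate maps
        m = dict(zip(vp, image))
        if any(u.label != m[u].label for u in vp):  # labels must agree
            continue
        if all(((u1, u2) in anc_p) == ((m[u1], m[u2]) in anc_t)
               for u1 in vp for u2 in vp if u1 is not u2):
            return True                             # ancestorship preserved both ways
    return False

# P = a(b, c) is included in T = a(x(b, c)): inserting x into P yields T.
P = Node("a", [Node("b"), Node("c")])
T = Node("a", [Node("x", [Node("b"), Node("c")])])
print(included(P, T))   # True
```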
## 3 An O(d2dmn3)-Time Algorithm The crucial parts of the algorithm in [11] are the definition of and its computation. (for fixed ) was defined by S(v) = {A⊆Chd(u)| P(A)⊂T(v)}, where is the forest induced by nodes in and their descendants and means that forest is included in (i.e., can be obtained from  by node insertion operations). Clearly, the size of is no greater than . In the algorithm of [11], the following operation is performed from left to right among the children of : S := {A∪B|A∈S,B∈S(vi)}, which causes an factor because it examines set pairs. Therefore, we need to avoid this kind of operation. Given an unordered tree , we fix any left-to-right ordering of its nodes. Then, for any two nodes that do not have any ancestor-descendant relationship, either “ is left of ” or “ is right of ” is uniquely determined. We denote “ is left of ” by . We focus on deciding if holds for fixed . Assume w.l.o.g. that . For simplicity, we assume until the end of this section that does not hold for any . For any , define by M(vi) = {uj∈Chd(u)|P(uj)≺T(vi)}. For example, , , and in Figure 1. For any , denotes the set of nodes in each of which is left of (see Figure 1 for an example). Then, we define by S(v,vi) = {A⊆Chd(u)|P(A)⊂T(L(v,vi))} ∪ {A⊆Chd(u)|(A=A′∪{uj})∧(P(A′)⊂T(L(v,vi)))∧(uj∈M(vi))} where is the forest induced by nodes in and their descendants. Note that always holds. The definition of leads to a dynamic programming procedure for its computation. We explain and related concepts using an example in Figure 1. Suppose that we have the following relations. P(uA)≺T(v1),P(uB)≺T(v1),P(uC)≺T(v2), P(uD)≺T(v3),P(uE)≺T(v3),P(uD)≺T(v4),P(uF)≺T(v4). Then, the following holds. S(v,v0) = { ∅ }, S(v,v1) = { ∅, {uA}, {uB} }, S(v,v2) = { ∅, {uC} }, S(v,v3) = { ∅, {uD}, {uE} }, S(v,v4) = { ∅, {uD}, {uE}, {vF}, {uD,uE}, {uD,uF}, {uE,uF} } . ###### Proof. Let and . Let be an injection from to giving an inclusion mapping for . Let , where . Then, and hold for all . Furthermore, holds for . Therefore, . It is straightforward to see that does not contain any element not in . ∎ We construct a DAG (directed acyclic graph) from (see also Figure 2). is defined by , and is defined by . Then, we traverse so that node is visited only after its all of its predecessors are visited. Let denote the set of the predecessors of (i.e., is the set of nodes left of ). Recall that . Then, we compute by the following procedure, which is referred to as AlgInc1. • . • . If , we let . Finally, we let . Then, iff and have the same label and . ###### Lemma 1. AlgInc1 correctly computes s in time. ###### Proof. Since it is straightforward to prove the correctness, we analyze the time complexity. The sizes of , s, and s are , and computation of each of such sets can be done in time. Since the number of s and s are , the total computation time is . ∎ If there exist such that , we treat each element in , s, and s as a multiset where each pair of and such that are identified and the multiplicity of is bounded by the number of s isomorphic to . Then, the size of each multiset is at most and the number of different multisets is not greater than . Therefore, the same time complexity result holds. This discussion can also be applied to the following sections. AlgInc1 did a lot of redundant computations. In order to compute , we do not need to consider all s that are left of . Instead, we construct a tree from a given by the following rule (see also Figure 3): for each pair of consecutive siblings in , add a new sibling (leaf) between and . 
Newly added nodes are called virtual nodes. We construct a DAG on by: iff one of the following holds • is a virtual node, and is in the rightmost path of , where . • is a virtual node, and is in the leftmost path of , where . Then, we can use the same algorithm as AlgInc1, except that is replaced by . We denote the resulting algorithm by AlgInc2. ###### Lemma 2. AlgInc2 correctly computes s in time. ###### Proof. Since it is straightforward to see the correctness, we analyze the time complexity. We can see that is since • is , • Each non-virtual node in has at most one incoming edge and at most one outgoing edge, • Each edge connects non-virtual node and virtual node. Therefore, the total number of set operations is reduced to , from which the lemma follows. ∎ From Proposition 2, we have: ###### Theorem 1. Unordered tree inclusion can be solved in time. If we analyze the time complexity carefully, we can see that the total time complexity is , where is the height of because each is involved in computation of only for . ## 4 NP-Hardness of Unordered Tree Inclusion for Pattern Trees with Unique Leaf Labels For any node-labeled tree , let be the height of  and let be the set of all leaf labels in . For any , let be the number of times that  occurs in , and define . The decision version of the tree inclusion problem is to determine whether  can be obtained from  by applying node insertion operations. Kilpeläinen and Mannila [11] proved that the decision version of unordered tree inclusion is NP-complete by reducing from Satisfiability. In their reduction, the clauses in a given instance of Satisfiability are represented by node labels in the constructed trees; in particular, for every clause , each literal in  introduces one node in  whose node label represents . By modifying their reduction to assume that each clause contains exactly three literals (i.e., using 3SAT instead of Satisfiability), we immediately have: ###### Corollary 1. The decision version of the unordered tree inclusion problem is NP-complete even if restricted to instances where , , , and . In Kilpeläinen and Mannila’s reduction, the labels assigned to the internal nodes of  are significant. Below, we consider the computational complexity of the special case of the problem where all internal nodes in  and  have the same label, or equivalently, where only the leaves are labeled. The following problem is known to be NP-complete [9]: Exact Cover by 3-Sets (X3C): Given a set and a collection of subsets of  where for every and every belongs to at most three subsets in , does admit an exact cover, i.e., is there a such that and ? From here on, assume w.l.o.g. that in any given instance of X3C,  is an integer and each belongs to at least one subset in . ###### Theorem 2. The decision version of the unordered tree inclusion problem is NP-complete even if restricted to instances where , , , , and all internal nodes have the same label. ###### Proof. Membership in NP follows from the proof of Theorem 7.3 in [11]. To prove NP-hardness, we reduce from X3C. Given an instance of X3C, construct two node-labeled, unordered trees  and  as follows. (Refer to Figure 4 for an example of the reduction.) Let be a set of elements different from , define , and let be an element not in . For any , let denote the height- unordered tree consisting of a root node labeled by  whose children are bijectively labeled by . 
Construct by creating a node  labeled by  and attaching the roots of the following trees as children of : • for each • for each , • for each Construct by taking a copy of  and then, for each , attaching the root of  as a child of the root of . Note that by construction, , , , , and hold. We now show that is included in  if and only if admits an exact cover. First, suppose that admits an exact cover . Then is included in  because all leaves of  labeled by  can be mapped to the -subtrees in  for , while of the leaves labeled by can be mapped to the remaining -subtrees and each of the other leaves with labels from  can be mapped to one of the - and -subtrees. Next, suppose that is included in . By the definitions of  and , each subtree rooted at a child of  can have at most one leaf with a label in  or at most three leaves with labels in  mapped to it from . Since but there are only subtrees in  of the form and , at least subtrees of the form must have a leaf with a label from mapped to them. This means that at most subtrees of the form remain for the  leaves in  labeled by  to be mapped to, and hence, exactly such subtrees have to be used. Denote these subtrees by , , , . Then is an exact cover of . ∎ ## 5 A Polynomial-Time Algorithm for the Case of Occ(p,t)=2 In the following, we require that each leaf of has a unique label and that it appears at no more than leaves in . We denote this number by . We write if is included in under the condition that corresponds to , where denotes the subtree of induced by and its descendants. Then, the following (#) is the crucial part (exponential-time part): Assume w.l.o.g. that has the same label as . Let be the children of . Then, if and only if holds for all for some nodes each pair of which does not have an ancestor-descendant relationship. From the assumption, we have the following observation. ###### Proposition 4. Suppose that has a leaf labeled with . If , then is an ancestor of a leaf (or leaf itself) with label . From (#) and this proposition, for each , we only need to consider minimal nodes s such that , where ‘minimal’ means that there is no descendant of such that , It is easy to see that the number of such minimal nodes is at most for each if . If is such a minimal node, we write . As illustrated in Figure 5, we can have a chain of choices of the subtrees of in . (E.g., if we choose , then we cannot choose . Therefore, we need to choose . If we choose , then we cannot choose . Etc.) This suggests that 2-SAT may be useful. We have: ###### Theorem 3. Unordered tree inclusion can be solved in polynomial time if . ###### Proof. We prove the theorem by using a reduction to 2-SAT. Let . Assume by induction that we know . We define by Occ(ui,M) = |{(ui,vj)| (ui,vj)∈M}|. See Figure 6 for an illustration. We assume w.l.o.g. that for all . Associate a Boolean variable to each element and include the following constraints: • and for each , where (). It means that is mapped to exactly one of or . (Recall that we assume for all .) • for each pair such that holds or and have an ancestor-descendant relationship. It means that the condition of (#) must be satisfied. Then, this 2-SAT instance is satisfiable iff holds. Since 2-SAT is solvable in polynomial time, we have the theorem. ∎ ## 6 An O(1.8d⋅poly(m,n))-time Algorithm for the Case of Occ(p,t)=3 In this section, we present an time algorithm for the case of , where is the maximum outdegree of , , and . The basic strategy is use of dynamic programming: decide whether in a bottom-up way. 
Suppose that has a set of children . Since we use dynamic programming, we can assume that is known for all and for all . We define by M(u,v) = {(ui,vj)| P(ui)≺T(vj) ∧ vj∈V(T(v))}. The crucial task of the dynamic programming procedure is to find an injective mapping from to such that holds for all () and there is no ancestor/descendant relationship between any and (). If this task can be performed in time, the total complexity will be . We assume w.l.o.g. that is given as a set of mapping pairs. For , we define by AncDes(vj,T,M) = {(uk,vh)| (uk,vh)∈M ∧ vh∈({vj}∪Anc(vj,T)∪Des(vj,T))}, where (resp., ) denotes the set of ancestors (resp., descendants) of in where (resp., ). Recall that is defined by Occ(ui,M) = |{(ui,vj)| (ui,vj)∈M}|, where . Let (resp., ) be the number of s such that (resp., ) (see also Figure 6). We assume w.l.o.g. that because means that is uniquely determined. From Theorem 3, we can see the following if there is no pair such that , , and . • The problem can be solved in time: For each such that (i.e., ), we choose (i.e., ) or not. Thus, there exist possibilities. After each choice, there is no such that and Theorem 3 can be applied. • The problem can also be solved in time: For each with (i.e., ), we choose or not. Thus, there are possibilities and after each choice, each with is removed or the problem can be reduced to bipartite matching as shown in Figure 7. It means the problem can be solved in time. We denote the condition (i.e., ‘if’ part of the above) and this algorithm by (##) and ALG-##, respectively, Therefore, the crucial point is how to (recursively) remove pairs such that , , and .
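To make the 2-SAT reduction used in the proof of Theorem 3 (Section 5) concrete, the sketch below encodes the two kinds of constraints described there (each child $u_i$ of $u$ must be mapped to exactly one of its at most two candidate images, and two candidate images that coincide or have an ancestor-descendant relationship cannot both be chosen) and feeds them to a standard implication-graph 2-SAT solver. The candidate lists and conflict pairs in the demo are invented toy data; in the paper they come from the relation $P(u_i) \prec T(v_j)$ and from ancestor/descendant tests, and this is not the authors' implementation.

```python
from collections import defaultdict

def solve_2sat(n_vars, clauses):
    """clauses: pairs of literals; +k / -k stands for x_k / not x_k (1-based)."""
    N = 2 * n_vars
    g, gr = defaultdict(list), defaultdict(list)
    def idx(lit):                        # literal -> vertex: x_k -> 2(k-1), not x_k -> 2(k-1)+1
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
    for l1, l2 in clauses:               # (l1 or l2) == (not l1 -> l2) and (not l2 -> l1)
        for a, b in ((idx(-l1), idx(l2)), (idx(-l2), idx(l1))):
            g[a].append(b)
            gr[b].append(a)

    # Kosaraju: pass 1 records a finish order on g, pass 2 labels SCCs on the reverse graph.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(g[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(g[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    assignment = []
    for k in range(n_vars):
        if comp[2 * k] == comp[2 * k + 1]:
            return None                  # x_k equivalent to its own negation: unsatisfiable
        # SCC ids follow a topological order of g, so the literal whose component
        # comes later can safely be set to true.
        assignment.append(comp[2 * k] > comp[2 * k + 1])
    return assignment

# Invented toy instance: children u_1, u_2, u_3 of u, each with two candidate
# images in T(v); "conflicts" marks text-node pairs that cannot both be used.
candidates = {1: ("v3", "v7"), 2: ("v4", "v7"), 3: ("v5", "v9")}
conflicts = {frozenset({"v3", "v4"})}

var = {}                                 # (child index, candidate image) -> variable number
for i, cs in candidates.items():
    for t in cs:
        var[(i, t)] = len(var) + 1

clauses = []
for i, (c1, c2) in candidates.items():   # u_i is mapped to exactly one of its candidates
    a, b = var[(i, c1)], var[(i, c2)]
    clauses += [(a, b), (-a, -b)]
items = list(var.items())
for p in range(len(items)):
    for q in range(p + 1, len(items)):
        (i, ti), a = items[p]
        (j, tj), b = items[q]
        if i != j and (ti == tj or frozenset({ti, tj}) in conflicts):
            clauses.append((-a, -b))     # conflicting images cannot both be chosen

print(solve_2sat(len(var), clauses))     # a satisfying assignment (booleans) or None
```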
2020-07-11 08:12:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430170655250549, "perplexity": 726.9688613627294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655924908.55/warc/CC-MAIN-20200711064158-20200711094158-00197.warc.gz"}
http://www.exampleproblems.com/wiki/index.php/Multivariable_Calculus
# Multivariable Calculus ## Vector Calculus ### Vector Differentiation Solution If $A=xyz{\vec i}+xz^{2}{\vec j}-y^{3}{\vec k},B=x^{3}{\vec i}-xyz{\vec j}+x^{2}z{\vec k}\,$ calculate ${\frac {\partial ^{2}{\vec A}}{\partial y^{2}}}\times {\frac {\partial ^{2}{\vec B}}{\partial x^{2}}}$ at the point (1,1,0). Solution Find ${\frac {dr}{dt}},{\frac {d^{2}r}{dt^{2}}}$ when $r=3i-6t^{2}j+4tk$ Solution If $r=\sin ti+\cos tj+tk\,$ Find ${\frac {dr}{dt}},{\frac {d^{2}r}{dt^{2}}},\left|{\frac {dr}{dt}}\right\vert ,\left|{\frac {d^{2}r}{dt^{2}}}\right\vert$ Solution If $r=\cos nti+\sin ntj\,$ where n is constant and t varies, prove that $r\times ({\frac {dr}{dt}})=nk\,$ and $r\cdot ({\frac {dr}{dt}})=0\,$. Solution If $r=e^{{nt}}a+e^{{-nt}}b\,$ where a,b are constant vectors, show that $({\frac {d^{2}r}{dt^{2}}})-n^{2}r=0\,$ Solution If $r=a\cos \omega t+b\sin \omega t\,$,Show that $r\times {\frac {dr}{dt}}=\omega a\times b\,$ and ${\frac {d^{2}r}{dt^{2}}}=-\omega ^{2}r\,$ Solution If $u=t^{2}i-tj+(2t+1)k\,$ and $v=(2t-3)i+j-tk\,$, Find ${\frac {d}{dt}}(u\cdot v)\,$ and ${\frac {d}{dt}}(u\times v)\,$ where t=1. Solution If $a=\sin \theta i+\cos \theta j+\theta k,b=\cos \theta i-\sin \theta j-3k,c=2i+3j-k\,$.Find ${\frac {d}{d\theta }}[a\times (b\times c)]\,$ at $\theta =0\,$ Solution A particle moves along a curve whose parametric equations are $x=e^{{-t}},y=2\cos 3t,z=\sin 3t\,$.Find the velocity and acceleration at t=0. Solution A particle moves along the curve $x=t^{3}+1,y=t^{2},z=2t+5\,$ where t is the time.Find the components of its velocity and acceleration at t=1 in the direction of $i+j+3k\,$. Solution A particle moves so that its position vector is given by $r=\cos \omega ti+\sin \omega tj\,$ where $\omega \,$ is a constant.Show that i).The velocity of the particle is perpendicular to r ii).The acceleration is directed towards the origin and has magnitude proportional to the distance from the origin. Solution Show that if a,b,c are constant vectors,then $r=at^{2}+bt+c\,$ is the path of a particle moving with constant acceleration. SolutionIf $f=\cos xyi+(3xy-2x^{2})j-(3x+2y)k\,$,find the value of ${\frac {\partial f}{\partial x}},{\frac {\partial f}{\partial y}},{\frac {\partial ^{2}f}{\partial x^{2}}},{\frac {\partial ^{2}f}{\partial y^{2}}},{\frac {\partial ^{2}f}{\partial x\partial y}}\,$. Solution If $f=(2x^{2}y-x^{4})i+(e^{{xy}}-y\sin x)j+x^{2}\cos yk\,$.Verify that ${\frac {\partial ^{2}f}{\partial x\partial y}}={\frac {\partial ^{2}f}{\partial y\partial x}}\,$. Solution If $\phi (xyz)=xy^{2}z\,$ and $f=xzi-xyj+yz^{2}k\,$.Find ${\frac {\partial ^{3}(\phi f)}{\partial x^{2}\partial z}}\,$ at (2,-1,1). ### Vector Integration Solution If $f(t)=(t-t^{2})i+2t^{3}j-3k\,$,Find i).$\int f(t)\,dt\,$ ii).$\int _{1}^{2}f(t)\,dt\,$ Solution If $f(t)=ti+(t^{2}-2t)j+(3t^{2}+3t^{3})k\,$,find $\int _{0}^{1}f(t)\,dt\,$. Solution Evaluate $\int _{0}^{1}(e^{t}i+e^{{-2t}}j+tk)\,dt\,$ Solution If $r=ti-t^{2}j+(t-1)k\,$ and $s=2t^{2}i+6tk\,$,Evaluate $\int _{0}^{2}r\cdot s\,dt\,$ and $\int _{0}^{2}r\times s\,dt\,$. Solution Evaluate $\int _{0}^{2}a\cdot b\times c\,dt\,$ where $a=ti-3j+2tk,b=i-2j+2k,c=3i+tj-k\,$ Solution Given that $r(t)=2i-j+2k\,$ when t=2,$r(t)=4i-2j+3k\,$ when t=3, Show that $\int _{2}^{3}[r\cdot {\frac {dr}{dt}}]\,dt=10\,$. 
Solution Evaluate $\int _{1}^{2}r\times {\frac {d^{2}r}{dt^{2}}}\,dt\,$ where $r=2t^{2}i+tj-3t^{3}k\,$ Solution If $r(t)=5t^{2}i+tj-r^{3}k\,$,prove that $\int _{1}^{2}r\times {\frac {d^{2}r}{dt^{2}}}\,dt=-14i+75j-15k\,$ Solution Evaluate $\int a\cdot [r\times {\frac {d^{2}r}{dt^{2}}}]\,dt\,$ Solution Evaluate $\int _{1}^{2}[a\cdot (b\times c)+a\times (b\times c)]\,dt\,$ where $a=ti-3j+2tk,b=i-2j+2k,c=3i+tj-k\,$ Solution The acceleration of a moving particle at any time t is given by ${\frac {d^{2}r}{dt^{2}}}=12\cos 2ti-8\sin 2tj+16tk\,$.Find the velocity v and displacement r at any time t,if t=0,v=0 and r=0. Solution Find the value of r satisfying the equation ${\frac {d^{2}r}{dt^{2}}}=6ti-24t^{2}j+4\sin tk\,$ given that $r=2i+j,{\frac {dr}{dt}}=-i-3k\,$ at t=0. Solution If the acceleration of a particle at any time t greater than or equal to zero is given by $a=3\cos ti+4\sin tj+t^{2}k\,$ and the velocity v and displacement r are zero at t=0, then find v and r at any time t. Solution Integrate ${\frac {d^{2}r}{dt^{2}}}=-n^{2}r\,$ SolutionIf $f(x,y,z)=x^{3}+y^{3}+z^{3}+3xyz\,$ then find $\nabla f\,$ SolutionIf $f(x,y,z)=3x^{2}y-y^{3}z^{2}\,$,find $\nabla f\,$ at the point (1,-2,-1). Solution If $r=xi+yj+zk\,$ and $r=|r|=(x^{2}+y^{2}+z^{2})^{{{\frac {1}{2}}}}\,$, Prove that i). $\nabla f(r)=f'(r)\nabla f\,$ ii). $\nabla r=({\frac {1}{r}})r\,$ Solution If $\phi (x,y,z)=(3r^{2}-4r^{{{\frac {1}{2}}}}+6r^{{-{\frac {1}{3}}}})\,$,Show that $\nabla \phi =2(3-r^{{-{\frac {3}{2}}}}-r^{{-{\frac {7}{3}}}})r\,$ Solution If $u=x+y+z,v=x^{2}+y^{2}+z^{2},w=xy+yz+zx\,$ Prove that ${\mathrm {grad}}u\cdot [\nabla v\times \nabla w]=0\,$ Solution Evaluate $\nabla e^{{r^{2}}}\,$ where $r^{2}=x^{2}+y^{2}+z^{2}\,$ Solution Show that $(a\cdot \nabla )\phi =a\cdot \nabla \phi \,$ Solution If $F=[y{\frac {\partial f}{\partial z}}-z{\frac {\partial f}{\partial y}}]i+[z{\frac {\partial f}{\partial x}}-x{\frac {\partial f}{\partial z}}]j+[x{\frac {\partial f}{\partial y}}-y{\frac {\partial f}{\partial x}}]k\,$,Prove that i).$F=r\times \nabla f\,$ ii).$F\cdot r=0\,$ iii).$F\cdot \nabla f=0\,$ SolutionIf $u=3x^{2}y,v=xz^{2}-2y\,$,find $(\nabla u)\cdot (\nabla v)\,$ SolutionFind $\nabla \phi ,|\nabla \phi |\,$ where $\phi (x,y,z)=(x^{2}+y^{2}+z^{2})e^{{-(x^{2}+y^{2}+z^{2})^{{{\frac {1}{2}}}}}}\,$ SolutionIf $f=x^{2}yi-2xzj+2yzk\,$, find i). $divf\,$. ii). Evaluate $div[(x^{2}-y^{2})i+2xyj+(y^{2}-2xy)k]\,$ Solution If $a_{1}i+a_{2}j+a_{3}k\,$,prove that $\nabla \cdot a=(\nabla a_{1})\cdot i+(\nabla a_{2})\cdot j+(\nabla a_{3})\cdot k\,$ SolutionIf $a=(x+3y)i+(y-3z)j+(x-2z)k\,$,find $(a\cdot \nabla )a\,$ Solution Evaluate $\nabla \cdot (a\times r)r^{n}\,$ where a is a constant vector. Solution Find $\nabla \times f\,$ or curl F, where i). $F=x^{2}yi-2xzj+2yzk\,$ ii). $F=(x^{2}-y^{2})i+2xyj+(y^{2}-2xy)k\,$ Solution Prove that $curlcurlF=0\,$ where $F=zi+xj+yk\,$ Solution If $V=e^{{xyz}}(i+j+k)\,$,find $curlV\,$ Solution If $r=xi+yj+zk\,$,prove that i).$divr=3\,$ ii). If $r=xi+yj+zk\,$ show that $curlr=0\,$ Solution IF $f=xy^{2}i+2x^{2}yzj-3yz^{2}k\,$,Find ${\mathrm {div}}f,{\mathrm {curl}}f\,$.What are their values at$(1,-1,1)\,$ Solution Find the ${\mathrm {curl}}\,$ of the vector $V=(x^{2}+yz)i+(y^{2}+zx)j+(z^{2}+xy)k\,$ at the point$(1,2,3)\,$ Solution If $f=(x+y+1)i+j+(-x-y)k\,$,prove that $f\cdot {\mathrm {curl}}f=0\,$ Solution a). Prove that vector $f=(x+3y)i+(y-3z)j+(x-2z)k\,$ is solenoidal. b). Determine the constant 'a' so that the vector $f=(x+3y)i+(y-2z)j+(x+az)k\,$ is solenoidal. Solution a). 
Show that the vector $f=(\sin y+z)i+(x\cos y-z)j+(x-y)k\,$ is irrational. b). Determine the constants 'a','b','c' so that the vector $f=(x+2y+az)i+(bx-3y-z)j+(4x+cy+2z)k\,$ is irrational. Solution Prove that $\nabla \cdot (r^{3}r)=6r^{3}\,$ Solution Prove ${\mathrm {div}}[r\nabla r^{{-3}}]=3r^{{-4}}\,$ or $\nabla \cdot [r\nabla ({\frac {1}{r^{3}}})]={\frac {3}{r^{4}}}\,$ Solution If a is a constant vector,prove that ${\mathrm {curl}}{\frac {a\times r}{r^{3}}}=-{\frac {a}{r^{3}}}+{\frac {3r}{r^{5}}}(a\cdot r)\,$ Solution Show that $\nabla ^{2}({\frac {x}{r^{3}}})=0\,$ Solution Show that ${\mathrm {div}}{\mathrm {grad}}(r^{m})=m(m+1)r^{{m-2}}\,$ Solution Evaluate ${\mathrm {curl}}{\mathrm {grad}}(r^{m})\,$ where $r=|r|=|xi+yj+zk|\,$ Solution If $u=x^{2}-y^{2}+4z\,$,Show that $\nabla ^{{2}}u=0\,$ Solution Show that $u=ax^{2}+by^{2}+cz^{2}\,$ satisfies Laplace equation $\nabla ^{2}u=0\,$ Solution If f and g are two scalar functions,prove that ${\mathrm {div}}(f\nabla g)=f\nabla ^{2}g+\nabla f\times \nabla g\,$ Solution Show that $\nabla \cdot (\nabla \times r)=0\,$ if $\nabla \times V=0\,$ Solution Prove that $\nabla ^{2}({\frac {1}{r}})=0\,$,where $r^{2}=x^{2}+y^{2}+z^{2}\,$ Solution Evaluate $\nabla ^{2}({\frac {x}{r^{2}}})\,$ Solution Prove that $V\times {\mathrm {curl}}V={\frac {1}{2}}\nabla V^{2}-(V\cdot \nabla )V\,$ SolutionIf $v=v_{1}i+v_{2}j+v_{3}k\,$,prove that $\nabla \times v=\nabla v_{1}\times i+\nabla v_{2}\times j+\nabla v_{3}\times k\,$ Solution If r(P) be the vector from the origin O to a point P in the xy-plane,then show that the plane scalar field $u(P)=\log r\,$ satisfies the equation $\nabla ^{2}u=0\,$ Solution Prove that ${\mathrm {div}}(A\times r)=r\cdot {\mathrm {curl}}A\,$ Solution Prove that $\nabla \times (F\times r)=2F-(\nabla \cdot F)r+(r\cdot \nabla )F\,$ Solution If $u=e^{{2x}}+x^{2}z\,$ and $v=2z^{2}y-xy^{2}\,$,find ${\mathrm {grad}}(uv)\,$ at the point (1,0,2). Solution Prove that ${\mathrm {curl}}[r\times (a\times r)]=3r\times a\,$,where a is a constant vector. Solution Find the unit normal to the surface $z=x^{2}+y^{2}\,$ at the point (-1,-2,5). Solution Find the directional derivative of $\phi =x^{2}yz+2xz^{2}\,$ at (1,-2,-1) in the direction of $2i-j-2k\,$ Solution Calculate the maximum rate of change and the corresponding direction for the function $\phi =x^{2}y^{3}z^{4}\,$ at the point $2i+3j-k\,$ Solution Find the equation of the tangent plane and normal to the surface $xyz=4\,$ at the point (1,2,2). Solution Find the equation of the tangent line and normal plane to the curve of intersection of $x^{2}+y^{2}+z^{2}=1,x+y+z=1\,$ at (1,0,0). Solution Find the angle between the curves $x^{2}+y^{2}+z^{2}=9,z=x^{2}+y^{2}-3\,$ at the point (2,-1,2). Solution Find the constants a and b so that surfaces $ax^{2}-byz=(a+2)x\,$ will be orthogonal to the surface $4x^{2}y+z^{3}=4\,$ at the point (1,-1,2). ### Line, Surface & Volume Integrals Solution If $F=3xyi-y^{2}j\,$,Evaluate $\int _{{C}}F\cdot dr\,\,$,where C is the curve $y=2x^{2}\,$ in the xy-plane from (0,0)to (1,2). 
Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=(x^{2}+y^{2})i-2xyj\,$,C is the rectangle in xy-plane bounded by $y=0,x=a,y=b,x=0\,$ Solution If $F=(2x+y)i+(3y-x)j\,$,evaluate $\int _{{C}}F\cdot \,dr\,$ where C is the curve in the xy-plane consisting of the straight line from $(0,0)\,$ to $(2,0)\,$ and then to $(3,2)\,$ Solution If $F=(3x^{2}+6y)i-14yzj+20xz^{2}k\,$,evaluate $\int _{{C}}F\cdot \,dr\,$ where C is the straight line joining $(0,0,0),(1,1,1)\,$ Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=\cos yi-x\sin yj\,$ and C is the curve $y={\sqrt {1-x^{2}}}\,$ in the xy-plane from $(1,0)\,$ to $(0,1)\,$ Solution Evaluate $\int F\cdot \,dr\,$ along the curve $x^{2}+y^{2}=1,z=1\,$ in the positive direction from (0,1,1) to (1,0,1),where $F=(2x+yz)i+xzj+(xy+2z)k\,$ Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=x^{2}y^{2}i+yj\,$ and the curve c is $y^{2}=4x\,$ in the xy-plane from (0,0) to (4,4) where $r=xi+yj\,$ Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=xyi+(x^{2}+y^{2})j\,$ and C in the arc of the curve $y=x^{2}-4\,$ from (2,0) to (4,12). Solution If $F=(2x^{2}+y^{2})i+(3y-4x)j\,$ evaluate $\int F\cdot \,dr\,$ around the triangle ABC whose vertices are $A(0,0),B(2,0),C(2,1)\,$ Solution If $F=(2y+3)i+xzj+(yz-x)k\,$.evaluate $\int _{{C}}F\cdot \,dr\,$ where C is the path consisting of the straight lines from (0,0,0) to (0,0,1) then to (0,1,1) and then to (2,1,1). Solution If $A=(2y+3)i+xzj+(yz-x)k\,$,evaluate $\int _{{C}}A\cdot \,dr\,$ along the curve C.$x=2t^{2},y=t,z=t^{3}\,$ from t=0 to t=1. Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=zi+xj+yk\,$ and C is the arc of the curve $r=\cos ti+\sin tj+tk\,$ from $t=0\,$ to $t=\pi \,$ Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=yzi+zxj+xyk\,$ and C is the portion of the curve $r=a\cos ti+b\sin tj+ctk\,$ from $t=0\,$ to $t={\frac {\pi }{2}}\,$ Solution Evaluate $\int _{{C}}F\cdot \,dr\,$ where $F=xyi+yzj+zxk\,$ and C is the arc of the curve $r=a\cos \theta i+a\sin \theta j+a\theta k\,$ from $\theta =0\,$ to $\theta ={\frac {\pi }{2}}\,$ Solution If $F=xyi-zj+x^{2}k\,$,evaluate $\int _{{C}}F\times \,dr\,$ where C is the curve $x=t^{2},y=2t,z=t^{3}\,$ from t=0 to 1. Solution Find the total work done in moving a particle in a force field given by $F=3xyi-5zj+10xk\,$ along the curve $x=t^{2}+1,y=2t^{2},z=t^{3}\,$ from t=1 to t=2. Solution Find the work done when a force $F=(x^{2}-y^{2}+x)i-(2xy+y)j\,$ moves a particle in xy-plane from (0,0) to (1,1) along the curve $y^{2}=x\,$ Solution Find the work done in moving a particle in a force field $F=3x^{2}i+(2xz-y)j+zk\,$ along the line joining (0,0,0) to (2,1,3). Solution Find the work done in moving a particle once round a circle C in the xy-plane,if the circle has centre at the origin and radius 3 and when the force field is given by $F=(2x-y+z)i+(x+y-z^{2})j+(3x-2y+4z)k\,$ Solution Find the circulation of F round the curve C where $F=yi+zj+zxk\,$ and C is the circle $x^{2}+y^{2}=1,z=0\,$ Solution Find the circulation of F round the curve C where $F=e^{x}\sin yi+e^{x}\cos yj\,$ and C is the rectangle whose vertices are $(0,0),(1,0),(1,{\frac {\pi }{2}}),(0,{\frac {\pi }{2}})\,$ Solution Evaluate $\iint _{{S}}(y^{2}z^{2}i+z^{2}x^{2}j+x^{2}y^{2}k)\cdot \,ds\,$ where S is the part of the sphere $x^{2}+y^{2}+z^{2}=1\,$ above the xy-plane. Solution If $F=yi+(x-2xz)j-xyk\,$,evaluate $\iint _{{S}}(\nabla \times F)\cdot n\,dS\,$ where S is the surface of the sphere $x^{2}+y^{2}+z^{2}=a^{2}\,$ above the xy-plane. 
Solution Evaluate $\iint _{{S}}({\mathrm {curl}}F\cdot n\,dS\,$ where $F=yi+zj+xk\,$ and surface S in the part of the sphere $x^{2}+y^{2}+z^{2}=1\,$ above the xy-plane. Solution Evaluate $\iint _{S}(y^{2}zi+z^{2}xj+x^{2}yk)\cdot dS\,$ where S is the surface of the sphere $x^{2}+y^{2}+z^{2}=a^{2}\,$ lying in the positive octant. Solution Evaluate $\iint _{{S}}F\cdot n\,dS\,$ over the surface of the cylinder $x^{2}+y^{2}=9\,$ included in the first octant between z=0 and z=4 where $F=zi+xj-yzk\,$ Solution Evaluate $\iint _{{S}}F\cdot n\,dS\,$ where $F=2yxi-yzj+x^{2}k\,$ over the surface of the cube bounded by the coordinate planes and planes $x=a,y=a,z=a\,$ Solution Evaluate $\iint _{{S}}F\cdot n\,dS\,$ where $F=(x-z)i+(x^{3}+yz)j-3xy^{2}k\,$ and S in the surface of the cone $z=2-{\sqrt {(x^{2}+y^{2})}}\,$ above the xy-plane. Solution Evaluate $\iiint _{V}F\,dv\,$ where $F=xi+yj+zk\,$ and V is the region bounded by the surfaces $x=0,x=2,y=0,y=6,z=4,z=x^{2}\,$ Solution Evaluate $\iiint _{V}\phi \,dv\,$ where $\phi =45x^{2}y\,$ and V is the closed region bounded by the planes $4x+2y+z=8,x=0,y=o,z=0\,$ Solution If $F=2xzi-xj+y^{2}k\,$ evaluate $\iiint _{V}F\,dV\,$ where V is the region bounded by the planes $x=y=z=0,x=y=z=1\,$ Solution Let r denote the position vector any point (x,y,z) measured from an origin O and let $r=|r|\,$.Evaluate $\iint _{S}{\frac {r}{|r|^{3}}}\cdot dS\,$ where S denotes the sphere of radius a with center at the origin. Solution Evaluate $\iint _{S}F\cdot ndS\,$,where $F=yi+2xj-zk\,$ and S in the surface of the plane 2x+y=6 in the first octant cut off by the plane z=4. Solution If $F=(2x^{2}-3z)i-2xyj-4xk\,$,then evaluate i). $\iiint _{V}\nabla \times FdV\,$ ii). $\iiint _{V}FdV\,$, where V is the region bounded by x=0,y=0,z=0 and $2x+2y+z=4\,$ Solution Evaluate $\iiint _{V}(2x+y)dV\,$ where V is closed region bounded by the cylinder $z=4-x^{2}\,$ and the planes x=0,y=0,y=2 and z=0. Solution Find the volume of the region common to the intersecting cylinders $x^{2}+y^{2}=a^{2}\,$ and $x^{2}+z^{2}=a^{2}\,$ ### Green, Stokes & Gauss Divergence Theorems • Green's Theorem in the plane - Relation between plane and line integrals If R is a closed region in the xy-plane bounded by a simple closed curve C and if $\phi (x,y)\,$ and $\psi (x,y)\,$ are continuous functions having continuous partial derivatives in R,then $\oint (\psi dx+\phi dy)=\iint _{R}\left[{\frac {\partial \phi }{\partial x}}-{\frac {\partial \psi }{\partial y}}\right]dxdy\,$ where C is traversed in the positive (anti-clockwise) direction. • Stokes Theorem - Relation between surface and line integrals If F is any continuously differentiable vector point function and S is a surface bounded by a curve C,then $\oint F\cdot dr=\iint _{S}{\mathrm {curl}}F\cdot ndS\,$ where the unit normal n at any point of S is drawn in the direction in which a right-handed screw would move when rotated in the sense of description of C. Solution If $F=(x^{2}-y^{2})i+2xyj\,$ and $r=xi+yj\,$,find the value of $\int F\cdot dr\,$ around the rectangular boundary x=0,x=a,y=0,y=b. SolutionEvaluate by Green's theorm in plane$\int _{C}(e^{{-x}}\sin ydx+e^{{-x}}\cos ydy)\,$ where C is the rectangle with vertices (0,0),$(\pi ,0),(\pi ,{\frac {\pi }{2}}),(0,{\frac {\pi }{2}})\,$. Solution Verify Green's theorm in plane for $\int _{C}[(x^{2}-xy^{3})dx+(y^{2}-2xy)dy]\,$ where C is the square with vertices (0,0),(2,0),(2,2),(0,2). 
Solution Verify Green's theorm in plane for $\oint _{C}[(3x^{2}-8y^{2})dx+(4y-6xy)dy]\,$ where C is the boundary of the region defined by x=0,y=0,$x+y=1\,$ Solution Apply Green's theorm in the plane to evaluate $\int _{C}[(2x^{2}-y^{2})dx+(x^{2}+y^{2})dy]\,$ where C is the boundary of the curve enclosed by the x-axis and the semi-circle$y=(1-x^{2})^{{{\frac {1}{2}}}}\,$ Solution Show that the area bounded by a simple closed curve C is given by ${\frac {1}{2}}\int _{C}(xdy-ydx)\,$. Hence deduce that the area of the ellipse ${\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1\,$ Solution Verify Green's theorm in a plane $\oint _{C}[(x^{2}-2xy)dx+(x^{2}y+3)dy]\,$ where C is the boundary of the region defined by $y^{2}=8x,x=2\,$ Solution Evaluate by Green's theorm $\oint _{C}[(\cos x\sin y-xy)dx+\sin x\cos ydy]\,$ where C is the circle $x^{2}+y^{2}=1\,$ Solution Evaluate by Green's theorm in the plane $\oint _{C}[(x^{2}-\cos hy)dx+(y+\sin x)dy]\,$ where C is the rectangle with vertices (0,0),($\pi \,$,0),($\pi \,$,1),(0,1). Solution Evaluate $\oint _{C}[(y-\sin x)dx+\cos xdy]\,$ where C is the triangle whose vertices are (0,0),(${\frac {\pi }{2}}\,$,0),(${\frac {\pi }{2}}\,$, by using Green's theorm in plane. Solution Verify Green's theorm in the plane for $\oint _{C}[(xy+y^{2})dx+x^{2}dy]\,$ where C is the closed curve of the region bounded by $y=x,y=x^{2}\,$ Solution Verify Green's theorm in plane for $\oint _{C}[(3x^{2}-8y^{2})dx+(4y-6xy)dy]\,$ where C is the region bounded by the parabolas $y^{2}=x,y=x^{2}\,$ Solution Verify Stokes' theorm for $F=(x^{2}+y^{2})i-2xyj\,$ taken round the rectangle bounded by $x=\pm a,y=0,y=b\,$ Solution Evaluate $\oint _{C}F\cdot dr\,$ by Stokes' theorm where $F=y^{2}i+x^{2}j-(x+z)k\,$ and C is the boundary of the triangle with vertices at (0,0,0),(1,0,0),(1,1,0). Solution If $F=(2x^{2}+y^{2})i+(3y-4x)j\,$ evaluate $\oint _{C}F\cdot dr\,$ where C is the boundary of the triangle with vertices (0,0),(2,0),(2,1) Solution Verify Stokes'theorm for the function $F=x^{2}i+xyj\,$ integrated along the rectangle in the plane z=0,where sides are along the lines x=0,y=0,x=a and y=b. Solution Evaluate by Stokes'theorm $\oint _{C}(e^{x}dx+2ydy-dz)\,$ where C is the curve $x^{2}+y^{2}=4,z=2\,$ Solution Evaluate by Stokes'theorm $\oint _{C}(\sin zdx-\cos xdy+\sin ydz)\,$ where C is the boundary of the rectangle $0\leq x\leq \pi ,0\leq y\leq 1,z=3\,$ Solution By converting into line integral,evaluate $\iint _{S}(\nabla \times A)\cdot ndS\,$, where $A=(x-z)i+(x^{3}+yz)j-3xy^{2}k\,$ and S is the surface of the cone $z=2-{\sqrt {(x^{2}+y^{2})}}\,$ above the xy-plane. Solution By converting into a line integral evaluate $\iint _{S}(\nabla \times F)\cdot ndS\,$ where $F=(x^{2}+y-4)i+3xyj+(2xy+z^{2})k\,$ and S is the surface of the paraboloid $z=4-(x^{2}+y^{2})\,$ above the xy-plane. Solution Evaluate $\iint _{S}(\nabla \times F)\cdot ndS\,$ where $F=(y-z+2)i+(yz+4)j-xzk\,$ and S is the surface of the cube $x=y=z=0,x=y=z=2\,$ above the xy-plane. Solution Verify Stokes'theorm for $F=yi+zj+xk\,$ where S is the upper half surface of the sphere $x^{2}+y^{2}+z^{2}=1\,$ and C is its boundary. Solution Verify Stokes'theorm for the vector $F=3yi-xzj+yz^{2}k\,$ where S is the surface of the paraboloid $2z=x^{2}+y^{2}\,$ bounded by z=2 and C is its boundary. 
Solution Verify Stokes'theorm for the function $F=zi+xj+yk\,$ where C is the unit circle in xy-plane bounding the hemisphere $z={\sqrt {(1-x^{2}-y^{2})}}\,$ Solution Apply Stokes'theorm to prove that $\int _{C}(ydx+zdy+xdz)=-2{\sqrt {2}}\pi a^{2}\,$,where C is the curve given by $x^{2}+y^{2}+z^{2}-2ax-2ay=0,x+y=2a\,$ and begins at the point (2a,0,0) and goes at first below the xy-plane. Solution By Stokes' theorem,prove that ${\mathrm {curl}}{\mathrm {grad}}\phi =0\,$ Solution Evaluate $\iint _{S}F\cdot ndS\,$ where $F=axi+byj+czk\,$ and S is the surface of the sphere $x^{2}+y^{2}+z^{2}=1\,$ Solution Use divergence theorm to find $\iint _{S}F\cdot ndS\,$ for the vector $F=xi-yj+2zk\,$ over the sphere $x^{2}+y^{2}+(z-1)^{2}=1\,$ Solution If $F=axi+byj+czk\,$ where a,b,c are constants,show that $\iint _{S}(n\cdot F)dS={\frac {4}{3}}\pi (a+b+c)\,$,S being the surface of the sphere $(x-1)^{2}+(y-2)^{2}+(z-3)^{2}=1\,$ Solution Find $\iint _{S}A\cdot ndS\,$,where $A=(2x+3z)i-(xz+y)j+(y^{2}+2z)k\,$ and S is the surface of the sphere having center at (3,-1,2) and radius 3 units. Solution By using the Gauss Divergence theorm,evaluate $\iint _{S}(xdydz+ydzdx+zdxdy)\,$,where S is the surface of the sphere $x^{2}+y^{2}+z^{2}=4\,$ Solution Apply divergence theorm to evaluate $\iint _{S}[(x+z)dydz+(y+z)dzdx+(x+y)dxdy]\,$ where S is the surface of the sphere $x^{2}+y^{2}+z^{2}=4\,$ Solution If $F=4xzi-y^{2}j+yzk\,$,then evaluate $\iint _{s}F\cdot ndS\,$ where S is the surface of the cube enclosed by x=0,x=1,y=0,y=1,z=0 and z=1. Solution Evaluate $\iint F\cdot ndS\,$,where $F=4xyi+yzj-xzk\,$ and S is the surface of the cube bounded by the planes x=0,x=2,y=0,y=2,z=0,z=2. Solution Apply Gauss'theorm to evaluate $\iint _{S}[(x^{3}-yz)dzdx-2x^{2}ydzdx+zdxdy]\,$ over the surface S of a cube bounded by the coordinate planes and the planes x=y=z=a. Solution Apply Gauss'theorm to show that $\iint _{S}(x^{3}-yz)i-2x^{2}yj+2k]\cdot ndS={\frac {a^{5}}{3}}\,$,where S denotes the surface of the cube bounded by the planes $x=0,x=a,y=0,y=a,z=0,z=a\,$ Solution Evaluate $\iint _{s}[x^{2}dydz+y^{2}dzdx+2z(xy-x-y)dxdy]\,$ where S is the surface of the cube $0\leq x\leq 1,0\leq y\leq 1,0\leq z\leq 1\,$ Solution Find the value of $\iint _{S}(F\times \nabla \phi )\cdot ndS\,$ where $F=x^{2}i+y^{2}j+z^{2}k,\phi =xy+yz+zx,S:x=\pm 1,y=\pm 1,z=\pm 1\,$ Solution Evaluate $\iint _{S}F\cdot ndS\,$,where $F=xi-yj+(z^{2}-1)k\,$ and S is the closed surface bounded by the planes $z=0,z=1\,$ and the cylinder $x^{2}+y^{2}=4\,$ by the application of Gauss'theorm. Solution Use Gauss' theorem to evaluate the integral $\iint _{S}F\cdot ndS\,$ of the vector field $F=xy^{2}i+y^{3}j+y^{2}zk\,$ through the closed surface formed by the cylinder $x^{2}+y^{2}=9\,$ and the plane $z=0,z=2\,$ Solution Use Gauss divergence theorem to find $\iint _{S}F\cdot ndS\,$ where $F=2x^{2}yi-y^{2}j+4xz^{2}k\,$ and S is the closed surface in the first octant bounded by $y^{2}+z^{2}=9,x=2\,$ Solution Evaluate $\iint _{S}(zx^{2}dxdy+x^{3}dydz+yx^{2}dzdx)\,$ where S is the closed surface consisting of the cylinder $x^{2}+y^{2}=4\,$ and the circular discs $z=0,z=3\,$ Solution If $F=\nabla \phi ,\nabla ^{2}\phi =-4\pi \rho \,$ show that $\iint _{S}F\cdot ndS=-4\pi \iiint _{V}\rho dV\,$ SolutionIf $\phi \,$ is harmonic in V,then $\iint _{S}{\frac {\partial \phi }{\partial n}}dS=\iiint _{V}\nabla ^{2}\phi dV\,$ where S is the surface enclosing V. Solution Prove that i). 
Solution: Show that for any closed surface S, (i) $\iint_S n\,dS=0$; (ii) $\iint_S r\times n\,dS=0$; (iii) $\iint_S(\nabla\phi)\times n\,dS=0$.
Solution: If V is the volume of a region T bounded by a surface S, then prove that $V=\iint_S x\,dydz=\iint_S y\,dzdx=\iint_S z\,dxdy$.
Solution: Evaluate $\iint_S(\nabla\times F)\cdot n\,dS$, where $F=(x-z)i+(x^3+yz)j-3xy^2k$ and S is the surface of the cone $z=2-\sqrt{x^2+y^2}$ above the xy-plane.
Solution: Evaluate $\iint_S(x^3dydz+y^3dzdx+z^3dxdy)$ by converting the surface integral into a volume integral. Here S is the surface of the sphere $x^2+y^2+z^2=1$.
Solution: Evaluate with the help of the divergence theorem the integral $\iint_S[xz^2dydz+(x^2y-z^3)dzdx+(2xy+y^2z)dxdy]$, where S is the entire surface of the hemispherical region bounded by $z=\sqrt{a^2-x^2-y^2}$, $z=0$.
Solution: Evaluate $\iint_S(ax^2+by^2+cz^2)\,dS$ over the sphere $x^2+y^2+z^2=1$ using the divergence theorem.
Solution: Compute $\iint_S(a^2x^2+b^2y^2+c^2z^2)^{\frac{1}{2}}\,dS$ over the ellipsoid $ax^2+by^2+cz^2=1$.
Solution: Compute $\iint_S(a^2x^2+b^2y^2+c^2z^2)^{-\frac{1}{2}}\,dS$ over the ellipsoid $ax^2+by^2+cz^2=1$.
Solution: Evaluate $\iint_S F\cdot n\,dS$ over the entire surface of the region above the xy-plane bounded by the cone $z^2=x^2+y^2$ and the plane $z=4$, if $F=xi+yj+z^2k$.
Solution: By using the Gauss divergence theorem, evaluate $\iint_S(xi+yj+z^2k)\cdot n\,dS$, where S is the closed surface bounded by the cone $x^2+y^2=z^2$ and the plane $z=1$.
Solution: Evaluate $\iint_S(\nabla\times F)\cdot n\,dS$ where $F=[xye^z+\log(z+1)-\sin x]\,k$ and S is the surface of the sphere $x^2+y^2+z^2=a^2$ above the xy-plane.
Solution: Evaluate $\iint_S xyz\,dS$ where S is the surface of the sphere $x^2+y^2+z^2=a^2$.
Solution: By transforming to a triple integral, evaluate $\iint_S(x^3dydz+x^2y\,dzdx+x^2z\,dxdy)$ where S is the closed surface bounded by the planes $z=0$, $z=b$ and the cylinder $x^2+y^2=a^2$.
Solution: Evaluate $\iint_S\frac{1}{p}\,dS$ where S is the surface of the ellipsoid $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$ and p is the perpendicular drawn from the origin to the tangent plane at $(x,y,z)$.
Solution: Show that $\iint_S(x^2i+y^2j+z^2k)\cdot n\,dS$ vanishes, where S denotes the surface of the ellipsoid $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$.
Solution: Verify the divergence theorem for $F=4xzi-y^2j+yzk$ taken over the cube bounded by $x=0$, $x=1$, $y=0$, $y=1$, $z=0$, $z=1$.
Solution: Evaluate $\iint_S(x^3dydz+y^3dzdx)$ where S is the surface of the sphere $x^2+y^2+z^2=a^2$.
Solution: Show that the vector field $F=(2xy^2+yz)i+(2x^2y+xz+2yz^2)j+(2y^2z+xy)k$ is conservative.
Solution: Show that the vector field defined by $F=(2xy-z^3)i+(x^2+z)j+(y-3xz^2)k$ is conservative and find the scalar potential of F.
Solution: Show that the vector field F given by $F=(y+\sin z)i+xj+x\cos z\,k$ is conservative, and find its scalar potential.
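For the conservative-field exercises, a sketch (SymPy assumed) that checks $F=(2xy-z^3)i+(x^2+z)j+(y-3xz^2)k$ has zero curl and recovers a scalar potential by integrating along the coordinate axes:

```python
# Curl check and scalar potential for F = (2xy - z^3, x^2 + z, y - 3xz^2).
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
F = sp.Matrix([2*x*y - z**3, x**2 + z, y - 3*x*z**2])

curl = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
print(curl.T)  # Matrix([[0, 0, 0]]): F is irrotational, hence conservative on R^3

# Build the potential by a line integral from the origin along the axes.
phi = (sp.integrate(F[0].subs({x: t, y: 0, z: 0}), (t, 0, x))
       + sp.integrate(F[1].subs({y: t, z: 0}), (t, 0, y))
       + sp.integrate(F[2].subs({z: t}), (t, 0, z)))
print(sp.expand(phi))  # x**2*y - x*z**3 + y*z
```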
Solution: Show that $F=xi+yj+zk$ is conservative and find $\phi$ such that $F=\nabla\phi$.
Solution: Prove that $F=|r|^2r$ is conservative and find its scalar potential.
Solution: Show that $(y^2z^3\cos x-4x^3z)dx+2z^3y\sin x\,dy+(3y^2z^2\sin x-x^4)dz$ is an exact differential of some function $\phi$ and find this function.
Solution: Show that $(2x\cos y+z\sin y)dx+(xz\cos y-x^2\sin y)dy+x\sin y\,dz=0$ is an exact differential equation and hence solve it.
Solution: Evaluate $\int_C[2xyz^2dx+(x^2z^2+z\cos yz)dy+(2x^2yz+y\cos yz)dz]$ where C is any path from $(0,0,0)$ to $(1,\frac{\pi}{4},2)$.
Solution: If $F=\cos y\,i-x\sin y\,j$, evaluate $\int_C F\cdot dr$ where C is the curve $y=\sqrt{1-x^2}$ in the xy-plane from $(1,0)$ to $(0,1)$.
Solution: Evaluate $\int_C[yz\,dx+(xz+1)dy+xy\,dz]$, where C is any path from $(1,0,0)$ to $(2,1,4)$.
Solution: A vector field is given by $F=(x^2+xy^2)i+(y^2+x^2y)j$. Show that the field is irrotational and obtain its scalar potential.
Solution: Show that the vector field F given by $F=(x^2-yz)i+(y^2-zx)j+(z^2-xy)k$ is irrotational. Find a scalar $\phi$ such that $F=\nabla\phi$.
Solution: Show that the vector function $F=(\sin y+z\cos x)i+(x\cos y+\sin z)j+(y\cos z+\sin x)k$ is irrotational and find the scalar function $\phi$ such that $F=\nabla\phi$.

## Multiple Integrals

Solution: $\int_0^2\int_0^1(2x+y)^8\,dx\,dy$
Solution: Evaluate $\int_0^2\int_1^2(x^2+y^2)\,dx\,dy$
Solution: Evaluate $\int_0^1\int_1^2(x^2+y^2)\,dx\,dy$
Solution: $\int_0^3\int_1^2 xy(x+y)\,dx\,dy$
Solution: $\int_0^a\int_0^b(x^2+y^2)\,dx\,dy$
Solution: $\int_1^2\int_3^4\frac{1}{(x+y)^2}\,dx\,dy$
Solution: $\int_1^4\int_0^{\sqrt{4-x}}xy\,dx\,dy$
Solution: $\int_1^2\int_x^{x\sqrt{3}}xy\,dx\,dy$
Solution: $\int_1^2\int_1^x xy^2\,dx\,dy$
Solution: $\int_0^{\frac{\pi}{4}}\int_0^{\frac{\pi}{2}}\sin(x+y)\,dx\,dy$
Solution: $\int_0^a\int_0^{\sqrt{a^2-x^2}}y^3\,dy\,dx$
Solution: $\int_0^1\int_{\sqrt{y}}^{2-y}x^2\,dx\,dy$
Solution: $\int_0^2\int_{x^2}^{2x}(2x+3y)\,dy\,dx$
Solution: $\int_0^a\int_0^{\sqrt{a^2-x^2}}xy\,dx\,dy$
Solution: $\int_0^1\int_0^{1-x}(x^2+y^2)\,dy\,dx$
Solution: $\int_0^a\int_{\frac{x^2}{a}}^{2a-x}xy\,dy\,dx$
Solution: $\int_0^1\int_{-\sqrt{y}}^{\sqrt{y}}dx\,dy+\int_1^9\int_{\frac{y-3}{2}}^{\sqrt{y}}dx\,dy$
Solution: $\int_0^{2a}\int_{\frac{y^2}{4a}}^{3a-y}(x^2+y^2)\,dx\,dy$
Solution: Evaluate $\iint_R xy\,dx\,dy$ where R is the positive quadrant of the circle $x^2+y^2=a^2$.
Solution: Evaluate the double integral $\iint xy(x+y)\,dx\,dy$ over the region bounded by the curves $y=x$, $y=x^2$.
Solution: Evaluate $\int_0^{2a}\int_0^{\sqrt{2ax-x^2}}(x^2+y^2)\,dx\,dy$ by changing into polar coordinates.
Solution: Evaluate $\iint xy(x^2+y^2)^{\frac{n}{2}}\,dx\,dy$ over the positive quadrant of the circle $x^2+y^2=a^2$, supposing $n+3>0$.
Solution: Find $\iint_R xy\,dx\,dy$ where R is the region bounded by $x=1$, $x=2$, $y=0$, $xy=1$.
Solution: Evaluate $\int_1^{\log 8}\int_0^{\log y}e^{x+y}\,dx\,dy$
Solution: Evaluate $I=\iint_D(x^2+y^2)\,dx\,dy$ where D is bounded by $y=x$ and $y^2=4x$.
Solution: $\iint_D(4xy-y^2)\,dx\,dy$ where D is the rectangle bounded by $x=1$, $x=2$, $y=0$, $y=3$.
Solution: $\iint_D(x^2+y^2)\,dx\,dy$ where D is the region bounded by $y=x$, $y=2x$, $x=1$ in the first quadrant.
Solution: $\iint_D(1+x+y)\,dx\,dy$ where D is the region bounded by the lines $y=-x$, $x=\sqrt{y}$, $y=2$, $y=0$.
Solution: $\iint_D xy\,dx\,dy$ where D is the domain bounded by the parabola $x^2=4ay$, the ordinate $x=a$ and the x-axis.
Solution: Evaluate $\iint(x-y)\,dx\,dy$ over the region between the line $y=x$ and the parabola $y=x^2$.
Solution: Find the area bounded by the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$.
Solution: $I_D=\iint_D x^3y\,dx\,dy$ where D is the region enclosed by the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ in the first quadrant.
Solution: Find, by double integration, the area which lies inside the cardioid $r=a(1+\cos\theta)$ and outside the circle $r=a$.
Solution: Find the area in the xy-plane bounded by the lemniscate $r^2=a^2\cos 2\theta$.
Solution: Find the area bounded by the curves $y^2=x^3$ and $x^2=y^3$.
Solution: Find the area of the domain bounded by $3x=4-y^2$, $x=y^2$ in the xy-plane.
Solution: Find the area of the region bounded by the parabola $y^2=4ax$ and the straight line $x+y=3a$ in the xy-plane.
Solution: Find the area common to the parabolas $y^2=4a(x+a)$, $y^2=4b(b-x)$.
Solution: Find the area of the domain bounded by $x=y-y^2$, $x+y=0$ in the xy-plane.
Solution: Find the area of the domain bounded by $3y^2=25x$, $5x^2=9y$ in the xy-plane.
Solution: Find the mass, the coordinates of the centre of gravity, and the moments of inertia relative to the x-axis, the y-axis and the origin of a rectangle $0\leq x\leq 4$, $0\leq y\leq 2$ having mass density $xy$.
Solution: Find the volume of the tetrahedron in space cut from the first octant by the plane $6x+3y+2z=6$.
Solution: Calculate the volume of a solid whose base is in the xy-plane and is bounded by the parabola $y=4-x^2$ and the straight line $y=3x$, while the top of the solid is in the plane $z=x+4$.
Solution: Find the moment of inertia of the area bounded by the circle $x^2+y^2=a^2$ in the first quadrant, assuming a surface density of 1.
Solution: A plane lamina of non-uniform density is in the form of a quadrant of the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. If the density at any point $(x,y)$ is $kxy$, where k is a constant, find the coordinates of the centroid of the lamina.
Solution: Determine the volume of the space below the paraboloid $x^2+y^2+z-4=0$ and above the square in the xy-plane with vertices at $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$.
Solution: Find the volume of the solid under the surface $az=x^2+y^2$ and whose base R is the circle $x^2+y^2=a^2$.
Solution: Find the volume enclosed by the coordinate planes and that portion of the plane $x+y+z=1$ which lies in the first octant.
Solution: A circular hole of radius b is made centrally through a sphere of radius a. Find the volume of the remaining portion of the sphere.
Solution: Find the volume of the region bounded by the paraboloids $z=x^2+y^2$ and $z=6-\frac{x^2+y^2}{2}$.
Solution: Find the volume of the region bounded by the paraboloid $z=x^2+y^2$ and the plane $z=4$.
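One of the polar-area exercises above can be checked in the same way; a sketch (SymPy assumed) for the area inside the cardioid $r=a(1+\cos\theta)$ and outside the circle $r=a$, where the two curves cross at $\theta=\pm\frac{\pi}{2}$:

```python
# Area between the cardioid r = a(1 + cos(theta)) and the circle r = a.
import sympy as sp

theta, a = sp.symbols('theta a', positive=True)
outer = a*(1 + sp.cos(theta))
area = sp.integrate(sp.Rational(1, 2)*(outer**2 - a**2),
                    (theta, -sp.pi/2, sp.pi/2))
print(sp.simplify(area))   # a**2*(pi + 8)/4, possibly printed in an equivalent form
```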
Solution: Find the volume of the solid enclosed by the ellipsoid $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$.
Solution: Find the volume of the region in space bounded above by the surface $z=1-(x^2+y^2)$, on the sides by the planes $x=0$, $y=0$, $x+y=1$, and below by the plane $z=0$.
Solution: Evaluate $\int_0^1\int_0^x\int_0^{x+y}(x+y+z)\,dz\,dy\,dx$
Solution: Find the volume bounded by the elliptic paraboloids $z=x^2+9y^2$ and $z=18-x^2-9y^2$.
Solution: Find the total mass of the region in the cube $0\leq x\leq 1$, $0\leq y\leq 1$, $0\leq z\leq 1$ with density at any point given by $xyz$.
Solution: Find the mass and centroid of the tetrahedron bounded by the coordinate planes and the plane $\frac{x}{a}+\frac{y}{b}+\frac{z}{c}=1$.
Solution: Evaluate $\int_0^2\int_1^z\int_0^{yz}xyz\,dx\,dy\,dz$
Solution: If the radius of the base and the altitude of a right circular cone are given by a and h respectively, express its volume as a triple integral and evaluate it using cylindrical coordinates.
Solution: Evaluate $\int_0^a\int_0^x\int_0^{x+y}e^{x+y+z}\,dz\,dy\,dx$
Solution: Find the volume bounded by the sphere $x^2+y^2+z^2=a^2$.
Solution: Evaluate $\iiint xyz\,dx\,dy\,dz$ over the positive octant of the sphere $x^2+y^2+z^2=a^2$.
Solution: Assuming $\rho(x,y,z)=1$, find the centroid of the portion of the sphere $x^2+y^2+z^2=a^2$ in the first octant.
Solution: Find the mass and moment of inertia of a sphere of radius a with respect to a diameter if the density is proportional to the distance from the centre.
Solution: Evaluate $\iiint_E y^2x^2\,dV$ where E is the region bounded by the paraboloid $x=1-y^2-z^2$ and the plane $x=0$.
Solution: Evaluate, using spherical coordinates, $\int_{-2}^{2}\int_0^{\sqrt{4-y^2}}\int_{-\sqrt{4-x^2-y^2}}^{\sqrt{4-x^2-y^2}}y^2\sqrt{x^2+y^2+z^2}\,dz\,dx\,dy$
Solution: Evaluate $\iint_R(x+y)e^{x^2-y^2}\,dA$ where R is the rectangle enclosed by the lines $x-y=0$, $x-y=2$, $x+y=0$, $x+y=3$.
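Two of the simpler integrals in this set, evaluated symbolically as a sanity check (SymPy assumed): the double integral of $x^2+y^2$ over $[1,2]\times[0,2]$ and the triple integral of $x+y+z$ over $0\leq z\leq x+y$, $0\leq y\leq x$, $0\leq x\leq 1$.

```python
# Symbolic evaluation of two integrals from the exercise list above.
import sympy as sp

x, y, z = sp.symbols('x y z')

I1 = sp.integrate(x**2 + y**2, (x, 1, 2), (y, 0, 2))
I2 = sp.integrate(x + y + z, (z, 0, x + y), (y, 0, x), (x, 0, 1))

print(I1)  # 22/3
print(I2)  # 7/8
```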
2014-08-28 13:04:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 525, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081196188926697, "perplexity": 273.17484326297944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830834.3/warc/CC-MAIN-20140820021350-00126-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/117127-ambiguous-case-law-sine-cosine.html
# Thread: The Ambiguous Case (law of sine/cosine)

1. ## The Ambiguous Case (law of sine/cosine)

need some help 3 probs. on setting up the pictures, etc. plz thxs!

2. Hello Nismo

Originally Posted by Nismo
need some help 3 probs. on setting up the pictures, etc. plz thxs!

For question 1, I am assuming that the ship's speed of 16 mph is measured through the water - not relative to the earth. The vector law of addition (of velocities) states: the velocity of the ship relative to the earth $({_SV_E})$ = the velocity of the ship relative to the water $({_SV_W})$ + the velocity of the water relative to the earth $({_WV_E})$.

Study the attached diagram carefully. It shows these three velocity vectors, drawn with the two possible directions of the current - that is, the velocity of the water relative to the earth. Using the Sine Rule: $\frac{\sin\theta}{16}=\frac{\sin 15^\circ}{14}$. This gives the value of $\theta$ (and $180^\circ-\theta$). From this you can work out the two directions the current makes with the North. (If my arithmetic is correct, they are $32^\circ$ and $178^\circ$, to the nearest degree.)

In question 2, you need to draw a similar diagram, but this time you'll find there is only one possibility because the triangle is right-angled. See the second of the attachments. Since it's a right-angled triangle, it's very simple to work out the length of the third side to give the ground speed of the ship (that's the speed relative to the earth).

Lastly, look at my third diagram to see the two different possibilities. Find $\theta$ (and $180^\circ-\theta$) using the Sine Rule. Then you can either (a) find the third angle and then use the Cosine Rule on each triangle to find the distances along the ground, or (b) possibly easier, because it just uses right-angled triangles, find the height above ground of the top of the pole (the dotted line in my diagram) and then the horizontal distances.

Can you finish all these now?
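Filling in the arithmetic for question 1 as a sketch, using only the numbers already quoted: $\sin\theta = \frac{16\sin 15^\circ}{14} \approx 0.296$, so $\theta \approx 17^\circ$ or $180^\circ - 17^\circ \approx 163^\circ$. Combined with the $15^\circ$ angle in the diagram, these give current directions of roughly $15^\circ + 17^\circ = 32^\circ$ and $15^\circ + 163^\circ = 178^\circ$ from North, matching the values quoted above.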
2017-07-27 19:08:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8989275693893433, "perplexity": 395.1370742575394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549429417.40/warc/CC-MAIN-20170727182552-20170727202552-00532.warc.gz"}
https://atcoder.jp/contests/abc291/tasks/abc291_d
D - Flip Cards

Time Limit: 2 sec / Memory Limit: 1024 MB

Score: 400 points

### Problem Statement

N cards, numbered 1 through N, are arranged in a line. For each i\ (1\leq i < N), card i and card (i+1) are adjacent to each other. Card i has A_i written on its front, and B_i written on its back. Initially, all cards are face up.

Consider flipping zero or more cards chosen from the N cards. Among the 2^N ways to choose the cards to flip, find the number, modulo 998244353, of such ways that:

• when the chosen cards are flipped, for every pair of adjacent cards, the integers written on their face-up sides are different.

### Constraints

• 1\leq N \leq 2\times 10^5
• 1\leq A_i,B_i \leq 10^9
• All values in the input are integers.

### Input

The input is given from Standard Input in the following format:

N
A_1 B_1
A_2 B_2
\vdots
A_N B_N

### Output

Print the answer as an integer.

### Sample Input 1

3
1 2
4 2
3 4

### Sample Output 1

4

Let S be the set of card numbers to flip. For example, when S=\{2,3\} is chosen, the integers written on their visible sides are 1, 2, and 4, from card 1 to card 3, so it satisfies the condition. On the other hand, when S=\{3\} is chosen, the integers written on their visible sides are 1, 4, and 4, from card 1 to card 3, where the integers on card 2 and card 3 are the same, violating the condition. Four S satisfy the conditions: \{\}, \{1\}, \{2\}, and \{2,3\}.

### Sample Input 2

4
1 5
2 6
3 7
4 8

### Sample Output 2

16

### Sample Input 3

8
877914575 602436426
861648772 623690081
476190629 262703497
971407775 628894325
822804784 450968417
161735902 822804784
161735902 822804784
822804784 161735902

### Sample Output 3

48
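One standard way to count the valid choices is a dynamic programme over the cards with two states per card (front showing or back showing); this is a common approach and is not taken from the problem page. A sketch in Python:

```python
# Two-state DP: dp[s] = number of valid prefixes with the latest card showing
# side s (0 = front A_i, 1 = back B_i). Transition only when values differ.
import sys

def solve() -> None:
    data = sys.stdin.read().split()
    n = int(data[0])
    MOD = 998244353
    cards = [(int(data[1 + 2 * i]), int(data[2 + 2 * i])) for i in range(n)]

    dp = [1, 1]
    for i in range(1, n):
        ndp = [0, 0]
        for s, cur in enumerate(cards[i]):
            for t, prev in enumerate(cards[i - 1]):
                if cur != prev:
                    ndp[s] = (ndp[s] + dp[t]) % MOD
        dp = ndp

    print(sum(dp) % MOD)   # prints 4, 16 and 48 on the three samples above

solve()
```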
2023-03-26 12:44:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6914071440696716, "perplexity": 10728.391445743173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00546.warc.gz"}
http://sites.mathdoc.fr/cgi-bin/sps?kwd=Levy+systems&kwd_op=contains
5 matches found

V: 33, 362-372, LNM 191 (1971)
WEIL, Michel
Conditionnement par rapport au passé strict (Markov processes)
Given a totally inaccessible terminal time $T$, it is shown how to compute conditional expectations of the future with respect to the strict past $\sigma$-field ${\cal F}_{T-}$. The formula involves the Lévy system of the process.
Comment: B. Maisonneuve pointed out once that the paper, though essentially correct, has a small mistake somewhere. See Dellacherie-Meyer, Probabilités et Potentiel, Chap. XX 46--48.
Keywords: Terminal times, Lévy systems
Nature: Original
Retrieve article from Numdam

VII: 01, 1-24, LNM 321 (1973)
BENVENISTE, Albert
Application de deux théorèmes de G.~Mokobodzki à l'étude du noyau de Lévy d'un processus de Hunt sans hypothèse (L) (Markov processes)
The object of the theory of Lévy systems is to compute the previsible compensator of sums $\sum_{s\le t} f(X_{s-},X_s)$ extended to the jump times of a Markov process~$X$, i.e., the times $s$ at which $X_s\not=X_{s-}$. The theory was created by Lévy in the case of a process with independent increments, and the classical results for Markov processes are due to Ikeda-Watanabe, J. Math. Kyoto Univ., 2, 1962, and Watanabe, Japan J. Math., 34, 1964. An exposition of their results can be found in the Seminar, 106. The standard assumptions were: 1) $X$ is a Hunt process, implying that jumps occur at totally inaccessible stopping times and the compensator is continuous; 2) Hypothesis (L) (absolute continuity of the resolvent) is satisfied. Here, using two results of Mokobodzki: 1) every excessive function dominated in the strong sense by a potential is a potential; 2) the existence of medial limits (this volume, 719), Hypothesis (L) is shown to be unnecessary.
Comment: Mokobodzki's second result depends on additional axioms in set theory, the continuum hypothesis or Martin's axiom. See also Benveniste-Jacod, Invent. Math. 21, 1973, which no longer uses medial limits.
Nature: Original
Retrieve article from Numdam

VII: 02, 25-32, LNM 321 (1973)
MEYER, Paul-André
Une mise au point sur les systèmes de Lévy. Remarques sur l'exposé de A. Benveniste (Markov processes)
This is an addition to the preceding paper 701, extending the theory to right processes by means of a Ray compactification.
Comment: All this material has become classical. See for instance Dellacherie-Meyer, Probabilités et Potentiel, vol. D, chapter XV, 31--35.
Keywords: Lévy systems, Ray compactification
Nature: Original
Retrieve article from Numdam

X: 22, 481-500, LNM 511 (1976)
YOR, Marc
Sur les intégrales stochastiques optionnelles et une suite remarquable de formules exponentielles (Martingale theory, Stochastic calculus)
This paper contains several useful results on optional stochastic integrals of local martingales and semimartingales, as well as the first occurrence of the well-known formula ${\cal E}(X)\,{\cal E}(Y)={\cal E}(X+Y+[X,Y])$ where ${\cal E}$ denotes the usual exponential of semimartingales. Also, the s.d.e. $Z_t=1+\int_0^t Z_s\,dX_s$ is solved, where $X$ is a suitable semimartingale and the integral is an optional one. The Lévy measure of a local martingale is studied, and used to rewrite the Itô formula in a form that involves optional integrals.
Finally, a whole family of "exponentials" is introduced, interpolating between the standard one and an exponential involving the Lévy measure, which was used by Kunita-Watanabe in a Markovian set-up.
Keywords: Optional stochastic integrals, Stochastic exponentials, Lévy systems
Nature: Original
Retrieve article from Numdam

XI: 37, 529-538, LNM 581 (1977)
MAISONNEUVE, Bernard
Changement de temps d'un processus markovien additif (Markov processes)
A Markov additive process $(X_t,S_t)$ (Cinlar, Z. für W-theorie, 24, 1972) is a generalisation of a pair $(X,S)$ where $X$ is a Markov process with arbitrary state space and $S$ is an additive functional of $X$: in the general situation $S$ is positive real valued, $X$ is a Markov process in itself, and the pair $(X,S)$ is a Markov process, while $S$ is an additive functional of the pair. For instance, subordinators are Markov additive processes with trivial $X$. A simpler proof of a basic formula of Cinlar is given, and it is shown also that a Markov additive process gives rise to a regenerative system in a slightly extended sense.
2022-12-06 10:18:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8105025291442871, "perplexity": 1635.6770866887996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711077.50/warc/CC-MAIN-20221206092907-20221206122907-00867.warc.gz"}
https://ivy-dl.org/ivy/core/reductions.html
# Reductions¶ Collection of reduction Ivy functions ivy.einsum(equation, *operands, f=None)[source] Sums the product of the elements of the input operands along dimensions specified using a notation based on the Einstein summation convention. Parameters • equation (str) – A str describing the contraction, in the same format as numpy.einsum. • operands (seq of arrays) – the inputs to contract (each one an ivy.Array), whose shapes should be consistent with equation. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with sums computed. ivy.reduce_max(x, axis=None, keepdims=False, f=None)[source] Computes the maximum value along the specified axis. The maximum is taken over the flattened array by default, otherwise over the specified axis. Parameters • x (array) – Array containing numbers whose max is desired. • axis (int or sequence of ints) – Axis or axes along which the maxes are computed. The default is to compute the max of the flattened array. If this is a tuple of ints, a max is performed over multiple axes, instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with maxes computed. ivy.reduce_mean(x, axis=None, keepdims=False, f=None)[source] Computes the arithmetic mean along a given axis. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. Parameters • x (array) – Array containing numbers whose mean is desired. • axis (int or sequence of ints) – Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. If this is a tuple of ints, a mean is performed over multiple axes, instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with means computed. ivy.reduce_min(x, axis=None, keepdims=False, f=None)[source] Computes the minimum value along the specified axis. The minimum is taken over the flattened array by default, otherwise over the specified axis. Parameters • x (array) – Array containing numbers whose min is desired. • axis (int or sequence of ints) – Axis or axes along which the mins are computed. The default is to compute the min of the flattened array. If this is a tuple of ints, a min is performed over multiple axes, instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with mins computed. ivy.reduce_prod(x, axis=None, keepdims=False, f=None)[source] Multiplies array elements along a given axis. Parameters • x (array) – Elements to multiply. • axis (int or sequence of ints) – Axis or axes along which a multiplication is performed. 
The default, axis=None, will multiply all of the elements of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a multiplication is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with multiplications computed.

ivy.reduce_std(x, axis=None, keepdims=False, f=None)[source] Computes the arithmetic standard deviation along a given axis. The standard deviation is taken over the flattened array by default, otherwise over the specified axis. Parameters • x (array) – Array containing numbers whose standard deviation is desired. • axis (int or sequence of ints) – Axis or axes along which the standard deviations are computed. The default is to compute the standard deviation of the flattened array. If this is a tuple of ints, the reduction is performed over multiple axes, instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with standard deviations computed.

ivy.reduce_sum(x, axis=None, keepdims=False, f=None)[source] Computes sum of array elements along a given axis. Parameters • x (array) – Elements to sum. • axis (int or sequence of ints) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with sums computed.

ivy.reduce_var(x, axis=None, keepdims=False, f=None)[source] Computes the arithmetic variance along a given axis. The variance is taken over the flattened array by default, otherwise over the specified axis. Parameters • x (array) – Array containing numbers whose variance is desired. • axis (int or sequence of ints) – Axis or axes along which the variances are computed. The default is to compute the variance of the flattened array. If this is a tuple of ints, the reduction is performed over multiple axes, instead of a single axis or all the axes as before. • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. • f (ml_framework, optional) – Machine learning framework. Inferred from inputs if None. Returns The array with variances computed.
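A short usage sketch of the reductions documented above. It assumes an Ivy version that exposes these functions at the top level (as in these docs) and NumPy installed as the backing framework; per the `f` parameter description, the framework is inferred from the input array, so no explicit `f` argument is passed.

```python
# Minimal usage sketch for ivy.reduce_* and ivy.einsum with a NumPy input.
import numpy as np
import ivy

x = np.array([[1., 2., 3.],
              [4., 5., 6.]])

ivy.reduce_sum(x)                          # 21.0  (sum over the flattened array)
ivy.reduce_mean(x, axis=0)                 # [2.5, 3.5, 4.5]  (column means)
ivy.reduce_max(x, axis=1, keepdims=True)   # [[3.], [6.]]  (row maxima, rank kept)
ivy.einsum('ij,ij->i', x, x)               # [14., 77.]  (row-wise dot products)
```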
2021-09-26 06:39:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2551248073577881, "perplexity": 2171.331184287935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057830.70/warc/CC-MAIN-20210926053229-20210926083229-00348.warc.gz"}
http://aux.planetmath.org/comment/19856
induction proof of fundamental theorem of arithmetic

Type of Math Object: Proof
Major Section: Reference

failure functions

Background: In 1988 I read the book "One, Two, Three... Infinity" by George Gamow. The book had a statement to the effect that no polynomial had been found such that it generates all the prime numbers and nothing but prime numbers. This was true at the time Gamow wrote the book; however, subsequently a polynomial was constructed fulfilling the condition given above. I then experimented with some polynomials and found that although one cannot generally predict the prime numbers generated by a polynomial, one can predict the composite numbers it generates. Since I was originally trying to predict the primes generated by a given polynomial (which may be called "successes") but could predict the "failures" (composite numbers), I called functions which generate failures "failure functions". I presented this concept at the Ramanujan Mathematical Society in May 1988. Subsequently I used this tool in proving a theorem similar to the Ramanujan–Nagell theorem at the AMS-BENELUX meeting in 1996.

Abstract definition: Let $f(x)$ be a function of $x$. Then $x=g(x_0)$ is a failure function if $f(g(x_0))$ is a failure in accordance with our definition of a failure. Note: $x_0$ is a specific value of $x$.

Examples:
1) Let our definition of a failure be a composite number. Let $f(x)$ be a polynomial in $x$, where $x$ belongs to $\mathbb{Z}$. Then $x = x_0 + k f(x_0)$ is a failure function, since these values of $x$ are such that $f(x)$ is composite.
2) Let our definition of a failure again be a composite number. Let the function be an exponential function $a^x + c$, where $a$ and $x$ belong to $\mathbb{N}$, $c$ belongs to $\mathbb{Z}$, and $a$ and $c$ are fixed. Then $x = x_0 + k\,\phi(f(x_0))$, where $\phi$ is Euler's totient function, is a failure function. Here also $x_0$ is fixed, and $k$ belongs to $\mathbb{N}$.
3) Let our definition of a failure be a non-Carmichael number. Let the mother function be $2^n + 49$. Then $n = 5 + 6k$ is a failure function. Here also $k$ belongs to $\mathbb{N}$.

Applications: failure functions can be used (a) for indirect primality testing and (b) as a mathematical tool in proving theorems in number theory.
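A small Python sketch of example 1, with the polynomial $x^2+x+41$ chosen purely for illustration (it is not one of the author's examples): every $x$ of the form $x_0 + k\,f(x_0)$ makes $f(x)$ divisible by $f(x_0)$, hence composite once $f(x) > f(x_0) > 1$.

```python
# Failure function for a polynomial: arguments x = x0 + k*f(x0) give
# values divisible by f(x0), so they are predictable composite outputs.
def f(x: int) -> int:
    return x * x + x + 41   # illustrative polynomial, not from the text

x0 = 1
m = f(x0)                   # f(1) = 43
for k in range(1, 6):
    x = x0 + k * m          # the failure function x = x0 + k*f(x0)
    assert f(x) % m == 0, (x, f(x))
    print(x, f(x), f"= {m} * {f(x) // m}")
```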
2018-01-16 07:44:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 65, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034877419471741, "perplexity": 552.6191200997158}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886237.6/warc/CC-MAIN-20180116070444-20180116090444-00003.warc.gz"}
https://getpractice.com/questions/237101
From the top of a building, $60$ metres high, the angles of depression of the top and bottom of a vertical lamp-post are observed to be $30^\circ$ and $60^\circ$ respectively. Find:
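Assuming the question asks for the horizontal distance of the lamp-post from the building and for the height of the lamp-post (an assumption; the items to find are not listed above), a worked sketch: the angle of depression of the bottom gives $\tan 60^\circ = \frac{60}{d}$, so $d = \frac{60}{\sqrt{3}} = 20\sqrt{3} \approx 34.64$ m. The angle of depression of the top gives a drop of $d\tan 30^\circ = 20\sqrt{3}\cdot\frac{1}{\sqrt{3}} = 20$ m below the top of the building, so the lamp-post would be $60 - 20 = 40$ m high.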
2021-01-23 11:08:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5279715061187744, "perplexity": 209.1345134782967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00089.warc.gz"}
https://gertjanssenswillen.github.io/processmapR/reference/precedence_matrix.html
Construct a precedence matrix, showing how activities are followed by each other. precedence_matrix(eventlog, type = c("absolute", "relative", "relative_antecedent", "relative_consequent", "relative_case")) ## Arguments eventlog: The event log object to be used. type: The type of precedence matrix, which can be absolute, relative, relative_antecedent or relative_consequent. Absolute will return a matrix with absolute frequencies, relative will return global relative frequencies for all antecedent-consequent pairs. Relative_antecedent will return relative frequencies within each antecedent, i.e. showing the relative proportion of consequents within each antecedent. Relative_consequent will do the reverse. ## Examples # NOT RUN { library(eventdataR) data(patients) precedence_matrix(patients) # }
2018-09-25 23:00:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5007878541946411, "perplexity": 10347.184860864612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162563.97/warc/CC-MAIN-20180925222545-20180926002945-00319.warc.gz"}
https://notes.dzackgarza.com/Projects/2022%20Advanced%20Qual%20Projects/Geometry%20and%20Topology/00_Topics.html
# Topics • Basics of smooth manifolds: Inverse function theorem, implicit function theorem, submanifolds, integration on manifolds • Basics of matrix Lie groups over R and C: The definitions of Gl(n), SU(n), SO(n), U(n), their manifold structures, Lie algebras, right and left invariant vector fields and differential forms, the exponential map. • Definition of real and complex vector bundles, tangent and cotangent bundles, basic operations on bundles such as dual bundle, tensor products, exterior products, direct sums, pull-back bundles. • Definition of differential forms, exterior product, exterior derivative, de Rham cohomology, behavior under pull-back. • Metrics on vector bundles. • Riemannian metrics, definition of a geodesic, existence and uniqueness of geodesics. • Definition of a principal Lie group bundle for matrix groups. • Associated vector bundles: Relation between principal bundles and vector bundles • Definition of covariant derivative for a vector bundle and connection on a principal bundle. Relations between the two. • Definition of curvature, flat connections, parallel transport. • Definition of Levi-Civita connection and properties of the Riemann curvature tensor.
2022-08-15 21:14:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9181894659996033, "perplexity": 1064.7922883083088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00568.warc.gz"}
https://ijnetworktoolcanon.co/canon-ij-network-tool-printer-not-detected/
# Canon IJ Network Tool Printer Not Detected

The printer cannot be detected (the message that the printer could not be detected on the network is displayed) – Canon IJ Network Tool Printer Not Detected, MX892. Description: the printer cannot be detected (the message that the printer could not be detected on the network is displayed).

Canon IJ Network Scanner Selector EX.app updates and displays the contents of Printers: on the Canon IJ Network Tool screen to the latest information. Important: to change the printer's network settings using IJ Network Tool, … If the printer on a network is not detected, make sure that the printer is turned on, then select Refresh.

4) Tried running Canon IJ Network Tool – it showed the status "Not Detected" and a pop-up saying that there are ports that cannot be used with their current settings and to refer to a manual! I can still print when connecting to the printer with wires.

Canon IJ Network Tool: the Canon IJ Network Tool is a utility that enables you to display and modify the machine's network settings. It is installed when the machine is set up. Important: do not start up the Canon IJ Network Tool while printing, and do not print when the Canon IJ Network Tool is running.

Solved: Hello, has anyone been able to install a network Pixma MX920 on a different subnet from the PC using the Canon install software? I have a … Pixma MX920 on a different subnet not detected with install software. The biggest problem with failing to reliably print on a network printer is the dynamic IP and the computer.

Question: Q: My Canon MX870 PIXMA printer is not detected by my iMac. … Other wireless network problems are solved by giving the printer a fixed IP using the Canon IJ Network Tool.app, so that it is always available on the WiFi network. Hope that helps! It took me several weeks to resolve these issues.

IJ Network Tool has been verified to work on Windows XP; however, IJ Network Tool does not support fast user switching. It is recommended to exit the IJ Network Tool when changing users. To change printer network settings using IJ Network Tool, the printer must be connected via LAN.

The Canon IJ Network Tool enables you to print and scan from a wireless Canon IJ Network printer that is connected through a network. … Canon iP2772 Driver Download – Canon Pixma iP2772 accompanies most of the extreme and …

If the printer on a network is not detected, make sure that the printer is turned on, then click Update. … /Applications/Canon Utilities/IJ Network Tool/Canon IJ Network Tool.app; Setup: procedures for the download and installation. … Canon IJ Network Tool (Mac) 4.7.1 Old Version.

Launching the IJ Scan Utility won't detect the printer / Canon IJ Printer Utility download: in Windows 10, visit the Start menu and find Canon Utilities using the All Apps segment. This is where you'll discover the software. … IJ Network Driver Ver. 2.5.7 / Network Tool Ver. 2.5.7. Windows 10/8.1/8/Vista/XP/2000 32-64bit: DOWNLOAD.

### Canon IJ Network Tool Printer Not Detected

Hyperlink: manual instruction for all Canon printers.
2019-07-18 11:59:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295769333839417, "perplexity": 4526.72865641335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00425.warc.gz"}
https://gmatclub.com/forum/if-when-k-is-rounded-to-the-nearest-unit-the-result-is-equal-to-the-269629.html
# If when k is rounded to the nearest unit, the result is equal to the

Math Expert
Joined: 02 Sep 2009
Posts: 49251
Posted: 03 Jul 2018, 11:02

Difficulty: 65% (hard). Question Stats: 39% (01:11) correct, 61% (01:09) wrong, based on 36 sessions.

If when k is rounded to the nearest unit, the result is equal to the exact value of n, is it true that n = m?

(1) When n is rounded to the nearest unit, the result is equal to the exact value of m.
(2) When m is rounded to the nearest unit, the result is equal to the exact value of n.
_________________

Math Expert
Joined: 02 Aug 2009
Posts: 6787
Re: If when k is rounded to the nearest unit, the result is equal to the
Posted: 03 Jul 2018, 11:25

Bunuel wrote:
If when k is rounded to the nearest unit, the result is equal to the exact value of n, is it true that n = m?

Important inference: n is itself an integer, since it is the result of rounding k to the nearest unit.

(1) When n is rounded to the nearest unit, the result is equal to the exact value of m.
Since n is an integer, rounding it to the nearest unit leaves it unchanged; and this result is given to be equal to m, so n = m.
Sufficient.

(2) When m is rounded to the nearest unit, the result is equal to the exact value of n.
n is an integer, but we cannot say the same for m. Say m = 3.7; rounded to the nearest unit this is 4, so n = 4 and m ≠ n. But m and n could both be 4 as well, in which case m = n.
Insufficient.

A
_________________
1) Absolute modulus: http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
2) Combination of similar and dissimilar things: http://gmatclub.com/forum/topic215915.html
3) Effects of arithmetic operations: https://gmatclub.com/forum/effects-of-arithmetic-operations-on-fractions-269413.html
GMAT online Tutor

Director
Status: Learning stage
Joined: 01 Oct 2017
Posts: 848
WE: Supply Chain Management (Energy and Utilities)
Re: If when k is rounded to the nearest unit, the result is equal to the
Posted: 05 Jul 2018, 13:13

Bunuel wrote:
If when k is rounded to the nearest unit, the result is equal to the exact value of n, is it true that n = m?

(1) When n is rounded to the nearest unit, the result is equal to the exact value of m.
(2) When m is rounded to the nearest unit, the result is equal to the exact value of n.

Question stem: is n = m? (Y/N)

St1: Given that when k is rounded to the nearest unit, the result equals the exact value of n. Suppose k = 1.7; then n = 2. When n is rounded to the nearest unit, the result equals the exact value of m; since n is already an integer, m = n = 2. Hence sufficient.

St2: Again, n is an integer. Suppose k = 1.6, so n = 2. When m is rounded to the nearest unit, the result equals the exact value of n. If m = 1.6, then n = 2 and m ≠ n; but if m = 2, then n = 2 and m = n. Both cases are possible, so statement 2 is not sufficient.

Ans. (A)
_________________
Regards,
PKN
Rise above the storm, you will find the sunshine
2018-09-19 15:16:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.582539439201355, "perplexity": 1819.0029367901407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156252.31/warc/CC-MAIN-20180919141825-20180919161825-00394.warc.gz"}
https://en.m.wikipedia.org/wiki/Complex_analytic_space
# Complex analytic variety

(Redirected from Complex analytic space)

In mathematics, and in particular differential geometry and complex geometry, a complex analytic variety or complex analytic space is a generalization of a complex manifold which allows the presence of singularities. Complex analytic varieties are locally ringed spaces which are locally isomorphic to local model spaces, where a local model space is an open subset of the vanishing locus of a finite set of holomorphic functions.

## Definition

Denote the constant sheaf on a topological space with value $\mathbb{C}$ by $\underline{\mathbb{C}}$. A $\mathbb{C}$-space is a locally ringed space $(X,\mathcal{O}_X)$ whose structure sheaf is an algebra over $\underline{\mathbb{C}}$.

Choose an open subset $U$ of some complex affine space $\mathbb{C}^n$, and fix finitely many holomorphic functions $f_1,\dots,f_k$ in $U$. Let $X=V(f_1,\dots,f_k)$ be the common vanishing locus of these holomorphic functions, that is, $X=\{x\mid f_1(x)=\cdots=f_k(x)=0\}$. Define a sheaf of rings on $X$ by letting $\mathcal{O}_X$ be the restriction to $X$ of $\mathcal{O}_U/(f_1,\ldots,f_k)$, where $\mathcal{O}_U$ is the sheaf of holomorphic functions on $U$. Then the locally ringed $\mathbb{C}$-space $(X,\mathcal{O}_X)$ is a local model space.

A complex analytic variety is a locally ringed $\mathbb{C}$-space $(X,\mathcal{O}_X)$ which is locally isomorphic to a local model space.

Morphisms of complex analytic varieties are defined to be morphisms of the underlying locally ringed spaces, they are also called holomorphic maps.
2021-03-06 06:16:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9875911474227905, "perplexity": 411.06142446884564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374391.90/warc/CC-MAIN-20210306035529-20210306065529-00522.warc.gz"}
http://math.stackexchange.com/questions/8969/relationship-between-these-two-probability-mass-functions
# Relationship between these two probability mass functions If I have two different discrete distributions of random variables X and Y, such that their probability mass functions are related as follows: $P(X=x_i) = \lambda\frac{P (Y=x_i)}{x_i}$ what can I infer from this equation? Any observations or interesting properties that you see based on this relation? What if, P($X=x_i$) = $\lambda\sqrt{\frac{P (Y=x_i)}{x_i} }$ In both cases, $\lambda$ is a constant. - It would be clearer to write this with two random variables and one measure $P$; i.e. $P(X = x_i) = \lambda P(Y=x_i)/x_i$. Also, "what can I infer" is very vague; what sorts of things would you like to infer? –  Nate Eldredge Nov 4 '10 at 22:58 @Nate:Any observation will do. I am applying a polynomial decay on a pmf to get another pmf. Something like 'A uniform Y will result in a geometric distribution(?) for X.' –  SkypeMeSM Nov 4 '10 at 23:35 Also on MathOverflow: mathoverflow.net/questions/44902/… (repaired clipboard failure) –  Nate Eldredge Nov 5 '10 at 3:06 if you rewrite the equation as $P(Y = x) = xP(X = x)/\lambda$, then the distribution of $Y$ is called the length-biased distribution for $X$. it arises, for example, if one has a bunch of sticks in a bag and reaches in and selects one at random - where the probability a particular stick is selected is proportional to its length. if the lengths of the sticks are realizations of the random variable $X$, the distribution of the length of the selected stick is that of $Y$. - I don't have enough reputation yet, or this would be a comment. A random variable can have only one probability mass function, so it is not clear what you are asking. Where do these equations come from? - I am considering two different systems which operate on the same random variable X and have two different pmfs' u and v. Edited the question to reflect this. –  SkypeMeSM Nov 4 '10 at 22:05 First, you don't have the freedom to choose $\lambda$. (Maybe you already realize that?) In order for $P(X = x_i)$ to be a true probability distribution, you must have $$\lambda = \left(\sum_{x_i} \frac{P(Y = x_i)}{x_i}\right)^{-1}.$$ Given that, if $Y$ is geometric with success probability $p$, and $P(X = x_i) = \lambda P(Y = x_i)/x_i$ then $X$ does have the logarithmic distribution with parameter $q = 1-p$. Here's why: $P(Y = k) = (1-p)^{k-1}p$, for $k = 1, 2, \ldots$. Thus $$P(X = k) = \lambda \frac{P(Y = k)}{k} = \lambda \frac{q^{k-1} p}{k} = \frac{\lambda p}{q} \frac{q^k}{k}.$$ The expression on the right is the logarithmic probability mass function with $\lambda = \frac{-q}{p \ln p}$. -
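A quick numerical check (a sketch in Python, with p = 0.3 chosen arbitrarily) of the claim in the answer above: for a geometric Y, the pmf proportional to P(Y = k)/k is the logarithmic distribution with parameter q = 1 - p, and the normalising constant is forced to be λ = -q/(p ln p).

```python
# Compare lambda * P(Y = k) / k (Y geometric) with the logarithmic pmf.
import math

p = 0.3
q = 1.0 - p
ks = range(1, 200)                            # truncated support for the check

geom = {k: q ** (k - 1) * p for k in ks}
unnormalised = {k: geom[k] / k for k in ks}
lam = 1.0 / sum(unnormalised.values())        # lambda is fixed by normalisation
X = {k: lam * unnormalised[k] for k in ks}

log_pmf = {k: -q ** k / (k * math.log(p)) for k in ks}

print(lam, -q / (p * math.log(p)))            # the two expressions for lambda agree
print(max(abs(X[k] - log_pmf[k]) for k in ks))  # ~0, up to truncation error
```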
2014-07-26 19:46:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9302886724472046, "perplexity": 361.8383593086774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997904391.23/warc/CC-MAIN-20140722025824-00128-ip-10-33-131-23.ec2.internal.warc.gz"}
https://puzzling.stackexchange.com/questions/85430/the-puzzling-reverse-and-add-sequence
# The Puzzling Reverse and Add Sequence The sequence of numbers 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 11, 22,... (A056964 in the OEIS), in which the nth term equals n+reversal of digits of n, poses a number of intriguing puzzles. Here just four: 1. Does it contain infinitely many square numbers? 2. Infinitely many cubes? 3. Infinitely many pairs of consecutive numbers (like 887 and 888, both in the sequence)? 4. Arbitrarily long sets of consecutive numbers? • Did you miss one, or did you write "four" when you meant "three"? And do you know the answers to all three (four?) of your questions? – Gareth McCaughan Jun 23 '19 at 23:03 • @GarethMcCaughan I meant three, but, come to think of it, I might as well add a fourth question. I know the answer to three of the four. – Bernardo Recamán Santos Jun 23 '19 at 23:09 • Questions 1 to 3 have been answered satisfactorily by Gareth, but answer to question 4 is not convincing. – Bernardo Recamán Santos Jun 30 '19 at 1:36 Partial answer (resolves three of the four questions, probably the three to which OP knows the answer :-) ) Infinitely many squares: Yes. Let $$Z$$ be any sequence of $$0$$s; then $$2Z4Z2$$ reverses to itself and adding yields $$4Z8Z4$$ which is the square of $$2Z2$$. Infinitely many cubes: Yes. Let $$Z$$ be any sequence of $$0$$s; then $$1Z3ZZ00$$ reverses to $$3Z1$$ and adding yields $$1Z3Z3Z1$$ which is the cube of $$1Z1$$. Infinitely many pairs of consecutive numbers: Yes. The reverse of $$9^n51^n$$ is $$1^n59^n$$ and adding these yields $$1^{2n+1}0$$. One more than this is $$1^{2n+2}$$ which you get by reverse-and-add on $$1^{n+1}0^{n+1}$$. Arbitrarily long sequences of consecutive numbers: I don't know. I suspect not. Reversing-and-adding numbers up to $$10^7$$ the only length-3 sequences I find are 10,11,12 and 1331,1332,1333, and neither of those extends to length 4. Going up to $$10^8$$ we find a few more length-3 sequences, all involving 8-digit numbers beginning $$13$$, and again nothing of length 4. So I conjecture not very confidently that there are no length-4 sequences; I conjecture even less confidently that the length-3 sequences can be characterized nicely, just because they seem to be few in number and have some common features. However, for the first of the 8-digit ones I've looked at (13210131+{0,1,2}) there are quite a lot of ways to make each of the three numbers in the sequence. The second and third numbers in the sequence seem like they might be quite restrictive, but the possibilities for the first appear to be all over the place. The more I look at this the less optimistic I am that there's an elegant proof to be had. (And the less confident I am that there aren't longer sequences: it may just be that a bunch of coincidences need to happen, and up to $$10^8$$ there isn't enough time for many of them all at once.) • hope you dont mind me editing the mathjax ;) – Omega Krypton Jun 23 '19 at 23:33 • I don't mind, but I also don't see any particular reason for those bits to be MathJax rather than plain text: they aren't using any of its features. – Gareth McCaughan Jun 23 '19 at 23:45 • @OmegaKrypton Keep in mind that introducing MathJax to a page causes it to require longer load, processing and rendering time to display. If this doesn't really add readability to the content, it's not worth the extra cost. (Tiny, trivial edits are discouraged as well; if MathJax isn't actually materially improving readability or comprehension, adding it probably qualifies as a trivial edit.) 
– Rubio Jun 30 '19 at 0:23

EDIT: Added more stepwise explanation; hopefully it is clear enough to be reproduced. EDIT 2: It was brought to my attention that the proof does not work for single digits (shown by a clear example). That is solved now, and the conclusion is the same. EDIT 3: Changed the example for the regular case, because the previous example wasn't actually a regular case. EDIT 4: Added a lot of MathJax. Hope I didn't miss anything.

Gareth McCaughan answers questions 1-3 beautifully; here is my proof for q4 (is there an arbitrarily long sequence of numbers that can each be written as M+rev(M)?):

The maximum length of a sequence we can find is 3

Proof:

Start with a number $$N_0$$ which fulfills our rule, namely $$N_0=M_0+rev(M_0)$$, where $$rev(M)$$ is the digit reverse of $$M$$: $$rev(123)=321$$. Let $$N_0$$ be an even number, let $$M_0$$ have $$L_0$$ digits, and let $$p_0$$ and $$r_0$$ be the first and last digits of $$M_0$$ (each in the range $$0..9$$), so that $$p_00\ldots0r_0 \le M_0 \le p_09\ldots9r_0$$.

Let $$N_{-2} = N_0 - 2$$, and suppose $$N_{-2}$$ can also be written as $$M_{-2}+rev(M_{-2})$$. Let the first and last digits of $$M_{-2}$$ be $$p_{-2}$$ and $$r_{-2}$$.

If $$N_0$$ is even, so is $$N_{-2}$$. Therefore $$p_{-2}+r_{-2}$$ must be even, because that is the only way the last digit of $$M_{-2}+rev(M_{-2})$$, and hence of $$N_{-2}$$, can be even. The last digit of $$N_{-2}$$ is $$(p_{-2}+r_{-2}) \bmod 10$$, and the same holds for $$p_0$$, $$r_0$$ and the last digit of $$N_0$$. Since $$N_{-2}$$ is $$2$$ less than $$N_0$$, its last digit is $$2$$ less than that of $$N_0$$, so $$(p_{-2}+r_{-2}) \bmod 10$$ must be $$2$$ less than $$(p_0+r_0) \bmod 10$$.

There are some exceptions to treat separately:
• [case 1] $$(p_0+r_0) \bmod 10 = 8$$ and $$(p_{-2}+r_{-2}) \bmod 10 = 0$$;
• [case 2] $$p_0+r_0 = 10 + p_{-2}+r_{-2} - 2$$ or $$10 + p_0+r_0 = p_{-2}+r_{-2} - 2$$;
• [case 3] the number of digits $$L_0$$ is not equal to $$L_{-2}$$;
• [case 4] $$M$$ and $$rev(M)$$ are single digits.
I will cover these exceptions later.

For the regular case, where $$p_0+r_0 = p_{-2}+r_{-2} + 2$$ and cases 1, 2 and 3 do not apply, we can state: $$N_{-2}$$ can be at most $$p_{-2}9\ldots9r_{-2} + r_{-2}9\ldots9p_{-2}$$, so $$N_{-2}$$ is at most $$(p_{-2}+r_{-2}+2)\cdot10^{L-1} - 20 + p_{-2} + r_{-2}$$. Since $$p_0+r_0 = p_{-2}+r_{-2}+2$$, this means $$N_{-2}$$ is at most $$(p_0 + r_0)\cdot10^{L-1} + p_0 + r_0 - 2 - 20$$, while $$N_0$$ is at least $$p_00\ldots0r_0 + r_00\ldots0p_0 = (p_0 + r_0)\cdot10^{L-1} + p_0 + r_0$$.

For the more visually minded, an example with $$L=4$$:
$$N_{-2} \le 2994 + 4992$$ [$$p_{-2}=2$$, $$r_{-2}=4$$, so $$p_{-2}+r_{-2}=6$$]
$$N_0 \ge 2006 + 6002$$ [$$p_0=2$$, $$r_0=6$$, so $$p_0+r_0=8$$]
$$N_{-2} \le (2+4+2)\cdot10^{4-1} - 20 + 2 + 4 = 8000 - 20 + 6 = 7986$$
$$N_0 \ge (2+6)\cdot10^{4-1} + 2 + 6 = 8008$$

Whatever $$p$$ and $$r$$ we choose, $$N_{-2}$$ can never be more than $$N_0-22$$, so it cannot be $$N_0-2$$, which is a contradiction. Therefore our initial statement, that an even number $$N_{-2}=N_0-2$$ exists which also equals $$M_{-2}+rev(M_{-2})$$, is false. However, we didn't look at our exceptions yet.

What if $$L_0$$ does not equal $$L_{-2}$$ [case 3], as in the odd numbers $$1300 + 0031 = 1331$$ and $$617 + 716 = 1333$$? This can only occur if the sum coming from the number with the smaller $$L$$ overflows to gain an additional digit, and that extra digit must be a leading $$1$$. The sum coming from the number with the larger $$L$$ must then not overflow; for its leading digit to equal $$1$$, its $$p+r$$ must equal $$1$$, and then its last digit is also $$1$$, so it is odd. Therefore this exception never produces the even pair we need, and the proof of the regular case holds.

The same argument covers [case 2]. The statement "$$p_0+r_0 = 10 + p_{-2}+r_{-2} - 2$$ or $$10 + p_0+r_0 = p_{-2}+r_{-2} - 2$$" means that either $$N_0$$ or $$N_{-2}$$ is the result of digit overflow, so the first digit overflows as well, which can only happen when $$L_0$$ and $$L_{-2}$$ differ. As shown for case 3, under our conditions that only yields odd numbers, so the proof of the regular case still holds.

And again, the same argument covers [case 1]: $$(p_0+r_0) \bmod 10 = 8$$ with $$(p_{-2}+r_{-2}) \bmod 10 = 0$$ is just a special case of $$p_0+r_0 = 10 + p_{-2}+r_{-2} - 2$$. This only happens if the last digit overflows, so the first digit must overflow too and $$L_0$$ cannot equal $$L_{-2}$$; we already showed that no valid even number exists in that situation.

For [case 4] this does not work, so we have to do something different. Let us first show the bound holds for any 2-digit or larger number, coming in from a different angle (the proof is the same). Again let $$N_0$$ and $$N_{-2}$$ be two even numbers with $$N_0=N_{-2}+2$$, written as $$N_0=M_0+rev(M_0)$$ and $$N_{-2}=M_{-2}+rev(M_{-2})$$.

If $$M_{-2}=p_{-2}r_{-2}$$ (two digits), then $$N_{-2}=p_{-2}r_{-2}+r_{-2}p_{-2} = 10(p_{-2}+r_{-2}) + p_{-2} + r_{-2} = 11p_{-2} + 11r_{-2}$$. And since $$p_0 + r_0$$ must equal $$p_{-2} + r_{-2} + 2$$: $$N_0 = 10(p_{-2}+r_{-2}+2) + p_{-2} + r_{-2} + 2 = 11p_{-2} + 11r_{-2} + 22$$. So $$N_0$$ is $$22$$ greater than $$N_{-2}$$ and therefore cannot be $$2$$ greater than it: this pair cannot exist. That is not surprising, because there is no wiggle room for additional $$9$$s in the middle of the number.

For a 3-digit number this also holds. The maximum value of $$N_{-2}$$ is $$p_{-2}9r_{-2} + r_{-2}9p_{-2} = 100(p_{-2}+r_{-2}) + 180 + p_{-2} + r_{-2} = 101p_{-2} + 101r_{-2} + 180$$, while $$N_0 \ge p_00r_0 + r_00p_0 = 100(p_0+r_0) + p_0 + r_0 = 101p_{-2} + 101r_{-2} + 202$$. So $$N_0$$ is at least $$22$$ greater than $$N_{-2}$$ and therefore cannot be $$2$$ greater than it.

For single digits this does not hold, as shown by $$M_0=6$$ and $$M_{-2}=5$$, which give $$N_0=12$$ and $$N_{-2}=10$$. Any $$N$$ below $$10$$ must be constructed by adding two equal single digits ($$M=rev(M)$$), so it must be even. This means that in the range $$10 \le N \le 21$$ we have potential for more than 3 consecutive numbers. (Note that $$N=20$$ would have to be constructed from multi-digit numbers, so if it could be written as $$M+rev(M)$$, then $$22$$ could not.) We know the even numbers in this range up to and including $$18$$ are possible $$(10,12,14,16,18)$$, and that $$11=10+01$$. All odd numbers in this range must be constructed from multi-digit numbers. For $$13,15,17$$ we cannot use digit overflow (single digits would be required), and for $$19$$ we cannot use digit overflow either (it is all $$9$$s behind the leading $$1$$). For any number in the 10's constructed from multi-digit numbers we need an $$M$$ with a lagging/leading $$0$$ to leave a $$1$$ as first digit, and the only 2-digit number that fulfills these rules is $$10$$. So in this range the only odd number that can be written as $$M+rev(M)$$ is $$10+01 = 11$$. This gives us a nice bonus trio of $$10, 11, 12$$, but that's everything. Of course, we could have just checked every case up to $$21$$, but where's the fun in that!

To sum up, there are no pairs of even numbers greater than $$20$$ separated by $$2$$ that can both be written as $$M+rev(M)$$: if an even $$N_0$$ can be written as $$M+rev(M)$$, then $$N_0-2$$ cannot, and consequently $$N_0+2$$ cannot either (if it could, then $$N_0$$ could not). In the range below $$20$$ this argument does not apply, but there the only odd number that works is $$11$$, so by a similar argument no two odd numbers below $$20$$ that differ by $$2$$ can both be written as $$M+rev(M)$$.

Therefore the largest sequence of consecutive numbers that can all be written as $$M+rev(M)$$ has length three: either the bonus trio $$10, 11, 12$$, or a trio in which the first and last numbers are odd and the middle one is even. Their existence is shown by many examples.

• I don't understand this proof. What does "The number N-2 is always smaller than the number N minus 2"? Either "N-2" or "N minus 2" must mean something very different from what I think it does. And how do you get from there to "N-2 cannot be M+rev(M)"? Something to do with the first and last digits, but the actual argument seems to be missing. – Gareth McCaughan Jun 30 '19 at 2:30
• Thanks for the feedback. I'll get back with a cleaner proof and definitions of terms used – P1storius Jul 1 '19 at 10:50
• You say "there are no pairs of even number separated by 2 that can both be written a M+rev(M)". What about 10 and 12? – hexomino Jul 2 '19 at 13:42
• For completeness, you ought to prove the case with single digits also satisfies your conclusion, as @hexomino raises a good point. – El-Guest Jul 2 '19 at 14:06
• Ah yes, of course. There is a lower bound for which this works. I'm guessing it's only the case for single digits but I'll give it some more thought to be sure. It will be low for sure. Since we can check every number below the lower bound, we know the limit of 3 consecutive numbers still holds. And because 2x any single digit is always even, we will not find arbitrarily long sequences there either. – P1storius Jul 3 '19 at 6:58
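The computational search mentioned in the accepted answer (up to $10^7$ and $10^8$) is easy to reproduce. Here is a short Python sketch; the bound of $10^6$ is an arbitrary smaller choice so that it runs in a few seconds, and up to that bound it should find exactly the two length-3 runs quoted in that answer, 10, 11, 12 and 1331, 1332, 1333.

```python
def rev_add(n: int) -> int:
    """n plus its digit reversal, e.g. rev_add(123) = 123 + 321 = 444."""
    return n + int(str(n)[::-1])

LIMIT = 10 ** 6  # arbitrary search bound; raise it to push the search further

# Every sequence value <= LIMIT comes from some n <= LIMIT, since rev_add(n) >= n.
values = {v for v in (rev_add(n) for n in range(LIMIT + 1)) if v <= LIMIT}

# Group the values into maximal runs of consecutive integers.
runs = []
for v in sorted(values):
    if runs and v == runs[-1][-1] + 1:
        runs[-1].append(v)
    else:
        runs.append([v])

print([r for r in runs if len(r) >= 3])  # expected: [[10, 11, 12], [1331, 1332, 1333]]
print(max(len(r) for r in runs))         # longest run found below LIMIT (expected: 3)
```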
2020-08-14 18:28:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 199, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7194978594779968, "perplexity": 355.27313254360064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00331.warc.gz"}
https://math.stackexchange.com/questions/2587203/invert-multinomial-logit-link-with-three-unknown
# invert multinomial logit link with three unknown I am attempting to invert the multinomial logit link with three variables. I can do it with two variables, but I do not know how to do it with three. A multinomial logit function for three states, i.e., three probabilities, $a, b, c$ is written as follows: $a = \frac{e^{x}} {(1 + e^{x} + e^{y})}$ $b = \frac{e^{y}} {(1 + e^{x} + e^{y})}$ $c = 1 - a - b$ These three probabilities are defined by the parameters $x$ and $y$. If we know $x$ and $y$ we can obtain $a$, $b$ and $c$. However, given $a, b, c$, how do we obtain $x$ and $y$? One way is to use multinomial logistic regression. However, there should be a closed form solution in which $x$ and $y$ are obtained using basic algebra. I can obtain the closed form solution for two parameters, $x$ and $y$: $x = \log(\frac{a (1 - b) + (a b)}{ (1 - a) (1 - b) - a b})$ $y = \log(\frac{b (1 - a) + (b a) }{ (1 - b) (1 - a) - b a})$ Which simplifies to: $x = \log(\frac{a}{1 - a - b})$ $y = \log(\frac{b}{1 - a - b})$ How can I obtain the closed form solution when there are three parameters $x, y, z\;$? $a = \frac{e^{x}} {(1 + e^{x} + e^{y} + e^{z})}$ $b = \frac{e^{y}} {(1 + e^{x} + e^{y} + e^{z})}$ $c = \frac{e^{z}} {(1 + e^{x} + e^{y} + e^{z})}$ $d = 1 - a - b - c$ • what is the meaning of the 3rd equation c=1-a-b? Why do you need this? The last equation c=1-a-b-c seems to be nonsense. – miracle173 Jan 7 '18 at 10:50 • Why do you write a(1-b)+(ab) instead of a? – miracle173 Jan 7 '18 at 10:51 • Why not work out the explicit expression for $c$ first ? for the first case $c = 1 - a - b = \frac{1}{1+e^x + e^y}$, so $e^x = \frac{a}{c} \implies \cdots$. The rest is similar. – achille hui Jan 7 '18 at 11:08 • The very last equation should probably have a $d$, not a $c$, on the left hand side. – Barry Cipra Jan 7 '18 at 14:45 • @BarryCipra Thank you. That was a typo probably introduced when applying the revised formatting. – Mark Miller Jan 7 '18 at 16:01 For every fixed number of variables, you are considering, for every $i$, $$a_i=\frac{e^{x_i}}{1+s}$$ where $$s=\sum_ie^{x_i}$$ and $$z=\frac1{1+s}$$ and you are asking how to invert this system, that is, how to deduce the collection $(x_i)$ from the collection $(a_i)$ and $z$, or even, from $(a_i)$ only. To solve this, consider that $$z+\sum_ia_i=1$$ hence, for every $i$, $$e^{x_i}=a_i\cdot(1+s)=\frac{a_i}z$$ that is, $$x_i=\log a_i-\log z=\log\left(\frac{a_i}{1-\sum\limits_ka_k}\right)$$ • Rereading this post, I see that its content was entirely, albeit more concisely, explained by @achillehui in a comment on main posted 6 hours before it, with no visible reaction or understanding from the OP. – Did Jan 8 '18 at 6:34 If you want to solve $$\begin{eqnarray} a_1 &=& \frac{e^{x_1}} {1 + \sum_{i=1}^n e^{x_i}}\\ &\ldots& \tag{1}\\ a_n &=& \frac{e^{x_n}} {1 + \sum_{i=1}^n e^{x_i}}\\ \end{eqnarray}$$ with constants $$a_1,\ldots,a_n$$ and variables $$x_1,\ldots,x_n$$ then set $$\begin{eqnarray} u_1&=&e^{x_1} \\ &\ldots& \tag{2}\\ u_n&=&e^{x_n} \end{eqnarray}$$ and substitute in $(1)$ and multiply each equation by its denominator and you have $$\begin{eqnarray} a_1 + \sum_{i=1}^n a_1 u_i&=& u_1\\ &\ldots& \tag{3}\\ a_n + \sum_{i=1}^n a_1 u_i&=& u_n\\ \end{eqnarray}$$ This is a linear equation with variables $u_i.$ You can solve this numerically or algebraically with the well known methods for linear equations. 
Then you can do the back substitution to get the $x_i.$ $$\begin{eqnarray} x_1&=&\log{u_1} \\ &\ldots& \tag{4}\\ x_n&=&\log{u_n} \end{eqnarray}$$

Another approach is to solve each variable $x$, $y$ and $z$ as a function of the other two:

$x = \log(\frac{(a + a e^{y} + a e^{z})} {(1 - a)})$ $y = \log(\frac{(b + b e^{x} + b e^{z})} {(1 - b)})$ $z = \log(\frac{(c + c e^{x} + c e^{y})} {(1 - c)})$

Then: substitute $x$ into the equation for $y$; substitute $x$ into the equation for $z$; substitute $y$ into the equation for $x$; substitute $y$ into the equation for $z$; substitute $z$ into the equation for $x$; and substitute $z$ into the equation for $y$. Each parameter $x$, $y$ and $z$ is now expressed as a function of just one of the other two. To express each parameter as a function of itself, substitute, for example, the equation expressing $x$ as a function of $y$ into the equation expressing $y$ as a function of $x$. Once these three equations are simplified we have:

$x = \log(-(a * c - a) / (c^2 + (b+a-2)*c + (a - 1)*b - a + 1 - a * b))$
$y = \log(-(b * c - b) / (c^2 + (b+a-2)*c + (a - 1)*b - a + 1 - a * b))$
$z = \log(((1-b)*c) / (((b+a-1)*c+b^2+(a-2)*b-a+1) - (a*c)))$
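The closed form $x_i = \log\bigl(a_i/(1-\sum_k a_k)\bigr)$ from the first answer is easy to sanity-check numerically. Below is a minimal Python sketch; the test parameters are arbitrary.

```python
import math

def forward(x):
    """a_i = exp(x_i) / (1 + sum_j exp(x_j)) -- the multinomial logit link."""
    denom = 1.0 + sum(math.exp(xi) for xi in x)
    return [math.exp(xi) / denom for xi in x]

def invert(a):
    """Closed-form inverse: x_i = log(a_i / (1 - sum_k a_k))."""
    z = 1.0 - sum(a)  # probability of the reference category
    return [math.log(ai / z) for ai in a]

x_true = [0.3, -1.2, 2.0]   # arbitrary test parameters
a = forward(x_true)
print(a)                    # three probabilities summing to less than 1
print(invert(a))            # recovers [0.3, -1.2, 2.0] up to rounding error
```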
2020-09-30 10:08:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902362585067749, "perplexity": 203.78761717118996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402123173.74/warc/CC-MAIN-20200930075754-20200930105754-00267.warc.gz"}
https://errortools.com/windows/fix-windows-10-wont-upgrade-to-a-newer-version/
If, when hovering over the upgrade icon on the taskbar, you get:

Your version of Windows 10 would reach the end of service soon, Click to download a newer version of Windows 10 to stay supported.

or

An unsupported version of Windows will no longer receive software updates from Windows Update. These updates include security updates that can help protect your PC from harmful viruses, spyware, and other malicious software which can steal your personal information. Windows Update also installs the latest software updates to improve the reliability of Windows—such as new drivers for your hardware.

and you are unable to perform updates, then this guide is for you. There are several things you can do to fix this issue; it is advisable to follow them in the order presented for best performance and system safety.

1. ### Run SetupDiag

Download and run SetupDiag from the official Microsoft website. SetupDiag is a standalone diagnostic tool that can be used to obtain details about why a Windows 10 upgrade was unsuccessful. It works by examining Windows Setup log files to determine the root cause of a failure to update or upgrade the computer. Once the scan is completed, check the generated log files. SetupDiagResults.log will be generated and saved in the same folder where you downloaded SetupDiag; open it using Notepad. You may need to take a look at these folders:

• \Windows\Panther
• \$Windows.~bt\sources\panther
• \$Windows.~bt\Sources\Rollback
• \Windows\Panther\NewOS

If there are any issues or conditions that are blocking the upgrade, they will be listed here.

2. ### Edit the TargetReleaseVersionInfo registry key

Press ⊞ WINDOWS + R to open the run dialog. In the run dialog type Regedit and press ENTER, then locate: HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate. Locate the following two values and create them if they do not exist: TargetReleaseVersion (a DWORD) and TargetReleaseVersionInfo (a string holding the target version). Set the value of TargetReleaseVersion to 1. If you are stuck on Windows 10 1909 and want to upgrade to Windows 10 20H2 now, set TargetReleaseVersionInfo to 20H2. Reboot the computer. (A scripted version of this tweak is sketched below.)

3. ### Use the Windows 10 Update Assistant

Visit Microsoft.com and hit the ‘Update now’ button visible on the page, then click on Update now to start the upgrade process.

Windows and Devices chief Panos Panay has revealed a new focus sessions feature that will be in Windows 11 on his Twitter account today. He himself is referring to it as a game-changer, especially with Spotify integration.

### So what is a focus session?

From the video clip provided on Twitter, we can see that focus session users will be able to choose a specific task from a previously made task list, choose songs that will play in the background while the task is active, and set a timer for the chosen task with breaks. Maybe the best comparison and explanation would be a desktop Google Calendar task with music; basically, that’s it. A neat and good organizer inside your Windows 11 operating system. I think that this is generally a good idea and for sure it will find its audience.
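For readers who prefer to script the registry tweak from step 2 above, here is a minimal sketch using Python's standard winreg module. Assumptions: it must be run as administrator on the affected machine, and "20H2" is just the example target release used in the article.

```python
# Minimal sketch of the step 2 registry tweak -- run from an elevated Python on
# Windows; writing under HKEY_LOCAL_MACHINE requires administrator rights.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
TARGET = "20H2"  # example release from the article; change to the version you want

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # DWORD flag that switches the target-release policy on
    winreg.SetValueEx(key, "TargetReleaseVersion", 0, winreg.REG_DWORD, 1)
    # String value naming the feature update to target
    winreg.SetValueEx(key, "TargetReleaseVersionInfo", 0, winreg.REG_SZ, TARGET)

print("Policy written; reboot so Windows Update picks it up.")
```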
There are times when some data in the browser is conflicting with the loading of the website and triggers some problems like download getting stuck at 100%. And so you can try to clear your browser’s data. This might be a very basic solution but oftentimes it works in fixing this kind of error in Google Chrome. Follow the steps below to clear the data in your browser. • After that, tap the Ctrl + H keys. Doing so will open a new panel that allows you to delete the browsing history and other data in your browser. • Now select every checkbox that you see and click on the Clear data button. ### Option 2 – Try disabling Chrome virus scan The next thing you can do to resolve the problem is to disable the Chrome virus scan. It is possible that the virus scan is the one that’s preventing the download to be completed, thus, try to disable it and see if it works. The download getting stuck might also be caused by your antivirus program which could be interfering it from running. To fix this, you have to whitelist dism.exe. How? Refer to these steps: • Open the Windows Defender Security Center from the system tray area. • Next, click the “Virus & threat protection” option and then open the “Virus and threat protection settings”. • After that, scroll down until you find the “Exclusions” and click on the “Add or remove exclusions” option. • Then click the plus button and select the type of exclusion you want to add and from the drop-down list, select Folder. • Next, navigate to this path and select the WinSxS folder: C:/Windows/WinSxS • When a User Account Control or UAC prompt, just click on Yes to proceed. There are certain browser extensions, especially those security programs, that prevent any suspicious files from being downloaded. So the easy way to fix the problem is to launch the Chrome browser in Incognito mode and then try to download the file again. Additionally, you might want to consider disabling the problematic extension. ### Option 5 – Reset Chrome Resetting Chrome can also help you fix the problem. This means that you will be restoring its default settings, disabling all the extensions, add-ons, and themes. Aside from that, the content settings will be reset as well and the cookies, cache, and site data will also be deleted. To reset Chrome, here’s what you have to do: • Open Google Chrome, then tap the Alt + F keys. • After that, click on Settings. • Next, scroll down until you see the Advanced option, once you see it, click on it. • After clicking the Advanced option, go to the “Restore and clean up option and click on the “Restore settings to their original defaults” option to reset Google Chrome. ### Option 6 – Try to clean reinstall Chrome There are instances when programs leave files behind after you’ve uninstalled them and the same thing can happen to Chrome so before you reinstall Chrome, you have to make sure that you have deleted the User Data folder. To do so, refer to the following steps: • Hit the Win + R keys to open the Run dialog box. • Next, type “%LOCALAPPDATA%GoogleChromeUser Data” in the field and hit Enter to open the User Data folder. • From there, rename the default folder and name it something else, e.g. “Default.old”. • After that, install Google Chrome again and check if the issue is now fixed. ## Error Code 14 - What is it? Generated due to temporary device and Window system conflicts, Error code 14 is a typical Device Manager error.  
This error code can pop up at any time and is usually displayed in the following format:

“This device cannot work properly until you restart your computer. (Code 14)”

Though it is not a fatal error code like the infamous Blue Screen of Death and runtime error codes, it is still advisable to repair it promptly to avoid inconvenience. It can lower your PC’s performance and prevent you from using certain hardware devices as a result of driver problems.

## Error Causes

Error 14 is triggered when your system is unable to correctly read the files and settings that are important for running a certain piece of your PC hardware. This conflict may occur due to outdated, corrupted, or poorly installed drivers; other causes may include corrupted registry entries. Simply put, error code 14 is a good reminder that PC users should pay attention to updating device drivers to ensure healthy systems and optimum PC performance.

## Further Information and Manual Repair

The good news is that error code 14 is quite easy to resolve. You don’t have to spend hundreds of dollars to hire a professional programmer to get it fixed. To repair it, simply follow the DIY methods listed below. We have compiled some of the best, proven, and easy-to-perform solutions for resolving Device Manager error codes like error code 14. Follow the instructions here to restore the functionality of your PC. Let’s get started:

### Method 1 - Reboot Your System

Sometimes, an action as simple as rebooting your PC can resolve technical problems like error code 14, so try this before any other method. The moment the error code pops up on your screen, simply close all the programs running on your system and restart your PC. This refreshes your system settings, processes and services, allowing it to run smoothly. However, if the error still persists, then try the other methods given below.

### Method 2 - Delete the Corrupted Registry Entries

Corrupted registry entries can also trigger error code 14. To resolve this, delete them. Go to the start menu and type Regedit; the Registry Editor will open. Now navigate through the HKEY_LOCAL_MACHINE key and expand further to locate HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class, then select the subkey (class GUID) for the device that is reporting the error. Once located, in the right pane click UpperFilters and then click Delete on the Edit menu. Click Yes when prompted to confirm the deletion. Now, in the right pane, click LowerFilters and repeat the same steps you performed to delete UpperFilters. Confirm the deletion and then exit the Registry Editor. To activate the changes, restart your PC. This will hopefully resolve the issue; if the error code still pops up on your computer screen, then try Method 3. (A scripted sketch of this registry edit follows Method 3.)

### Method 3 - Update Corrupted/Outdated Drivers

Drivers are basically software applications that communicate with and provide instructions to your system to operate hardware devices. When these become corrupt or outdated, you start experiencing problems like error code 14. To resolve this, locate the corrupted drivers and update them. You can do it both manually and automatically; we’ll discuss both ways. For a manual driver update, go to the start menu, Control Panel, and then Device Manager. Now go through all the devices listed to locate problematic drivers: look for yellow exclamation marks next to each device, which indicate driver issues. To repair, right-click on each affected hardware device and select Update driver.
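Method 2 can also be scripted. The sketch below uses Python's standard winreg module and is illustrative only: the class GUID is a placeholder that must be replaced with the GUID subkey of the device class that is actually reporting error 14, it has to run as administrator, and deleting filter values for the wrong class can break other hardware.

```python
# Illustrative sketch of Method 2 -- run as administrator, and only after
# replacing the placeholder GUID with the class GUID of the failing device.
import winreg

CLASS_GUID = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"  # placeholder, device-specific
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Class" + "\\" + CLASS_GUID

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    for name in ("UpperFilters", "LowerFilters"):
        try:
            winreg.DeleteValue(key, name)
            print("Deleted", name)
        except OSError:
            print(name, "not present, nothing to delete")
```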
### Method 4 - Install DriverFIX - Alternative to Manual Driver Update Updating each driver separately and manually is a stressful and frustrating task. This can take a lot of your time. Sometimes, you may also have to download new driver versions from the internet to install perform updates. To avoid the hassle and save time, it is advisable to install a program like DriverFIX. This user-friendly and intuitive software is based on sophisticated technology featuring an intelligent programming system that automatically detects all your PC drivers in seconds. Once you install this software on your system, it instantly identifies problematic drivers and matches them to the latest versions. It updates PC drivers immediately thereby resolving the error code 14 problem in a few seconds. It enables accurate installations and ensures that your PC runs at its optimum level. More importantly, with this software installed on your PC, you don’t have to worry about keeping track of your driver updates anymore. The software updates drivers on a regular basis with new and compatible versions ensuring that your PC functions properly and you don’t experience any type of Device Manager error code. It is easy to use and install. It is compatible with all Windows versions. Windows 10 has displayed one of the most efficient and user-friendly interfaces. However there have been many issues on the backend of this commonly praised operating system: For example, Windows Update is still a wonky and error-laded system application. One example of this error is Windows Update Error 0x80073712 which stands in the way of users just wanting to keep their PCs updated hassle-free. The error code 0x80073712 signifies that a file needed by Windows Update to function is either damaged, missing, or corrupted. However, this does not mean that your Windows version will forever stay as-is with no mode to update it. Windows Update Error 0x80073712 is perfectly fixable with the set of provided steps below and some of Windows’ built-in troubleshooting steps: ### Solution 1: Open Windows Update Troubleshooter 1. Open the Windows Update Troubleshooter by pressing the Windows and S keys simultaneously. 2. Enter the word “Troubleshoot” in the search box and select the Troubleshoot result 3. On the new window, select “Windows Update” to troubleshoot. 4. Run the troubleshooter, then open Windows Update and try to install the update again. ### Solution 2: Run the DISM tool 1. Open the command prompt by pressing Windows and S keys simultaneously 2. Enter “cmd” in the search box. Right Click command prompt among the results and select “Run as administrator” 3. In the Command Prompt window type the following and press the Enter key after every command DISM.exe /Online /Cleanup-image /Scanhealth DISM.exe /Online /Cleanup-image /Restorehealth 1. To close the Administrator: Command prompt window, type Exit, and then press Enter. 2. Run Windows Update again. ### Solution 3: Rename the Software Distribution folder 1. Open the command prompt as previously mentioned 2. Input the following command pressing Enter after every line net stop wuauserv net stop cryptSvc net stop bits net stop msiserver rename c:/windows/SoftwareDistribution/softwaredistribution.old net start wuauserv net start cryptSvc net start bits net start msiserver exit 3. Restart the PC and run the updater if it works again. ### Solution 4: Restart Windows Update Services 1. Press the Windows logo key and R together to open Run -> Input services.msc -> and press Enter 2. 
Search for the Windows Update service -> Check its status 3. If it is not indicated, right-click on the service and select Start to force start your Windows Update 4. If you see an error, locate the Startup Type option and set it to Automatic 5. Now you should reboot your computer and see if your Windows Update is OK ### Solution 5: Fix Registry Issues If after all the aforementioned solutions, you still experience problems with Windows Update, the problem may lie in the registry that is either damaged or corrupted. You may choose to do manual editing of your Windows registry by opening your Windows registry editor. But doing so is risky as one wrong letter may do incalculable damage to your system. To do so safely for more inexperienced users we recommend using a third-party registry cleaner/tools, many of which can be found online. If you always use the Google Chrome browser in browsing the internet, then you might have come across an error message saying, “He’s dead, Jim!” along with a funny looking face that’s peeking its tongue out and another detailed message saying, “Either Chrome Ran out of memory or process for the webpage has terminated for some other reason. To continue, reload the webpage or go to another page”. This error message in Google Chrome is actually quite famous and it appears for various reasons but it may have something to do with a memory issue. The Google Chrome browser is known to consume a lot of memory and the more web pages you open and load, it takes up more resources. Thus, the first thing you need to do when you encounter this error is to simply click the Reload button to continue browsing the internet or close the browser and then open it again. On the other hand, if you keep seeing this error message, then that’s a whole different story as you have to take some action to prevent it from popping up again, for good. Follow the instructions given below to fix the error in Chrome. ### Option 1 – Reduce Google Chrome’s memory usage The first thing you can try is reducing the memory usage of the Chrome browser. However, this option has a bit of a disadvantage. If a website crashes, all the instances of that website will also crash although other open tabs and websites won’t be affected. This process is referred to as “Process-per-site” mode which you will have to launch Chrome within this parameter. ### Option 2 – Run Google Chrome with Strict Site Isolation Aside from reducing Chrome’s memory usage, you can also run the browser with the Strict Site Isolation which makes sure that the crashing of one tab in the browser won’t affect the entire Windows as this feature will run every website you open on its own isolated process. ### Option 3 – Run the built-in Malware Scanner and Cleanup tool in Chrome In case you don’t know, there is actually a built-in malware scanner and cleanup tool in Chrome that helps you get rid of any unwanted ads, pop-ups, and even malware, as well as unusual startup pages, toolbars, and other things that could affect the performance of the browser. ### Option 4 – Reset Google Chrome Resetting Chrome can also help you get rid of the “He’s dead, Jim!” error message for good. Resetting Chrome means restoring its default settings, disabling all the extensions, add-ons, and themes. Aside from that, the content settings will be reset as well and the cookies, cache, and site data will also be deleted. To reset Chrome, here’s what you have to do: • Open Google Chrome, then tap the Alt + F keys. • After that, click on Settings. 
• Next, scroll down until you see the Advanced option, once you see it, click on it. • After clicking the Advanced option, go to the “Restore and clean up the option and click on the “Restore settings to their original defaults” option to reset Google Chrome. ### Option 5 – Perform a clean reinstall on the Chrome browser Although reinstalling any program is easy, not so much for Google Chrome as you need to make sure that the User Data folder is deleted before you reinstall it. • Tap the Win + R keys to open the Run prompt. • Then type %LOCALAPPDATA%GoogleChromeUser Data in the field and hit Enter. • Next, rename the “Default” folder inside the path you were redirected to. For instance, you can rename it to “Default-old”. • After that, install the Chrome browser again. ### Option 6 – Try to flush the DNS and reset the TCP/IP There are instances when a network goes into haywire because of a bad DNS. Thus, a bad DNS might be the one that’s causing this headache so it’s time for you to reset the entire network to resolve the issue. To reset the network, here’s what you have to do: • Click the Start button and type in “command prompt” in the field. • From the search results that appear, right-click on Command Prompt and select the “Run as administrator” option. • After opening Command Prompt, you have to type each one of the commands listed below. Just make sure that after you type each command, you hit Enter • ipconfig /release • ipconfig /all • ipconfig /flushdns • ipconfig /renew • netsh int ip set dns • netsh winsock reset After you key in the commands listed above, the DNS cache will be flushed and the Winsock, as well as the TCP/IP, will reset. • Now restart your computer and open Google Chrome then try opening the website you were trying to open earlier. Note: You can also try changing the DNS server to the Google Server, i.e. 8.8.8.8, and then see if it works for you or not. ### Option 7 – Disable both the antivirus and firewall temporarily As you know, both the firewall and antivirus programs are there to protect the operating system from any malicious threats. So if they find that there is some malicious content in a website you are visiting, they will block the site right away. Thus, it could also be the reason why you’re getting the “He’s dead, Jim!” error so you need to disable both the firewall and antivirus program temporarily and then try opening the website again. If you are able to open the website, you need to add this site as an exception and then enable the firewall and antivirus program back. If you suddenly find the Windows Recovery Environment not working and you see an error message saying, “Could not find the recovery environment”, then you’ve come to the right place as this post will guide you on how you can fix it. In times when you can’t boot into the Windows Recovery Environment, there could be several reasons behind it. However, have you ever wondered where exactly the Windows Recovery Environment is in your computer? Windows initially places the Windows RE Image file in the installation partition during Windows Setup so if you have installed Windows in the C drive, you can find the Windows RE at the C:/Windows/System32/Recovery or C:/Recovery folder. Keep in mind that this folder is hidden and later on, the system copies the image file into the recovery tools partition to make sure that one can boot into recovery if there are any issues with the drive partition. 
The “Could not find the recovery environment” error mostly occurs if the Windows Recovery Environment is disabled or if the “Winre.wim” file is corrupted. Thus, to fix this error, you need to refer to the given suggestions below. ### Option 1 – Try to enable Windows Recovery Environment • In the Windows Start Search, type “PowerShell” and from the search results that appear, right-click on Windows PowerShell and then select the “Run as administrator” option to open it with admin privileges. • Next, type the “reagentc /info” command and tap Enter to execute it. • After that, if the output states that Status is enabled, then you’re all set. • Now type the “reagentc /enable” command and tap Enter to enable the Windows Recovery Environment. You will see a success message at the end signifying that Windows RE is available. ### Option 2 – Try to fix the corrupted or missing “Winre.wim” file If the Winre.wim file is either corrupted or missing, you need to get a new copy of this file from another computer where the Windows RE is working. Once you’re able to get a new copy of the Winre.wim file, you have to set the image path to a new location. For more details, refer to these steps: • First, type “Powershell” in Windows Start Search and right-click on Windows PowerShell from the results, and select Run as administrator. • Next, execute the given command below to change the path of the WIM file to the new location. Note that the steps should be used when the file path of the Windows Recovery Environment is different from the usual spot. Reagentc /setreimage /path C:RecoveryWindowsRE • As mentioned, if the file is corrupted, you just have to get a new copy from another PC but before you do that, make sure that the WINRE on that computer is disabled (just enable it later on) and then place it in the C:/Recovery path and then set its path again using the command given above and then verify its path by executing the following command. reagentc /info command Note: Since the Recovery folder is hidden as well as the WINRE folder in it and you won’t be able to access them using the Windows File Explorer, you need to use the Windows PowerShell or Command Prompt so that you can access them. ### Option 3 – Try checking and fixing the WinRE Reference in the Windows Boot Loader The Windows Boot Loader is the one that determines if it has to load the Windows Recovery Environment. It could be that the boot loader is pointing to an incorrect location which is why you’re getting the error. To resolve it, you have to check and fix the WinRE Reference in the boot loader. How? Follow these steps: • In the Windows Start Search, type “PowerShell” and from the search results that appear, right-click on Windows PowerShell and then select the “Run as administrator” option to open it with admin privileges. • After that, execute the “bcdedit /enum all” command. • Next, look for an entry in the Windows Boot Loader identifier set as Current and look for “recoverysequence” in that section and take note of the GUID. • Ensure that the device and the osdevice items show the path for the Winre.wim file and that they are the same. If not, you need to point the current identifier to the one which has the same. • Once you’ve found the new GUID, execute this command: bcdedit /set {current} recoverysequence {GUID_which_has_same_path_of_device_and_device} • Now check if the error in the Recovery Environment is fixed or not. 
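If you need to repeat Option 1 on several machines, the check-and-enable step can be wrapped in a few lines of script. This is only a sketch: it shells out to the built-in reagentc tool, assumes an elevated prompt, and assumes English-language output when it looks for the word "Enabled".

```python
# Sketch of Option 1: query the Windows RE status with reagentc and enable it
# if it is reported as disabled.  Requires an elevated (administrator) prompt.
import subprocess

def winre_enabled() -> bool:
    info = subprocess.run(["reagentc", "/info"], capture_output=True, text=True)
    return "Enabled" in info.stdout  # assumption: English-language output

if winre_enabled():
    print("Windows RE is already enabled")
else:
    subprocess.run(["reagentc", "/enable"], check=True)
    print("Windows RE enabled:", winre_enabled())
```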
### Option 4 – Try creating a Recovery Media You could also try creating a Recovery Media to resolve the error in the Windows RE. All you have to do is download the Windows 10 ISO file using the Media Creation tool and then create a recovery drive. Once you’re done, check if it fixes the problem or not. EliteUnzip is a program developed by Mindspark Interactive. This program lets you compress and extract all the popular archive types. From the Author: Elite Unzip is a program for creating and extracting archive files; it has support for over 20 file formats. This application downloads onto your computer in two parts: one for your desktop, and one for your browser. They both work together to make packing and unpacking archive files easy. While EliteUnzip itself is not a threat, it comes bundled with other software that might cause a problem to your computer. Due to its bundled nature, several anti-virus scanners have marked EliteUznip as a Potentially Unwanted Program and is therefore not recommended to keep on y our computer, especially because there are other free programs that do the same functions without the additional bundled software. ### Precisely what is a Potentially Unwanted Program (PUP)? Have you ever found out an unwanted program running on your computer and wondered how the heck it got there? These unwanted programs, which are known as Potentially Unwanted Programs, or PUPs in short, often tag along as a software bundle when downloading the program and can result in major problems for computer users. It’s clear by the name – unwanted programs – but did not really constitute “malware” in the traditional sense. The reason for this is that the majority of PUPs enter into users’ computers not because they exploit security weaknesses, for instance, but mainly because the users give consent to download it – unknowingly in many instances. No matter whether it is regarded as malware or otherwise, PUPs are almost always bad for a computer owner as they might bring on adware, spyware, keystroke logging, and other nasty “crapware” features on your PC. ## Protect yourself from unwanted programs All malware is inherently unsafe, but certain kinds of malicious software do a lot more damage to your computer or laptop than others. Some malware variants alter browser settings by adding a proxy server or change the computer’s DNS configuration settings. When this happens, you will be unable to visit some or all of the sites, and therefore unable to download or install the necessary security software to eliminate the malware. If you are reading this, chances are you’re stuck with a virus infection that is preventing you to download and install Safebytes Anti-Malware software on your computer. There are some actions you can take to circumvent this issue. ## Install in Safe Mode In the event the malware is set to run at Windows start-up, then booting in safe mode should avoid it. Since only the minimal applications and services start-up in safe mode, there are hardly any reasons for conflicts to happen. The following are the steps you should follow to start your computer into the Safe Mode of your Windows XP, Vista, or 7 computers (go to Microsoft site for directions on Windows 8 and 10 computers). 1) Tap the F8 key continuously as soon as your computer boots, however, before the large windows logo comes up. This should bring up the Advanced Boot Options menu. 2) Make use of the arrow keys to select Safe Mode with Networking and press ENTER. 
3) Once you get into this mode, you should have internet access once again. Now, obtain the malware removal software you need by using the web browser. To install the program, follow the guidelines in the setup wizard. 4) Following installation, run a full scan and let the program eliminate the threats it finds. Some malware mainly targets particular browsers. If this sounds like your case, make use of another internet browser as it may circumvent the virus. If you appear to have malware attached to Internet Explorer, then switch over to a different browser with built-in safety features, such as Firefox or Chrome, to download your favorite anti-malware program – Safebytes. ## Install antivirus on a thumb drive Here’s yet another solution which is utilizing a portable USB anti-malware software package that can check your system for malware without needing installation. Adopt these measures to use a flash drive to clean your infected system. 1) Download the anti-malware on a virus-free computer. 2) Plug the pen drive into the uninfected computer. 3) Double click on the exe file to open the installation wizard. 4) When asked, select the location of the pen drive as the place where you want to store the software files. Follow the directions to complete the installation process. 5) Transfer the thumb drive from the uninfected computer to the infected PC. 6) Double-click the anti-malware software EXE file on the thumb drive. 7) Click on “Scan Now” to run a scan on the affected computer for viruses. ## Benefits and Features of SafeBytes Anti-Malware If you are looking to purchase anti-malware for your PC, there are various brands and applications for you to consider. A few of them are great and some are scamware applications that pretend as authentic anti-malware software waiting to wreak havoc on your computer. You need to pick a tool that has gained a good reputation and detects not only computer viruses but other types of malware as well. When considering commercial application options, many people opt for popular brands, such as SafeBytes, and are very happy with them. SafeBytes anti-malware is really a powerful, highly effective protection application intended to help users of all levels of IT literacy in finding and removing harmful threats out of their computer. This application could easily identify, remove, and protect your computer from the most advanced malware intrusions such as adware, spyware, trojan horses, ransomware, worms, PUPs, as well as other possibly damaging software applications. ## Technical Details and Manual Removal (Advanced Users) To remove EliteUnzip manually, go to the Add/Remove programs list in the Control Panel and choose the program you want to get rid of. For internet browser plug-ins, go to your browser’s Addon/Extension manager and select the plug-in you want to remove or disable. You will additionally also want to totally reset your internet browser to its default settings. To make sure of complete removal, find the following Windows registry entries on your system and remove it or reset the values appropriately. However, this is often a tricky task and only computer professionals could perform it safely. Also, certain malware is capable of replicating itself or preventing its deletion. Carrying out this malware-removal process in Safe Mode is suggested. 
Files: %PROGRAMFILES%\EliteUnzip_aa\bar.bin\aaSrcAs.dll %PROGRAMFILES(x86)%\EliteUnzip_aa\bar.bin\aabar.dll %PROGRAMFILES%\EliteUnzip_aa\bar.bin\aaHighIn.exe %PROGRAMFILES(x86)%\EliteUnzip_aa\bar.bin\CrExtPaa.exe %USERPROFILE%\Application Data\EliteUnzip_aa %USERPROFILE%\AppData\LocalLow\EliteUnzip_aa %UserProfile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Extensions\gaklecphgkijookgheachpgdkeminped %LOCALAPPDATA%\EliteUnzip_aa %USERPROFILE%\Meus documentos\Downloads\EliteUnzipSetup.EliteUnzip_aa.ffjcmnpnoopgilmnfhloocdcbnimmmea.ch.exe %PROGRAMFILES(x86)%\aaUninstall Elite Unzip.dll %USERPROFILE%\Downloads\EliteUnzipSetup.exe C:\Program Files\EliteUnzip\EliteUnzip.exe Search And Delete: RebootRequired.exe IAC.UnifiedLogging.dll DesktopSdk.dll IAC.Helpers.dll UnifiedLogging.dll SevenZipSharp.dll 7z.dll 7z64.dll LogicNP.FileView.WPF.dll LogicNP.FolderView.WPF.dll LogicNP.ShComboBox.WPF.dll lua5.1.dll Registry: HKEY_CURRENT_USER\Software\AppDataLow\Software\EliteUnzip_aa HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip EPM Support HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip Search Scope Monitor HKEY_CURRENT_USER\Software\EliteUnzip_aa HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Toolbar, value: ef55cb9f-2729-4bff-afe5-ee59593b16e8 HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip AppIntegrator 64-bit HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip EPM Support HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip Search Scope Monitor HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip AppIntegrator 32-bit HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run, value: EliteUnzip AppIntegrator 64-bit HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\EliteUnzip_aaService HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\EliteUnzip_aaService HKEY_LOCAL_MACHINE\SYSTEM\ControlSet002\services\EliteUnzip_aaService HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce, value: EliteUnzip_aabar Uninstall HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\EliteUnzip_aa HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Mindspark\EliteUnzip HKEY_LOCAL_MACHINE\SOFTWARE\Mindspark\EliteUnzip We have often talked about the security of your computer, we have been giving you tips and tried to explain how to best protect your computer from various attacks and malicious software. Today we will be talking about antivirus applications. Antivirus software has become the kind of a must-have in every computer in this day and age. When you think about it, our systems are connected to the internet most of the time if not always, and therefore kind placed in from the line of various cyber-attacks. Here antivirus software comes into focus, especially because it has evolved long from just a simple virus removal tool to full security suites. We will be going through the best of these applications in order to present both their good and bad sides and hope that we will help you in picking the right one for you. Remember, picking either one of the presented solutions is way better than not having one at all. The list is made from best down in our opinion so number one is highly recommended. ### List of Best antivirus applications of 2021 1. #### BitDefender In our opinion the best overall security protection suite for this age and time. 
Bitdefender has cemented itself as number one a few years back and it holds that status even today. It has top-of-the-game virus protection, an incredible amount of features, safepay banking online protection and it is amazingly cheap. Its downside we could say is that it can be annoying sometimes, especially if you set it to maximum protection and paranoid mode. In this case, it will often ask what to do and how to behave leading to minor annoyances. 2. #### Norton antivirus Norton antivirus is well known to older computer users, this package has been around a long time and it is our pick for closest one to challenge top place because of its packed features. The number of features it has are really stunning and it comes also with great and maybe best-browsing protection of all candidates. It also has a backup tool bundled with it but the reason why it is in the second place is that it is taxing to the system and can have a real impact on computer performance. Great protection is important, but so it is working on it without slowdowns. 3. #### Kaspersky Another one of the old antivirus software on the list. Kaspersky Lab was once top tier antivirus software but it dropped down due to its high prices, later they have changed their pricing to be more in trend with other rival companies but many have switched to something else. Today it still packs one of the best antivirus engines with fast and configurable scans. It also has very impressive anti-ransomware features but sadly most of the things it just simply does for you leaving you to fully trust it since you cannot really configure much. It is not in step with new technologies and it is lacking some features like support for the Chrome browser that places it lower in our ranking. 4. #### Trend Micro antivirus The biggest sell point for this antivirus application is probably its ease of use and user-friendliness. It also packs a great antivirus engine and impressive anti-ransomware performance but sadly it comes with very limited configurability and somewhat lack of features comparing it with the top three entries. Never the less a very user-friendly application that you can configure without knowing any kind of tech talk, everything is laid out in simple English. 5. #### Avira Perhaps best known for its free version, Avira has a premium one that is well better worth than the free version. Strong suites of this software are mostly aimed at the internet with its great anti-phishing and web protection along with a low price for all of its features. Sadly in the domain of virus protection, there are some reports from independent websites that its antivirus engine is not so great, it will offer you moderate protection but not the best. 6. #### Webroot Secure Anywhere If you are on the lookout for software that has a great virus database and plenty of features but is also incredibly light weighted and fast then look no further, Webroot Secure Anywhere is an application for you, incredibly fast and incredibly small is a great solution for older machines. It keeps all of its databases up in the cloud and this feature among its great advantages is also its greatest disadvantage since if you are out of the internet you will not be able to have the latest virus definitions available to you making this tool very situational. 7. #### Avast Avast has many great protection features and it is very highly configurable. 
A firewall also comes with its premium edition, and it offers great virus protection, including a file shredder and an awesome Wi-Fi inspector for an extra layer of security. This package would be higher on the list if it were not for its lack of web protection and its toll on system resources.
8. #### Sophos Home antivirus
A lack of features and a somewhat strange user interface are the downsides of this software, but on the positive side it has a good antivirus engine and its interface is very friendly. Where it really shines, though, is its price: for one affordable price you get protection for 10 devices, making it a great choice for anyone wanting to protect multiple devices or use a single license for the whole family.
9. #### ESET antivirus
Highly configurable antivirus software with tons of options that is also very light on system resources makes ESET one of the best out there. Its virus engine and database are top tier as well, but some testing labs have reported that the protection on offer is not quite what is advertised, and its strong suit of a great many options and configurations is at the same time its downside, since it is not very friendly to novice and beginner users.
10. #### McAfee antivirus
McAfee antivirus comes packaged with an unlimited VPN service, and if we look at the top-tier pricing plan it is a great investment. Sadly, at its entry-level price it covers only a single device, and it has been reported to pack a slightly outdated virus engine compared with its rivals. Nevertheless, it still offers good virus protection, and if you take into account the VPN that comes with it, it can find its users.
### Conclusion
No matter which antivirus you choose, you will not go wrong; after all, any protection is far better than none.
GameStream is an NVIDIA service that allows users to stream games from their Windows 10 computers to other supported devices, which include the NVIDIA SHIELD devices. However, a number of users have reported that NVIDIA GameStream is not working on their Windows 10 computers. This kind of issue is most likely caused by an improper installation, glitches with the network, and so on. To fix this issue with NVIDIA GameStream, there are several options you need to check out. You can try to log out of GameStream and log back in again. You could also try to update, or uninstall and reinstall, the NVIDIA-related drivers, fix network glitches, or update the NVIDIA SHIELD device. For more details, refer to each of the potential fixes given below.
### Option 1 – Try to log out and log back into NVIDIA GameStream
The first thing you can do is log out and then log back into NVIDIA GameStream. Some users claimed that by doing this simple task they were able to resolve the problem. This is probably because the re-login rebuilds the entire cache of the system and service, replacing any bad portions of that data with fresh copies, so this should resolve the problem with NVIDIA GameStream; if not, refer to the other options below.
### Option 2 – Try updating the drivers from the official NVIDIA site
If the first option didn't work, you can also try updating the drivers from the official NVIDIA website. In case you don't know which type of NVIDIA graphics card your computer has, follow the steps below:
• Tap the Win + R keys to open the Run dialog box.
• Next, type "dxdiag" in the field and click OK or hit Enter to open the DirectX Diagnostic Tool.
• From there, you can see what type of NVIDIA graphics card your system has.
• Take note of your graphics card information and then look for the best drivers for your operating system. Once you've downloaded and installed the file, restart your PC.
### Option 3 – Try to roll back the driver to the previous version
If updating the NVIDIA display drivers didn't work for you, then it's time to roll back the device drivers. It is quite likely that after a Windows update your driver also needs a refresh.
• Tap the Win + R keys to launch the Run window, then type "devmgmt.msc" and hit Enter to open the Device Manager window.
• Under Device Manager you will see a list of drivers. From there, look for the NVIDIA drivers and expand the entry.
• Next, select the driver entries that are labeled appropriately.
• Then select each one of them and double-click to open a new mini window.
• After that, make sure that you're on the Driver tab; if you are not, navigate to it, then click the Roll Back Driver button to switch back to the previous version of the NVIDIA drivers.
### Option 4 – Try fixing your network
The next thing you can do to fix the problem with NVIDIA GameStream is to fix the glitches in your network. Make sure that you connect both of your devices to a 5 GHz Wi-Fi network, and ensure that the Wi-Fi connection is strong enough for both devices so that latency goes down. Once you've covered all of these things with your network, restart your computer and change the Wi-Fi channel both devices are connected to. This should resolve the problem.
### Option 5 – Try to update the NVIDIA SHIELD device
You might also want to update the NVIDIA SHIELD device. There are times when an outdated NVIDIA SHIELD device can result in several issues, such as this problem with NVIDIA GameStream. Thus, you need to update the NVIDIA SHIELD and check whether it fixes the problem.
Microsoft sent an email to users on the Dev build channel saying that the company intends to push some builds that don't represent what consumers will receive with Windows 11 when it officially releases. In other words, these are going to be some rather buggy builds that won't be too enjoyable to use. The company recommends users switch from the Dev to the Beta channel if they aren't prepared to deal with the instability. We'll have to wait and see just how buggy these builds are, but if Microsoft is actually sending out a warning about them, it is very likely that the builds will be plagued with issues and maybe even stability problems.
### Back to Windows 10
Since we can expect some buggy builds of Windows 11, if you prefer a stable system over new features, the best decision may be to switch back to Windows 10 until the new OS hits its official release.
### Switching from the Dev build channel to the Beta channel
Another solution, if you do not want to deal with too many issues, is to switch from the Dev build channel to Beta, where things will be more stable. Follow the guide below to quickly switch to the Beta channel. The following instructions apply only to Windows 11 installations that are enrolled in the Windows Insider program, not to a clean installation of the OS.
1. Press ⊞ WINDOWS + I to open Settings
2. Inside Settings, click on Windows Update
3. In Windows Update, click on Windows Insider Program
4. Inside, click on Choose your Insider settings
5. Click the button next to Beta Channel to select it (you can switch back to the Dev channel here if you change your mind)
The setting is saved automatically, and from now on you will only receive Beta channel updates. If you want to confirm the selected channel afterwards, a small script sketch follows below.
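As an optional follow-up, here is a minimal sketch of how the selected channel could be read programmatically. This is not part of the original guide: the registry path `SOFTWARE\Microsoft\WindowsSelfHost\UI\Selection` and the `UIBranch` value name are assumptions about where the Insider selection is commonly stored, so treat the output as a hint and verify it against the Settings app.

```python
# Minimal sketch (assumption-laden): read the currently selected Windows
# Insider channel from the registry. The key path and value name below are
# assumed, not taken from the guide above; they may differ between builds.
import winreg

ASSUMED_KEY = r"SOFTWARE\Microsoft\WindowsSelfHost\UI\Selection"

def get_insider_channel():
    """Return the channel name (e.g. 'Beta' or 'Dev'), or None if not found."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ASSUMED_KEY) as key:
            value, _value_type = winreg.QueryValueEx(key, "UIBranch")
            return value
    except OSError:
        # Key or value missing: the machine is probably not enrolled,
        # or the setting lives elsewhere on this build.
        return None

if __name__ == "__main__":
    channel = get_insider_channel()
    print(f"Insider channel: {channel}" if channel else "Insider channel not found")
```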
2023-03-31 00:23:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19539138674736023, "perplexity": 2541.1056520465927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00668.warc.gz"}
https://math.stackexchange.com/questions/936235/measure-theory-proof-of-the-standardproof-given-theorem
# Measure theory: proof of the "Standardproof" given theorem.
Let $(X, \mathcal E)$ be a measurable space. Let $W \subseteq \mathcal M(\mathcal E)$ (the set of measurable $\mathcal E$-$\mathcal B(\mathbb R)$-functions) and suppose $W$ satisfies:
(i) $1_A \in W$ for all $A \in \mathcal E$.
(ii) $W$ is a subspace of the vector space of real functions defined on $X$.
(iii) If $(f_n)$ is an increasing sequence of functions in $W$ ($f_n \le f_{n+1}$) such that $\sup_{n \in \mathbb N} f_n(x) < \infty$ for all $x \in X$, then $\lim_{n \rightarrow \infty} f_n = \sup_{n \in \mathbb N} f_n \in W$.
I want to prove $W = \mathcal M(\mathcal E)$. I am given the following theorem: Let $f \in \mathcal M(\mathcal E)$. Then there exists a sequence $(s_n)$ of simple functions such that
(i) $f(x) = \lim_{n \rightarrow \infty} s_n(x)$ for all $x \in X$.
(ii) $|s_n(x)| \le |f(x)|$ for all $n \in \mathbb N$ and $x \in X$.
(iii) If $f \ge 0$, then $(s_n)$ can be chosen such that $0 \le s_n \le s_{n+1}$.
I see that $s_n \in W$ for all $n \in \mathbb N$ by considering the standard representation of $s_n$. However, I cannot make the sequence $(s_n)$ increasing, even by passing to a subsequence of $(s_n)$. How should I proceed?
• Point (iii) says that $(s_n)_n$ is increasing. You just have to reduce to the case $f \geq 0$ (how?). – PhoemueX Sep 18 '14 at 8:19
• I have considered this, but I have no idea. I have already proved the claim in the case $f \in \mathcal M(\mathcal E)^+$. But in general $f$ might be negative for certain $x$, and then $f \ge 0$ is not true. – Shuzheng Sep 18 '14 at 8:23
Simply decompose $f \in \mathcal{M}(\mathcal{E})$ as $f = f_+ - f_-$ with $f_+ = \max \{0, f\}$ and $f_- = - \min\{0, f\}$. Then $f_+, f_- \in \mathcal{M}(\mathcal{E})^+$ (why?). By the part of the proof you have already done, $f_+, f_- \in W$.
• Ahh, yes – and then I use the fact that $W$ is a vector space (property (ii)) to conclude that $f = f_+ - f_- \in W$. – Shuzheng Sep 18 '14 at 8:31
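For readability, here is a compact LaTeX sketch that assembles the steps discussed above into the full argument; it only restates what the question and the answer already contain, with the vector-space step for $W$ made explicit.

```latex
% Sketch of the full argument (restating the discussion above).
% Step 1 (nonnegative case): for measurable f >= 0, take simple functions
% 0 <= s_n <= s_{n+1} with s_n -> f pointwise. Each s_n is a finite linear
% combination of indicators 1_A, A in E, so s_n in W by (i) and (ii);
% since sup_n s_n(x) = f(x) < infinity for every x, property (iii) gives f in W.
% Step 2 (general case): split into positive and negative parts.
\[
  f_+ = \max\{f,0\}, \qquad f_- = -\min\{f,0\}, \qquad f = f_+ - f_- ,
\]
\[
  f_+,\, f_- \in \mathcal{M}(\mathcal{E})^+
  \;\Longrightarrow\; f_+,\, f_- \in W
  \;\Longrightarrow\; f = f_+ - f_- \in W \quad \text{by (ii).}
\]
% Hence M(E) is contained in W; the reverse inclusion W \subseteq M(E) holds
% by assumption, so W = M(E).
```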
2020-04-09 04:51:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871968030929565, "perplexity": 89.02863254218337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00120.warc.gz"}
https://groupprops.subwiki.org/wiki/Left-inner_subgroup_property
# Left-inner subgroup property
BEWARE! This term is nonstandard and is being used locally within the wiki. This article defines a subgroup metaproperty: a property that can be evaluated to true/false for any subgroup property.
## Definition
### Symbol-free definition
A subgroup property is said to be left-inner if, in the function restriction formalism, it has a restriction formal expression whose left side is the property of being an inner automorphism.
### Definition with symbols
A subgroup property $p$ is said to be left-inner if, in the function restriction formalism, there exists a restriction formal expression for $p$ of the form: Inner automorphism $\to b$, where $b$ is any function property. In other words, a subgroup $H$ satisfies property $p$ in $G$ if and only if every inner automorphism of $G$ restricts to a function on the subgroup satisfying property $b$.
### In terms of the left expressibility operator
The subgroup metaproperty of being left-inner is obtained by applying the left expressibility operator to the function metaproperty of being exactly equal to the inner automorphism property.
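As an illustrative example (added here rather than taken from the page itself, though it is consistent with how the function restriction formalism is used above), normality is a left-inner property:

```latex
% Illustrative example: "normal subgroup" is left-inner.
% H is normal in G precisely when every inner automorphism c_g of G
% restricts to a function from H to H:
\[
  H \trianglelefteq G \iff \forall g \in G:\ c_g(H) \subseteq H,
  \qquad c_g(x) = g x g^{-1} .
\]
% So "normal subgroup" admits the restriction formal expression
%     inner automorphism -> function,
% whose left side is "inner automorphism"; hence the property is left-inner.
```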
2021-04-10 22:51:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8595435619354248, "perplexity": 1946.3282355105734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00015.warc.gz"}
http://clay6.com/qa/40913/if-a-b-c-and-d-find-vi-c-cap-d
If A = {x : x is a natural number}, B = {x : x is an even natural number}, C = {x : x is an odd natural number} and D = {x : x is a prime number}, find (vi) $C\cap D$.
$\begin{array}{ll}(A)\;\{1,2,3,4,5,\ldots\}&(B)\;\{2,3,5,7,\ldots\}\\(C)\;\{3,5,7,11,17,\ldots\}&(D)\;\{1,3,5,7,\ldots\}\end{array}$
$C=\{1,3,5,7,9,11,\ldots\}$
$D=\{2,3,5,7,11,13,17,19,\ldots\}$
The elements common to both C and D are the odd prime numbers.
$C\cap D=\{1,3,5,7,9,11,\ldots\}\cap \{2,3,5,7,11,13,17,19,\ldots\}=\{3,5,7,11,13,17,19,\ldots\}$
Hence (C) is the correct answer.
answered May 27, 2014
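The intersection can also be double-checked mechanically; the short script below is only an illustrative sanity check (the bound of 30 is an arbitrary choice) and is not part of the original solution.

```python
# Minimal check of C ∩ D: odd natural numbers that are also prime.
# The upper bound 30 is arbitrary and only serves the illustration.

def is_prime(n: int) -> bool:
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

C = {n for n in range(1, 31) if n % 2 == 1}   # odd natural numbers
D = {n for n in range(1, 31) if is_prime(n)}  # prime numbers

print(sorted(C & D))  # [3, 5, 7, 11, 13, 17, 19, 23, 29] -- the odd primes
```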
2017-01-23 00:29:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4193344712257385, "perplexity": 1105.652811029384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00280-ip-10-171-10-70.ec2.internal.warc.gz"}