https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-1-test-page-101/2
## Intermediate Algebra for College Students (7th Edition) $170$ Substitute $10$ for $x$ to obtain: $=8 + 2(10-7)^4$ Simplify within the parentheses to obtain: $=8+2(3)^4$ Apply the exponent. An exponent of 4 means $3$ is multiplied by itself four times: $=8+2(3 \cdot 3 \cdot 3 \cdot 3) \\=8+2(81) \\=8+162 \\=170$
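The arithmetic above can be checked mechanically. A minimal Python sketch (the expression and the substituted value come straight from the exercise):

```python
# Evaluate 8 + 2(x - 7)^4 at x = 10, following the order of operations:
# parentheses first, then the exponent, then multiplication, then addition.
x = 10
inner = x - 7            # 10 - 7 = 3
power = inner ** 4       # 3^4 = 81
result = 8 + 2 * power   # 8 + 2*81 = 8 + 162 = 170
print(result)            # 170
```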
2019-12-12 03:01:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.764951765537262, "perplexity": 1040.1276257482386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540536855.78/warc/CC-MAIN-20191212023648-20191212051648-00092.warc.gz"}
https://chemistry.stackexchange.com/questions/172237/the-extinction-coefficients-of-oxymyoglobin-and-ferryl-myoglobin
# The extinction coefficients of oxymyoglobin and ferryl myoglobin I want to know what the extinction coefficients of oxymyoglobin and ferryl myoglobin are. I was able to find the one for oxymyoglobin in this pretty old article (it says that it equals $$121 \text{ mM}^{-1}\text{cm}^{-1}$$), but I couldn't locate any other reference supporting this claim. Furthermore, I couldn't find any reference in the case of ferryl myoglobin. I am also curious whether, in general, there are tables containing the extinction coefficients of various compounds (at different wavelengths). • Which organism are you interested in? They will be very similar among mammals, but not identical. Also, for which wavelength do you want to know it? – Karsten Mar 20 at 21:26 • @Karsten I am interested in horse heart Mb and I want to know it for the absorption maximum, so somewhere around 410-425 nm for the Soret band. Mar 20 at 21:33 • Which pH are you interested in? – Karsten Mar 20 at 22:16 • @Karsten I am interested in pH 9. Mar 20 at 22:44
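For context, an extinction coefficient like the quoted one is used through the Beer–Lambert law $A = \varepsilon c l$. A small sketch of that calculation; the coefficient is the value quoted in the question, while the absorbance reading and path length are made-up illustration numbers:

```python
# Beer-Lambert law: A = epsilon * c * l
# epsilon: extinction coefficient (mM^-1 cm^-1), c: concentration (mM), l: path length (cm)
def concentration_mM(absorbance, epsilon_per_mM_cm, path_cm=1.0):
    """Concentration (mM) recovered from a measured absorbance via A = eps * c * l."""
    return absorbance / (epsilon_per_mM_cm * path_cm)

# Hypothetical reading: A = 0.605 at the Soret band in a 1 cm cuvette,
# using the oxymyoglobin value quoted in the question (121 mM^-1 cm^-1).
print(round(concentration_mM(0.605, 121.0), 6))  # 0.005 (mM), i.e. 5 uM
```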
2023-04-02 10:55:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49779826402664185, "perplexity": 1251.114406560702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00201.warc.gz"}
https://en.m.wikibooks.org/wiki/Calculus/Parametric_Integration
# Calculus/Parametric Integration ## Introduction Because most parametric equations are given in explicit form, they can be integrated like many other equations. Integration has a variety of applications with respect to parametric equations, especially in kinematics and vector calculus. ${\displaystyle x=\int x'(t)dt}$ ${\displaystyle y=\int y'(t)dt}$ So, taking a simple example: ${\displaystyle y=\int \cos(t)dt=\sin(t)}$
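The worked example can be confirmed with a computer algebra system. A quick sketch using SymPy (not part of the original page; SymPy omits the constant of integration, as the text does):

```python
import sympy as sp

t = sp.symbols('t')

# Integrate a derivative component of a parametric curve, as in the text:
# y = integral of y'(t) dt with y'(t) = cos(t)
y = sp.integrate(sp.cos(t), t)
print(y)  # sin(t)
```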
2017-01-20 14:11:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8347272872924805, "perplexity": 945.036961228125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00094-ip-10-171-10-70.ec2.internal.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/192i1/C2xD4xDic3.html
Copied to clipboard ## G = C2×D4×Dic3order 192 = 26·3 ### Direct product of C2, D4 and Dic3 Series: Derived Chief Lower central Upper central Derived series C1 — C6 — C2×D4×Dic3 Chief series C1 — C3 — C6 — C2×C6 — C2×Dic3 — C22×Dic3 — C23×Dic3 — C2×D4×Dic3 Lower central C3 — C6 — C2×D4×Dic3 Upper central C1 — C23 — C22×D4 Generators and relations for C2×D4×Dic3 G = < a,b,c,d,e | a2=b4=c2=d6=1, e2=d3, ab=ba, ac=ca, ad=da, ae=ea, cbc=b-1, bd=db, be=eb, cd=dc, ce=ec, ede-1=d-1 > Subgroups: 840 in 426 conjugacy classes, 215 normal (21 characteristic) C1, C2, C2, C2, C3, C4, C4, C22, C22, C22, C6, C6, C6, C2×C4, C2×C4, D4, C23, C23, C23, Dic3, Dic3, C12, C2×C6, C2×C6, C2×C6, C42, C22⋊C4, C4⋊C4, C22×C4, C22×C4, C2×D4, C24, C2×Dic3, C2×Dic3, C2×C12, C3×D4, C22×C6, C22×C6, C22×C6, C2×C42, C2×C22⋊C4, C2×C4⋊C4, C4×D4, C23×C4, C22×D4, C4×Dic3, C4⋊Dic3, C6.D4, C22×Dic3, C22×Dic3, C22×Dic3, C22×C12, C6×D4, C23×C6, C2×C4×D4, C2×C4×Dic3, C2×C4⋊Dic3, D4×Dic3, C2×C6.D4, C23×Dic3, D4×C2×C6, C2×D4×Dic3 Quotients: C1, C2, C4, C22, S3, C2×C4, D4, C23, Dic3, D6, C22×C4, C2×D4, C4○D4, C24, C2×Dic3, C22×S3, C4×D4, C23×C4, C22×D4, C2×C4○D4, S3×D4, D42S3, C22×Dic3, S3×C23, C2×C4×D4, D4×Dic3, C2×S3×D4, C2×D42S3, C23×Dic3, C2×D4×Dic3 Smallest permutation representation of C2×D4×Dic3 On 96 points Generators in S96 (1 40)(2 41)(3 42)(4 37)(5 38)(6 39)(7 74)(8 75)(9 76)(10 77)(11 78)(12 73)(13 33)(14 34)(15 35)(16 36)(17 31)(18 32)(19 43)(20 44)(21 45)(22 46)(23 47)(24 48)(25 54)(26 49)(27 50)(28 51)(29 52)(30 53)(55 90)(56 85)(57 86)(58 87)(59 88)(60 89)(61 80)(62 81)(63 82)(64 83)(65 84)(66 79)(67 91)(68 92)(69 93)(70 94)(71 95)(72 96) (1 51 16 48)(2 52 17 43)(3 53 18 44)(4 54 13 45)(5 49 14 46)(6 50 15 47)(7 62 93 57)(8 63 94 58)(9 64 95 59)(10 65 96 60)(11 66 91 55)(12 61 92 56)(19 41 29 31)(20 42 30 32)(21 37 25 33)(22 38 26 34)(23 39 27 35)(24 40 28 36)(67 90 78 79)(68 85 73 80)(69 86 74 81)(70 87 75 82)(71 88 76 83)(72 89 77 84) (1 33)(2 34)(3 35)(4 36)(5 31)(6 32)(7 77)(8 78)(9 73)(10 
74)(11 75)(12 76)(13 40)(14 41)(15 42)(16 37)(17 38)(18 39)(19 46)(20 47)(21 48)(22 43)(23 44)(24 45)(25 51)(26 52)(27 53)(28 54)(29 49)(30 50)(55 82)(56 83)(57 84)(58 79)(59 80)(60 81)(61 88)(62 89)(63 90)(64 85)(65 86)(66 87)(67 94)(68 95)(69 96)(70 91)(71 92)(72 93) (1 2 3 4 5 6)(7 8 9 10 11 12)(13 14 15 16 17 18)(19 20 21 22 23 24)(25 26 27 28 29 30)(31 32 33 34 35 36)(37 38 39 40 41 42)(43 44 45 46 47 48)(49 50 51 52 53 54)(55 56 57 58 59 60)(61 62 63 64 65 66)(67 68 69 70 71 72)(73 74 75 76 77 78)(79 80 81 82 83 84)(85 86 87 88 89 90)(91 92 93 94 95 96) (1 81 4 84)(2 80 5 83)(3 79 6 82)(7 21 10 24)(8 20 11 23)(9 19 12 22)(13 89 16 86)(14 88 17 85)(15 87 18 90)(25 96 28 93)(26 95 29 92)(27 94 30 91)(31 56 34 59)(32 55 35 58)(33 60 36 57)(37 65 40 62)(38 64 41 61)(39 63 42 66)(43 73 46 76)(44 78 47 75)(45 77 48 74)(49 71 52 68)(50 70 53 67)(51 69 54 72) G:=sub<Sym(96)| (1,40)(2,41)(3,42)(4,37)(5,38)(6,39)(7,74)(8,75)(9,76)(10,77)(11,78)(12,73)(13,33)(14,34)(15,35)(16,36)(17,31)(18,32)(19,43)(20,44)(21,45)(22,46)(23,47)(24,48)(25,54)(26,49)(27,50)(28,51)(29,52)(30,53)(55,90)(56,85)(57,86)(58,87)(59,88)(60,89)(61,80)(62,81)(63,82)(64,83)(65,84)(66,79)(67,91)(68,92)(69,93)(70,94)(71,95)(72,96), (1,51,16,48)(2,52,17,43)(3,53,18,44)(4,54,13,45)(5,49,14,46)(6,50,15,47)(7,62,93,57)(8,63,94,58)(9,64,95,59)(10,65,96,60)(11,66,91,55)(12,61,92,56)(19,41,29,31)(20,42,30,32)(21,37,25,33)(22,38,26,34)(23,39,27,35)(24,40,28,36)(67,90,78,79)(68,85,73,80)(69,86,74,81)(70,87,75,82)(71,88,76,83)(72,89,77,84), (1,33)(2,34)(3,35)(4,36)(5,31)(6,32)(7,77)(8,78)(9,73)(10,74)(11,75)(12,76)(13,40)(14,41)(15,42)(16,37)(17,38)(18,39)(19,46)(20,47)(21,48)(22,43)(23,44)(24,45)(25,51)(26,52)(27,53)(28,54)(29,49)(30,50)(55,82)(56,83)(57,84)(58,79)(59,80)(60,81)(61,88)(62,89)(63,90)(64,85)(65,86)(66,87)(67,94)(68,95)(69,96)(70,91)(71,92)(72,93), 
(1,2,3,4,5,6)(7,8,9,10,11,12)(13,14,15,16,17,18)(19,20,21,22,23,24)(25,26,27,28,29,30)(31,32,33,34,35,36)(37,38,39,40,41,42)(43,44,45,46,47,48)(49,50,51,52,53,54)(55,56,57,58,59,60)(61,62,63,64,65,66)(67,68,69,70,71,72)(73,74,75,76,77,78)(79,80,81,82,83,84)(85,86,87,88,89,90)(91,92,93,94,95,96), (1,81,4,84)(2,80,5,83)(3,79,6,82)(7,21,10,24)(8,20,11,23)(9,19,12,22)(13,89,16,86)(14,88,17,85)(15,87,18,90)(25,96,28,93)(26,95,29,92)(27,94,30,91)(31,56,34,59)(32,55,35,58)(33,60,36,57)(37,65,40,62)(38,64,41,61)(39,63,42,66)(43,73,46,76)(44,78,47,75)(45,77,48,74)(49,71,52,68)(50,70,53,67)(51,69,54,72)>; G:=Group( (1,40)(2,41)(3,42)(4,37)(5,38)(6,39)(7,74)(8,75)(9,76)(10,77)(11,78)(12,73)(13,33)(14,34)(15,35)(16,36)(17,31)(18,32)(19,43)(20,44)(21,45)(22,46)(23,47)(24,48)(25,54)(26,49)(27,50)(28,51)(29,52)(30,53)(55,90)(56,85)(57,86)(58,87)(59,88)(60,89)(61,80)(62,81)(63,82)(64,83)(65,84)(66,79)(67,91)(68,92)(69,93)(70,94)(71,95)(72,96), (1,51,16,48)(2,52,17,43)(3,53,18,44)(4,54,13,45)(5,49,14,46)(6,50,15,47)(7,62,93,57)(8,63,94,58)(9,64,95,59)(10,65,96,60)(11,66,91,55)(12,61,92,56)(19,41,29,31)(20,42,30,32)(21,37,25,33)(22,38,26,34)(23,39,27,35)(24,40,28,36)(67,90,78,79)(68,85,73,80)(69,86,74,81)(70,87,75,82)(71,88,76,83)(72,89,77,84), (1,33)(2,34)(3,35)(4,36)(5,31)(6,32)(7,77)(8,78)(9,73)(10,74)(11,75)(12,76)(13,40)(14,41)(15,42)(16,37)(17,38)(18,39)(19,46)(20,47)(21,48)(22,43)(23,44)(24,45)(25,51)(26,52)(27,53)(28,54)(29,49)(30,50)(55,82)(56,83)(57,84)(58,79)(59,80)(60,81)(61,88)(62,89)(63,90)(64,85)(65,86)(66,87)(67,94)(68,95)(69,96)(70,91)(71,92)(72,93), (1,2,3,4,5,6)(7,8,9,10,11,12)(13,14,15,16,17,18)(19,20,21,22,23,24)(25,26,27,28,29,30)(31,32,33,34,35,36)(37,38,39,40,41,42)(43,44,45,46,47,48)(49,50,51,52,53,54)(55,56,57,58,59,60)(61,62,63,64,65,66)(67,68,69,70,71,72)(73,74,75,76,77,78)(79,80,81,82,83,84)(85,86,87,88,89,90)(91,92,93,94,95,96), 
(1,81,4,84)(2,80,5,83)(3,79,6,82)(7,21,10,24)(8,20,11,23)(9,19,12,22)(13,89,16,86)(14,88,17,85)(15,87,18,90)(25,96,28,93)(26,95,29,92)(27,94,30,91)(31,56,34,59)(32,55,35,58)(33,60,36,57)(37,65,40,62)(38,64,41,61)(39,63,42,66)(43,73,46,76)(44,78,47,75)(45,77,48,74)(49,71,52,68)(50,70,53,67)(51,69,54,72) ); G=PermutationGroup([[(1,40),(2,41),(3,42),(4,37),(5,38),(6,39),(7,74),(8,75),(9,76),(10,77),(11,78),(12,73),(13,33),(14,34),(15,35),(16,36),(17,31),(18,32),(19,43),(20,44),(21,45),(22,46),(23,47),(24,48),(25,54),(26,49),(27,50),(28,51),(29,52),(30,53),(55,90),(56,85),(57,86),(58,87),(59,88),(60,89),(61,80),(62,81),(63,82),(64,83),(65,84),(66,79),(67,91),(68,92),(69,93),(70,94),(71,95),(72,96)], [(1,51,16,48),(2,52,17,43),(3,53,18,44),(4,54,13,45),(5,49,14,46),(6,50,15,47),(7,62,93,57),(8,63,94,58),(9,64,95,59),(10,65,96,60),(11,66,91,55),(12,61,92,56),(19,41,29,31),(20,42,30,32),(21,37,25,33),(22,38,26,34),(23,39,27,35),(24,40,28,36),(67,90,78,79),(68,85,73,80),(69,86,74,81),(70,87,75,82),(71,88,76,83),(72,89,77,84)], [(1,33),(2,34),(3,35),(4,36),(5,31),(6,32),(7,77),(8,78),(9,73),(10,74),(11,75),(12,76),(13,40),(14,41),(15,42),(16,37),(17,38),(18,39),(19,46),(20,47),(21,48),(22,43),(23,44),(24,45),(25,51),(26,52),(27,53),(28,54),(29,49),(30,50),(55,82),(56,83),(57,84),(58,79),(59,80),(60,81),(61,88),(62,89),(63,90),(64,85),(65,86),(66,87),(67,94),(68,95),(69,96),(70,91),(71,92),(72,93)], [(1,2,3,4,5,6),(7,8,9,10,11,12),(13,14,15,16,17,18),(19,20,21,22,23,24),(25,26,27,28,29,30),(31,32,33,34,35,36),(37,38,39,40,41,42),(43,44,45,46,47,48),(49,50,51,52,53,54),(55,56,57,58,59,60),(61,62,63,64,65,66),(67,68,69,70,71,72),(73,74,75,76,77,78),(79,80,81,82,83,84),(85,86,87,88,89,90),(91,92,93,94,95,96)], 
[(1,81,4,84),(2,80,5,83),(3,79,6,82),(7,21,10,24),(8,20,11,23),(9,19,12,22),(13,89,16,86),(14,88,17,85),(15,87,18,90),(25,96,28,93),(26,95,29,92),(27,94,30,91),(31,56,34,59),(32,55,35,58),(33,60,36,57),(37,65,40,62),(38,64,41,61),(39,63,42,66),(43,73,46,76),(44,78,47,75),(45,77,48,74),(49,71,52,68),(50,70,53,67),(51,69,54,72)]]) 60 conjugacy classes class 1 2A ··· 2G 2H ··· 2O 3 4A 4B 4C 4D 4E ··· 4L 4M ··· 4X 6A ··· 6G 6H ··· 6O 12A 12B 12C 12D order 1 2 ··· 2 2 ··· 2 3 4 4 4 4 4 ··· 4 4 ··· 4 6 ··· 6 6 ··· 6 12 12 12 12 size 1 1 ··· 1 2 ··· 2 2 2 2 2 2 3 ··· 3 6 ··· 6 2 ··· 2 4 ··· 4 4 4 4 4 60 irreducible representations dim 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 4 4 type + + + + + + + + + + - + + + - image C1 C2 C2 C2 C2 C2 C2 C4 S3 D4 D6 Dic3 D6 D6 C4○D4 S3×D4 D4⋊2S3 kernel C2×D4×Dic3 C2×C4×Dic3 C2×C4⋊Dic3 D4×Dic3 C2×C6.D4 C23×Dic3 D4×C2×C6 C6×D4 C22×D4 C2×Dic3 C22×C4 C2×D4 C2×D4 C24 C2×C6 C22 C22 # reps 1 1 1 8 2 2 1 16 1 4 1 8 4 2 4 2 2 Matrix representation of C2×D4×Dic3 in GL5(𝔽13) 12 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 , 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 12 0 , 1 0 0 0 0 0 12 0 0 0 0 0 12 0 0 0 0 0 1 0 0 0 0 0 12 , 1 0 0 0 0 0 1 12 0 0 0 1 0 0 0 0 0 0 12 0 0 0 0 0 12 , 12 0 0 0 0 0 8 5 0 0 0 0 5 0 0 0 0 0 8 0 0 0 0 0 8 G:=sub<GL(5,GF(13))| [12,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1],[1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,12,0,0,0,1,0],[1,0,0,0,0,0,12,0,0,0,0,0,12,0,0,0,0,0,1,0,0,0,0,0,12],[1,0,0,0,0,0,1,1,0,0,0,12,0,0,0,0,0,0,12,0,0,0,0,0,12],[12,0,0,0,0,0,8,0,0,0,0,5,5,0,0,0,0,0,8,0,0,0,0,0,8] >; C2×D4×Dic3 in GAP, Magma, Sage, TeX C_2\times D_4\times {\rm Dic}_3 % in TeX G:=Group("C2xD4xDic3"); // GroupNames label G:=SmallGroup(192,1354); // by ID G=gap.SmallGroup(192,1354); # by ID G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-3,112,297,6278]); // Polycyclic G:=Group<a,b,c,d,e|a^2=b^4=c^2=d^6=1,e^2=d^3,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,c*b*c=b^-1,b*d=d*b,b*e=e*b,c*d=d*c,c*e=e*c,e*d*e^-1=d^-1>; // generators/relations ׿ × 𝔽
2021-09-19 18:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999790191650391, "perplexity": 7815.155251755787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00677.warc.gz"}
https://math.stackexchange.com/questions/3201092/a-binomial-bound-for-the-cdf-of-the-hypergeometric-distribution
A Binomial bound for the CDF of the Hypergeometric distribution? • Let $$H \sim Hyp(N,K,n)$$, where $$Hyp$$ denotes the hypergeometric distribution, $$N$$ the number of objects, $$K$$ the number of "good" objects, and $$n$$ the number of draws. • I am interested in a particular bound for $$\mathbb{P}(H \leq x)$$. • Let $$B_x \sim Bi\left(n, \frac{K-x}{N-x}\right)$$, where $$Bi$$ denotes the Binomial distribution. Intuitively, if no more than $$x$$ of the $$n$$ draws associated with the Hypergeometric distribution are successful (i.e., result in drawing a "good" object), the probability of a success in each of these draws never falls below $$\frac{K-x}{N-x}$$. Therefore, the following inequality might seem like a reasonable conjecture: (1)$$\qquad$$ $$\mathbb{P}(H\leq x) \leq \mathbb{P}(B_x \leq x)$$, $$\qquad$$ for all $$x \leq K$$. I've looked online a little bit and couldn't find any reference to (1). Maybe this inequality is easy to prove or disprove, but I haven't been able to do either. Besides the intuitive "argument" above, here is some (arguably very limited) suggestive evidence that (1) might be true. Example 1 Suppose that $$N =4$$, $$K=2$$, and $$n = 2$$. Then, • $$\mathbb{P}(H \leq 0) = (1/2)*(1/3)= 1/6$$ • $$\mathbb{P}(B_0 \leq 0) = (1/2)*(1/2)= 1/4$$ Also, • $$\mathbb{P}(H \leq 1) = (1/2)*(1/3) + (1/2)*(2/3) + (1/2)*(2/3)= 5/6$$ • $$\mathbb{P}(B_1 \leq 1) = (2/3)*(2/3) + (1/3)*(2/3) + (2/3)*(1/3)= 8/9$$ Example 2 In Mathematica, plotting the difference between the two CDFs for a couple of values of the parameters and $$x$$ DiscretePlot[ Table[CDF[HypergeometricDistribution[n, 50, 100], k], {n, {10, 20, 50}}] - Table[CDF[BinomialDistribution[n, (50 - k)/(100 - k)], k], {n, {10, 20, 50}}] // Evaluate, {k, 0, 32}, PlotRange -> All] yields a plot (figure omitted). Some things I found difficult when trying to prove the inequality: 1. As far as I know, there is no really convenient formula for the CDFs of Binomial and (even less so) of Hypergeometric distributions. 2. 
If the inequality holds, it certainly does not hold "pointwise", in the sense that we don't have $$\mathbb{P}(H = y) \leq \mathbb{P}(B_x = y)$$ for all $$y \in \{0,\dots, x\}$$. So if the inequality holds, it really has to do with the whole sum of the PMFs from $$0$$ to $$x$$, which I find hard to play with. My questions: 1. Can someone provide a counterexample or a proof of (1)? 2. If (1) is true, is there a good reference for it that I could cite?
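Example 1 can also be reproduced numerically. A sketch using SciPy; note that SciPy's parameterization is `hypergeom(M, n, N)` with `M` the total number of objects, `n` the number of "good" objects, and `N` the number of draws:

```python
from scipy.stats import hypergeom, binom

# Example 1 from the question: N = 4 objects, K = 2 good, n = 2 draws.
N, K, n = 4, 2, 2
H = hypergeom(N, K, n)  # scipy order: (total, good, draws)

for x in (0, 1):
    Bx = binom(n, (K - x) / (N - x))  # B_x ~ Bi(n, (K-x)/(N-x))
    lhs, rhs = H.cdf(x), Bx.cdf(x)
    print(x, lhs, rhs, lhs <= rhs)  # conjectured inequality (1) holds here
```

The printed values match the hand computation in Example 1: $1/6 \le 1/4$ at $x=0$ and $5/6 \le 8/9$ at $x=1$.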
2019-06-19 03:04:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9114797711372375, "perplexity": 415.29529639450453}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998882.88/warc/CC-MAIN-20190619023613-20190619045613-00436.warc.gz"}
https://stats.stackexchange.com/questions/151947/is-it-possible-to-calculate-q1-median-q3-stdev-from-already-aggregated-data
# Is it possible to calculate Q1, Median, Q3, StDev from already aggregated data? We have data that will get aggregated per hour into the following values • Q1 • Median • Mean • Q3 • Standard Deviation • Max • Min • Count of Values So the data will look more or less like this in the end. 00:00-01:00 01:00-02:00 02:00-03:00 03:00-04:00 ... -------------------------------------------------------------------------------- Q1 68,72 69,64 64,31 64,40 ... Median 118,72 124,42 115,54 118,11 ... Mean 119,17 119,97 117,23 117,60 ... Q3 169,64 171,72 170,63 168,72 ... StDev 59,30 59,15 61,23 59,62 ... Max 219,70 219,44 219,76 219,71 ... Min 15,02 15,07 15,05 15,05 ... Count 1000,00 1000,00 1000,00 1000,00 ... Now we want to aggregate the same values for a whole day (24h) without using the original data if possible (because in our real scenario it would require a significantly longer time to aggregate from those). For most of them it's pretty straightforward: the combined MIN is the minimum of the hourly MINs, the combined MAX is the maximum of the hourly MAXes, the combined COUNT is the sum of the counts, and the combined MEAN is the count-weighted average of the hourly means. But the tricky part is Q1, Median, Q3 and StDev. From what I understand it's not possible to simply calculate the (weighted) average value of the 24 separate values. But is there a method to achieve this from already aggregated values (for example by storing some additional data)? Is the difference from such a huge dataset even significant? Or will the data always be distorted except for calculating it from the whole dataset? The quantiles are trickier. Consider the Q1 values of two samples: they bound the Q1 of the combined sample. If $Q1_1 > Q1_2$, it is easy to see that the aggregated value satisfies $Q1_2 \le Q1 \le Q1_1$. That is all you can say about the quantiles, i.e. in your case $\min_i(Q1_i) \le Q1 \le \max_i(Q1_i)$.
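Unlike the quantiles, the standard deviation *can* be combined exactly from the hourly aggregates, provided each hour stores (count, mean, stdev): the total sum of squared deviations decomposes into within-group and between-group parts. A sketch of that pooling, checked against the raw data (the sample data here is made up for illustration):

```python
import math
import statistics

def pool(groups):
    """Combine (count, mean, sample_stdev) tuples into overall (count, mean, stdev).

    Uses: total SS = sum((n_i - 1) * s_i^2) + sum(n_i * (m_i - m)^2),
    where m is the count-weighted overall mean.
    """
    n_total = sum(n for n, _, _ in groups)
    mean = sum(n * m for n, m, _ in groups) / n_total
    ss = sum((n - 1) * s ** 2 + n * (m - mean) ** 2 for n, m, s in groups)
    return n_total, mean, math.sqrt(ss / (n_total - 1))

# Check against computing directly from the raw (illustrative) data:
a, b = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0, 40.0]
g = [(len(a), statistics.mean(a), statistics.stdev(a)),
     (len(b), statistics.mean(b), statistics.stdev(b))]
n, m, s = pool(g)
print(abs(m - statistics.mean(a + b)) < 1e-12,
      abs(s - statistics.stdev(a + b)) < 1e-9)  # True True
```

So storing count, mean, and stdev per hour is enough to recover the daily stdev exactly; only Q1, median, and Q3 need extra information (e.g. histograms or sketches).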
2019-05-25 03:27:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5085484385490417, "perplexity": 1620.656865170382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257847.56/warc/CC-MAIN-20190525024710-20190525050710-00190.warc.gz"}
https://www.physicsforums.com/threads/air-bubble-placed-in-water.559412/
Air bubble placed in water 1. Dec 12, 2011 rktpro I was tackling a problem that came to my mind: is an air bubble placed in water a converging or a diverging lens? What I have concluded is that since the bubble is a sphere, we can assume it to be made of two similar plano-convex lenses. Both will have the same focal length, but applying the sign convention, one would be positive and the other negative. Thus the effective focal length would be zero, and hence it wouldn't act like a lens but like a glass slab. What do you say? 2. Dec 12, 2011 JHamm Light will be deflected as it enters the bubble; will it necessarily contact the other edge of the sphere at the same angle as it contacted the first edge? 3. Dec 12, 2011 nasu This is not true. The two halves are both convergent (or divergent), so their powers have the same sign. A glass ball in air does not behave like a glass slab. To find out the character of the air-bubble (or half-bubble) lens you just need to trace one or two rays. 4. Dec 12, 2011 rktpro Probably not. 5. Dec 12, 2011 rktpro How can both be of the same focal length? If we apply the convention, one would be positive and one negative, because the focal points would be in two different directions. If they are both convergent, would that mean the image formed is outside the bubble? Please illustrate with a diagram, if possible.
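The sign question in the thread can be sketched with the lensmaker's equation, $\frac{1}{f} = \left(\frac{n_\text{lens}}{n_\text{medium}} - 1\right)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)$: for an air sphere in water the relative index is less than 1, so the sign of $f$ flips compared to a glass ball in air. This is a thin-lens approximation that ignores the sphere's thickness; the indices and radius below are illustrative values, not from the thread:

```python
def focal_length(n_lens, n_medium, r1, r2):
    """Thin-lens lensmaker's equation; r1, r2 are signed per the usual convention
    (positive when the center of curvature lies on the outgoing side)."""
    power = (n_lens / n_medium - 1.0) * (1.0 / r1 - 1.0 / r2)
    return 1.0 / power

R = 0.01  # 1 cm radius sphere: front surface +R, back surface -R
glass_ball_in_air = focal_length(1.5, 1.0, R, -R)     # f > 0: converging
air_bubble_in_water = focal_length(1.0, 1.33, R, -R)  # f < 0: diverging
print(glass_ball_in_air > 0, air_bubble_in_water < 0)  # True True
```

This supports nasu's point: both halves have the same sign of power, and the air bubble in water acts as a diverging lens, not a slab.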
2018-10-21 05:33:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8470900058746338, "perplexity": 547.1389487162015}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513760.4/warc/CC-MAIN-20181021052235-20181021073735-00086.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-gcf-of-24-and-42
# How do you find the GCF of 24 and 42? Jan 9, 2017 $6$ #### Explanation: The Greatest Common Factor (GCF) is the largest number that divides both given numbers. It is easily found by writing down the prime factors of the two numbers and multiplying the factors they have in common. In the given example, the prime factors of the two numbers are as follows: $24 : 2 , 2 , 2 , 3$ $42 : 2 , 3 , 7$ Since $2$ and $3$ are common to both factorizations, the GCF is: $2 \times 3 = 6$
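The same factor-based procedure can be written out programmatically. A sketch in Python, compared against the standard library's `math.gcd` (which uses Euclid's algorithm rather than factorization):

```python
from collections import Counter
import math

def gcf_by_factors(a, b):
    """GCF via the prime-factorization method described above."""
    def prime_factors(n):
        factors, d = Counter(), 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    # Counter intersection keeps each shared factor with its minimum multiplicity.
    common = prime_factors(a) & prime_factors(b)
    return math.prod(f ** k for f, k in common.items())

print(gcf_by_factors(24, 42), math.gcd(24, 42))  # 6 6
```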
2018-03-22 04:09:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.774756669998169, "perplexity": 2384.792304082466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647768.45/warc/CC-MAIN-20180322034041-20180322054041-00092.warc.gz"}
https://zbmath.org/?q=an:1049.03003
# zbMATH — the first resource for mathematics Nonclassical mereology and its application to sets. (English) Zbl 1049.03003 The paper consists of two parts. The author himself qualifies the first part as “the case against classical mereology”; it contains some objections to the latter and some motivation for what is called there Heyting mereology. The second part shows how this alternative system of mereology provides us with “all the sets we need”. The systems of mereology discussed in the paper are based on first-order logic with identity and the definite description operator; the non-logical primitives are the proper part relation and a name for the “fictitious null thing”. (However, the null thing is admitted in the scope of quantified variables and thus, formally, is not fictitious.) Therefore, classical mereology is reduced in the paper to the elementary part of Leśniewski’s original system. The author argues that the axiom of classical mereology which declares uniqueness of mereological classes (called fusions in the paper) is a careless extrapolation from the finite to the infinite case. However, he does not reject the principle that things have a unique sum. The position of the author can be explained more precisely as follows. A model of a mereology is a complete lattice with zero deleted. In such a lattice, the fusion of a subset $$F$$ is an upper bound $$z$$ of $$F$$ such that any $$x$$ disjoint from every element of $$F$$ is disjoint from $$z$$. The sum of $$F$$ is a fusion which happens to be the join of $$F$$. In classical mereology, $$F$$ has to have a unique fusion; hence the notions of fusion, join and sum coincide there. In the author’s nonclassical mereology, uniqueness of fusions is not required, but, in order to obtain a mathematically well-behaved mereology, it is assumed that meet distributes over arbitrary joins (the latter condition is known to be equivalent to the lattice being Heyting). 
By the way, this assumption ensures that the join of $$F$$ is always a fusion. In contrast to classical mereology, the sum of the proper parts of a thing $$x$$ in Heyting mereology may itself be a proper part, which is then the unique maximal proper part (mpp) of $$x$$. If this is the case, $$x$$ is said to be an atom. The author uses the term ‘gunk’ for a thing that has no atomic parts, and discusses the problem of measuring gunks. A sufficient motivation is found here for the hypothesis that a model of mereology should be a Heyting lattice. Heyting mereology provides tools for treating sets or, more accurately, pseudosets. Let $$x A y$$ mean that $$x$$ is the mpp of $$y$$. The pseudomembership relation $$E$$ is defined by $$x E y$$ iff $$x A y$$ and there is no $$u$$ such that $$x A u$$ and $$u A y$$. A pseudoset is then any $$y$$ such that $$x E y$$ and $$y E z$$ for some $$x$$ and $$z$$. The author shows, briefly, in what sense his mereology has the resources to perform the work done in a usual set theory by simply well-founded sets ($$z$$ is simply well-founded if the membership relation $$\in$$ restricted to $$z$$ is a tree), pure sets or sets with urelements. [Reviewer's remark: The formal definition of a fusion, HMD11, evidently contains a misprint. The second argument of the conjunction there should read $$\forall w(\forall z(Fz \supset \sim w \circ z) \supset \sim w \circ x)$$.] ##### MSC: 03A05 Philosophical and critical aspects of logic and foundations 03E70 Nonclassical and second-order set theories Full Text: ##### References: [1] Divers, J., Possible Worlds , Routledge, London, 2002. [2] Forrest, P., “How innocent is mereology?,” Analysis , vol. 56 (1996), pp. 127–31. · Zbl 0943.03578 [3] Forrest, P., ”Grit or gunk: The implications of the Banach-Tarski paradox”, forthcoming in The Monist , vol. 87 (2004). [4] Johnstone, P. T., Stone Spaces , vol. 3 of Cambridge Studies in Advanced Mathematics , Cambridge University Press, Cambridge, 1982. 
· Zbl 0499.54001 [5] Lewis, D., Parts of Classes , Basil Blackwell, Oxford, 1991. · Zbl 0900.03061 [6] Simons, P. M., ”On understanding Leśniewski”, History and Philosophy of Logic , vol. 3 (1982), pp. 165–91. · Zbl 0516.03003 [7] Simons, P. M., Parts: A Study of Ontology , Clarendon Press, Oxford, 1987. [8] Tarski, A., ”Foundations of the geometry of solids”, pp. 24–29 in Logic, Semantics, Metamathematics. Papers from 1923 to 1938 , translated by J. H. Woodger, Clarendon Press, Oxford, 1956. · Zbl 0075.00702 [9] van Fraassen, B., ”Singular terms, truth-value gaps and free logic”, Journal of Philosophy , vol. 63 (1966), pp. 481–95. This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-09-16 21:00:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7994437217712402, "perplexity": 846.7805768947688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053759.24/warc/CC-MAIN-20210916204111-20210916234111-00065.warc.gz"}
https://math.stackexchange.com/questions/2595102/show-that-2x53x42x16-has-exactly-one-real-root
# Show that $2x^5+3x^4+2x+16$ has exactly one real root It's clear that this function has a zero in the interval $[-2,-1]$ by the Intermediate Value Theorem. I have graphed this function, and it's easy to see that it only has one real root. But this function is not injective, and I'm having a very hard time proving that it has exactly one real zero. I can't calculate the other 4 complex roots, and my algebra is relatively weak. I have also looked at similar questions, where the solutions use Rolle's Theorem, but I can't seem to apply it to this problem. • Not sure, but if f'(x)=10x^4+12x^3+2 is positive or negative on the interval, it can't have more than one root. – fleablood Jan 7 '18 at 2:40 • The derivative $f'$ has two real zeros, both in the interval $[-1,0]$. Although I can't prove these values are unique either. – Caleb Nastasi Jan 7 '18 at 2:47 • If you know Descartes' rule of signs, it is clear that this polynomial has either three or one negative real root(s). Now we need to show that there can not be three (negative) real roots. – Bumblebee Jan 7 '18 at 3:11 • Right. If f(x) = 0 and f(y) = 0 then f'(k) = 0 for some x < k < y. Since f'(x) = 0 means -1 < x < 0, that means f(x) can have at most one zero in x < -1. So [-2,-1] has only one root. Now you have to show there are no zeros for x > -1, which there clearly can't be, as $2x^5 +3x^4 +2x + 16 > -2 +0 -2 + 16> 0$. – fleablood Jan 7 '18 at 3:40 • @fleablood Nice observation. But how do we know that $f^{\prime}$ has only roots in $[-1,0]$ ? – Rene Schipperus Jan 7 '18 at 3:48 Any real roots must be in $\,(-\infty, -1)\,$, because: • there can be no positive roots $\,x \ge 0\,$ since all coefficients are positive; • furthermore, there can be no roots with magnitude $\,1\,$ or smaller $\,x = a \in [-1,1]\,$, since $\,f(a)=2a^5+3a^4+2a+16 \ge -2+0-2+16 = 12 \gt 0\,$. Let $\,x = -(y+1) \,$, so that $\,x \lt -1 \iff y \gt 0\,$. 
Substituting back: $$\,-2(y+1)^5+3(y+1)^4-2(y+1)+16 \;=\; -2 y^5 - 7 y^4 - 8 y^3 - 2 y^2 + 15\,$$ The latter can only have one real positive root $\,y \gt 0\,$ by Descartes' rule of signs, so there is only one real root $\,x \lt -1\,$. • Nicely done +1! – Macavity Jan 7 '18 at 6:15 $f(x) = 2x^5+3x^4+2x+16$ clearly cannot have non-negative roots, so let us investigate negative roots, considering $f(-x) = -2x^5+3x^4-2x+16$. It has three sign changes, so by Descartes' rule of signs this can have either $1$ or $3$ negative roots. Then again, $f(-x) = x^4(-2x+3)+(-2x+3) + 13 = (x^4+1)(-2x+3)+13$. As $x$ increases, the only term which can cause a sign change is $-2x+3$, which can only change signs once. Hence there is only one negative root. We have $p(x) = 2x^5+3x^4+2x+16 = (x^4+1)(2x+3)+13=0$. So we are intersecting $f(x) = x^4+1$ with $g(x) = \frac{-13}{2x+3}$. Obviously these two collide just once, in the given interval $[-2,-1]$ that you mentioned earlier, and this root is unique because when $x>0$ we always have $x^4+1 > \frac{-13}{2x+3}$ (the LHS is always positive while the RHS is always negative). When $x<0$, the LHS is strictly decreasing and the RHS is strictly increasing, and we know that for large negative $x$ we have $x^4+1 > \frac{-13}{2x+3}$ while for $x$ close to $-1.5$ we have $x^4+1 < \frac{-13}{2x+3}$. Thus, according to Bolzano's Intermediate Value Theorem, this equation has one root, which is unique because both functions are monotonic. Exploiting Descartes' rule of signs is the way to go; anyway, there is a (longer) alternative, which consists in studying the variations of $f$, starting from its second derivative, which is easily factorisable. 
$f(x)=2x^5+3x^4+2x+16$ $f'(x)=10x^4+12x^3+2=2(x+1)(5x^3+x^2-x+1)$ $f''(x)=40x^3+36x^2=4x^2(10x+9)$ So we can start drawing a variation table $\begin{array}{|c|ccccccc|}\hline x & -\infty && -\frac 9{10} && 0 && +\infty\\\hline f'' & -\infty & \nearrow & 0 & \searrow & 0 & \nearrow &+\infty\\ && -&&+&&+\\\hline\end{array}$ $\begin{array}{|c|ccccccccc|}\hline x & -\infty && -1 && -\frac 9{10} && \alpha && +\infty\\\hline f' &+\infty &\searrow& 0 &\searrow & -0.187 & \nearrow & 0 &\nearrow& +\infty\\ &&+&&-&&-&&+&\\\hline\end{array}$ Since $f'(-\frac 9{10})<0$ and $\lim\limits_{x\to+\infty} f'(x)=+\infty$, by the intermediate value theorem there is a root $f'(\alpha)=0$ in the interval $[-\frac 9{10},+\infty[$. We don't need to calculate it, we just need to know that it is a root of $g(x)=5x^3+x^2-x+1$. $\begin{array}{|c|ccccccccc|}\hline x & -\infty && \beta && -1 && \alpha && +\infty\\\hline f &-\infty &\nearrow &0 &\nearrow & 15 &\searrow & f(\alpha) &\nearrow& +\infty\\\hline\end{array}$ Since $\lim\limits_{x\to-\infty}f(x)=-\infty$ and $f(-1)>0$, by the intermediate value theorem there is a root $f(\beta)=0$ in the interval $]-\infty,-1]$. To show it is the only one we have to prove that $f(\alpha)>0$. The polynomial division of $f$ by $g$ gives $f(x)=\dfrac{50x^2+65x-3}{125}g(x)+\dfrac{2003+182x+18x^2}{125}$ Since $g(\alpha)=0$, $f(\alpha)$ has the same sign as $2003+182\alpha+18\alpha^2$; this quadratic has no real root, so it is always positive and $f(\alpha)>0$. You can further refine the interval for $\beta$ by noticing $f(-2)=-4<0$, so $\beta\in]-2,-1[$. Just thought I'd give this as an answer, $$2(-x-1)^5 +3(-x-1)^4 +2(-x-1)+16=-2x^5-7x^4-8x^3-2x^2+15$$ there is only one sign change, so by Descartes' rule of signs there is exactly one root of the original polynomial with $x<-1$. On the other hand, by fleablood's observation, if $-1<x$ then $$0<-2+0-2+16<2x^5 +3x^4 +2x+16$$ Thus there is only one real root.
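For a quick numerical cross-check of the answers above (a sketch using numpy's companion-matrix root finder — a sanity check, not a proof):

```python
import numpy as np

# Coefficients of 2x^5 + 3x^4 + 2x + 16, highest degree first.
coeffs = [2, 3, 0, 0, 2, 16]
roots = np.roots(coeffs)                 # all five complex roots
real_roots = [r.real for r in roots if abs(r.imag) < 1e-6]
print(sorted(real_roots))                # exactly one root, inside (-2, -1)
```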
2019-10-18 01:02:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287394881248474, "perplexity": 158.22085034911794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00142.warc.gz"}
https://encyclopediaofmath.org/index.php?title=Goryachev-Chaplygin_top&diff=47106&oldid=14870&printable=yes
Goryachev-Chaplygin top A rigid body rotating about a fixed point, for which: a) the principal moments of inertia $\lambda = ( \lambda _ {1} , \lambda _ {2} , \lambda _ {3} )$, with regard to the fixed point, satisfy the relation $\lambda _ {1} = \lambda _ {2} = 4 \lambda _ {3}$; b) the centre of mass belongs to the equatorial plane through the fixed point; c) the principal angular momentum is perpendicular to the direction of gravity, i.e., $\langle {m, \gamma } \rangle = 0$. First introduced by D. Goryachev [a4] in 1900, the system was later integrated by S.A. Chaplygin [a3] in terms of hyper-elliptic integrals (cf. also Hyper-elliptic integral). The system merely satisfying a) and b) is not algebraically integrable, but on the locus, defined by c), it is; namely, it has an extra invariant of homogeneous degree $3$: $$Q _ {4} = ( m ^ {2} _ {1} + m _ {2} ^ {2} ) m _ {3} + 2m _ {1} \gamma _ {3} .$$ C. Bechlivanidis and P. van Moerbeke [a1] have shown that the problem has asymptotic solutions which are meromorphic in $\sqrt t$; the system linearizes on a double cover of a hyper-elliptic Jacobian (i.e., of the Jacobi variety of a hyper-elliptic curve; cf. also Plane real algebraic curve), ramified exactly along the two hyper-elliptic curves, where the phase variables blow up; see also [a5]. An elementary algebraic mapping transforms the Goryachev–Chaplygin equations into equations closely related to the $3$-body Toda lattice. 
A Lax pair is given in [a2]: $$- { \frac{i}{2} } ( {\tilde L} _ {- 1 } h ^ {- 1 } + {\tilde L} _ {0} + {\tilde L} _ {1} h ) ^ \bullet = [ {\tilde L} _ {- 1 } h ^ {- 1 } + {\tilde L} _ {0} + {\tilde L} _ {1} h, {\tilde B} _ {0} - {\tilde L} _ {1} h ] ,$$ where ${\tilde L} _ {0}$ and ${\tilde L} _ {1}$ are given by the $( 3 \times 3 )$ right-lower corner of $L _ {0}$ and $L _ {1}$ and where $${\tilde L} _ {- 1 } = { \frac{1}{2} } \left ( \begin{array}{ccc} 0 &- y _ {3} & 0 \\ y _ {3} & 0 &y _ {1} - x _ {1} ^ {2} \\ 0 &- y _ {2} + x _ {2} ^ {2} & 0 \\ \end{array} \right ) ,$$ $${\tilde B} _ {0} = \left ( \begin{array}{ccc} { \frac{3}{2} } x _ {3} & 0 &- x _ {1} \\ 0 &{ \frac{3}{2} } x _ {3} & 0 \\ - x _ {2} & 0 &- x _ {3} \\ \end{array} \right ) .$$ See also Kowalewski top. References [a1] C. Bechlivanidis, P. van Moerbeke, "The Goryachev–Chaplygin top and the Toda lattice" Comm. Math. Phys. , 110 (1987) pp. 317–324 [a2] A.I. Bobenko, V.B. Kuznetsov, "Lax representation and new formulae for the Goryachev–Chaplygin top" J. Phys. A , 21 (1988) pp. 1999–2006 [a3] S.A. Chaplygin, "A new case of rotation of a rigid body, supported at one point" , Collected works , I , Gostekhizdat (1948) pp. 118–124 (In Russian) [a4] D. Goryachev, "On the motion of a rigid material body about a fixed point in the case " Mat. Sb. , 21 (1900) (In Russian) [a5] L. Piovan, "Cyclic coverings of Abelian varieties and the Goryachev–Chaplygin top" Math. Ann. , 294 (1992) pp. 755–764 How to Cite This Entry: Goryachev-Chaplygin top. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Goryachev-Chaplygin_top&oldid=14870 This article was adapted from an original article by P. van Moerbeke (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
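On the locus defined by c), the invariance of $Q _ {4}$ can be checked numerically. The sketch below integrates the Euler–Poisson equations with a fourth-order Runge–Kutta step; the absolute scaling of the moments of inertia and of the centre-of-mass vector is an assumption of this illustration (chosen so that the conserved quantity is exactly the $Q _ {4}$ displayed above), not taken from the references:

```python
import numpy as np

# Euler-Poisson equations for the Goryachev-Chaplygin top in the assumed
# normalization lambda = (4, 4, 1), centre of mass r = (-1/2, 0, 0):
#   m' = m x omega + gamma x r,   gamma' = gamma x omega,
# with omega = (m1/4, m2/4, m3).
LAM = np.array([4.0, 4.0, 1.0])
R = np.array([-0.5, 0.0, 0.0])

def rhs(state):
    m, g = state[:3], state[3:]
    w = m / LAM
    return np.concatenate([np.cross(m, w) + np.cross(g, R), np.cross(g, w)])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def Q4(state):
    m, g = state[:3], state[3:]
    return (m[0] ** 2 + m[1] ** 2) * m[2] + 2.0 * m[0] * g[2]

# Initial data on the locus <m, gamma> = 0.
state = np.array([1.0, 0.5, 0.0, 0.0, 0.0, 1.0])
q0 = Q4(state)
for _ in range(2000):
    state = rk4_step(state, 1e-3)
print(abs(Q4(state) - q0))   # negligible drift: Q4 is conserved on this locus
```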
2022-06-26 05:43:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9114945530891418, "perplexity": 1710.8147794011095}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037089.4/warc/CC-MAIN-20220626040948-20220626070948-00496.warc.gz"}
https://pykeen.readthedocs.io/en/stable/api/pykeen.losses.BCEWithLogitsLoss.html
# BCEWithLogitsLoss class BCEWithLogitsLoss(size_average=None, reduce=None, reduction='mean')[source] A module for the binary cross entropy loss. For label function $$l:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \{0,1\}$$ and interaction function $$f:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}$$, the binary cross entropy loss is defined as: $L(h, r, t) = -(l(h,r,t) \cdot \log(\sigma(f(h,r,t))) + (1 - l(h,r,t)) \cdot \log(1 - \sigma(f(h,r,t))))$ where $\sigma$ represents the logistic sigmoid function $\sigma(x) = \frac{1}{1 + \exp(-x)}$ Thus, the problem is framed as a binary classification problem of triples, where the interaction function's outputs are regarded as logits. Warning This loss is not well-suited for translational distance models because these models produce a negative distance as score and cannot produce positive model outputs. Initializes internal Module state, shared by both nn.Module and ScriptModule. Attributes Summary Methods Summary forward(scores, labels) Defines the computation performed at every call. Attributes Documentation synonyms: ClassVar[Optional[Set[str]]] = {'Negative Log Likelihood Loss'} Methods Documentation forward(scores, labels)[source] Defines the computation performed at every call. Should be overridden by all subclasses. Note Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them. Return type FloatTensor
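The definition above can be transcribed directly. In practice, PyTorch-based implementations (such as torch.nn.functional.binary_cross_entropy_with_logits, which a module like this one presumably delegates to) use the equivalent, numerically stable rearrangement max(x, 0) - x*l + log(1 + exp(-|x|)). A minimal numpy sketch of both forms:

```python
import numpy as np

def bce_naive(scores, labels):
    # Direct transcription of the definition above (overflows for large |scores|).
    p = 1.0 / (1.0 + np.exp(-scores))            # sigma(f(h, r, t))
    return np.mean(-(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p)))

def bce_stable(scores, labels):
    # Equivalent log-sum-exp rearrangement: max(x, 0) - x*l + log(1 + exp(-|x|)).
    return np.mean(np.maximum(scores, 0.0) - scores * labels
                   + np.log1p(np.exp(-np.abs(scores))))

scores = np.array([-2.0, 0.5, 3.0])              # model outputs treated as logits
labels = np.array([0.0, 1.0, 1.0])               # l(h, r, t)
```

Both functions agree on moderate logits, while only the stable form survives inputs like a logit of 1000 without overflow.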
2021-10-17 07:28:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7750446796417236, "perplexity": 3201.8780136593246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00021.warc.gz"}
https://www.nature.com/articles/s41467-019-13065-w
# Electrodeposition of crystalline silicon films from silicon dioxide for low-cost photovoltaic applications ## Abstract Crystalline-silicon solar cells have dominated the photovoltaics market for the past several decades. One of the long standing challenges is the large contribution of silicon wafer cost to the overall module cost. Here, we demonstrate a simple process for making high-purity solar-grade silicon films directly from silicon dioxide via a one-step electrodeposition process in molten salt for possible photovoltaic applications. High-purity silicon films can be deposited with tunable film thickness and doping type by varying the electrodeposition conditions. These electrodeposited silicon films show about 40 to 50% of photocurrent density of a commercial silicon wafer by photoelectrochemical measurements and the highest power conversion efficiency is 3.1% as a solar cell. Compared to the conventional manufacturing process for solar grade silicon wafer production, this approach greatly reduces the capital cost and energy consumption, providing a promising strategy for low-cost silicon solar cells production. ## Introduction Currently, global energy generation still depends strongly on fossil fuels1. Driven by the rapidly increasing energy demands and the negative environmental impact of fossil fuels, renewable energy has attracted tremendous attention in recent decades. Solar cells, utilizing sunlight to generate electricity directly, have been recognized as one of the most promising technologies for solving the energy issues1,2,3,4,5,6,7,8,9,10,11,12. 
Crystalline-silicon solar cells have dominated the photovoltaics market for the past several decades and are most likely to continue to be the primary technology for the photovoltaics industry in the future due to its abundant raw materials supply and non-toxicity1,6. To make silicon-based solar cells more competitive, improving the power conversion efficiency and decreasing the module costs are the most direct routes. For efficiency enhancement, innovative cell architectures involving complex processing procedures are usually required, often resulting in increased overall cost13. Reducing the silicon production cost and silicon material usage thus offers an alternative route for continued growth of crystalline-silicon photovoltaics technology in the future. One of the largest contributions to overall module manufacturing cost still comes from silicon wafer production, involving complex processing and intensive energy consumption due to the high temperature requirement for silicon crystallization process13. To address this problem, direct production of silicon at low temperature in liquid/molten salts has been proposed and intensively investigated since 1980’s14,15,16,17,18,19,20,21,22,23,24,25,26. The major challenge for the molten salt technology, of which fluoride-based molten salt is dominant, is impurity control, due to the nature of fluoride-based molten salts and other multicomponent eutectic molten salts systems. Until now there has been no demonstration of a photovoltaic effect for a silicon film electrodeposited in fluoride-based molten salts. Therefore, chloride molten salt has been considered to be a promising alternative molten salt for silicon electrodeposition in recent years25,27,28,29. In this work, we report the successful demonstration of a direct molten salt electrodeposition process of high purity (99.99989% (close to 6N)) crystalline silicon films in molten calcium chloride from abundant and inexpensive silicon dioxide. 
Calcium oxide is used as an intermediate for the continuous ionization of silicon dioxide to form silicate ions in the molten salt. The doping type, either p-type or n-type, can be controlled by varying the dopants in the molten salt. Solar cell devices based on the as-prepared silicon films exhibit clear photovoltaic effects, with power conversion efficiency around 3.1%. This technique provides a promising approach for low-cost silicon solar cells production and potentially for high quality crystalline silicon film production for other applications. ## Results ### Design of electrodeposition of crystalline silicon films Silicon dioxide is the primary source for silicon production. However, its solubility in chloride-based molten salts is generally low, inadequate for efficient electrodeposition20,21,22,25. Inspired by aluminum electrolysis in molten salt, efforts have been put into finding the right intermediate to facilitate the dissolution of silicon dioxide in chloride-based molten salts. Thanks to the considerable solubility of calcium oxide in molten calcium chloride and its reaction with silicon dioxide (Supplementary Note 1), the dissolution process from silicon dioxide to silicate ions is possible28,29, which offers access to the electrodeposition of high quality silicon films in molten calcium chloride. As shown in Fig. 1, the only input materials for this molten salt electrodeposition of crystalline silicon films are abundant and low-cost silicon dioxide, calcium oxide and calcium chloride. Additional experimental details for the production of high-purity silicon films and reaction mechanisms can be found in Methods, Supplementary Tables 1 and 2, and Supplementary Figs. 1–6. Calcium oxide is added as an intermediate for the continuous ionization of silicon dioxide to form silicate ions (expressed as SiOyn, including SiO32−, SiO44−, etc.), which are then electrodeposited onto substrates to form crystalline silicon films. 
The general reactions for the electrodeposition process can be simply expressed as follows: $$x\mathrm{SiO}_2 + y\mathrm{CaO}\left( {\mathrm{Ca}^{2 + },\mathrm{O}^{2-}} \right) \to \mathrm{Ca}_y\mathrm{Si}_x\mathrm{O}_{(2x + y)}\left( {y\mathrm{Ca}^{2 + },\mathrm{Si}_x\mathrm{O}_{(2x + y)}^{2y-}} \right)$$ (1) $$\mathrm{Si}_x\mathrm{O}_{(2x + y)}^{2y-} + 4xe^- \to x\mathrm{Si} + \left( {2x + y} \right)\mathrm{O}^{2-}$$ (2) As shown in reactions (1) and (2) and Fig. 1b, the electrodeposition route is a cyclic reaction process. By periodically feeding silicon dioxide into the molten salt, crystalline silicon films can be produced continuously, which makes this method suitable for large scale production. It has been proved that various dopants can be added into molten salt, such as boric anhydride or alumina for p-type and antimony oxide or calcium phosphate for n-type, to control the doping type of deposited silicon films. In addition, a proof-of-concept demonstration of p-n junction formation all by molten salt electrodeposition is shown in Fig. 1b, c and our recent work28. Therefore, crystalline silicon films with tunable film thickness and doping type can be readily electrodeposited. Similar to the aluminum electrolysis process, this one-step molten salt electrodeposition process offers the potential to dramatically reduce the cost of silicon products. ### Characterization of crystalline silicon films Silicon dioxide, calcium oxide and calcium chloride mixtures were first homogenized to form a molten electrolyte at 850 °C, and then silicate ions were gradually generated during the subsequent dissolution process. The silicate ions (denoted as SiOyn, e.g. SiO32−, SiO44−, etc.) can be reduced to form silicon film on graphite substrates at approximately −1.5 V, as confirmed by CV results shown in Fig. 2a. By controlling the various dopants, either p-type or n-type silicon films can be produced. The CV results in Supplementary Fig. 
7 reveal that the electrodeposition of n-type silicon film is also a simple reduction process and is similar to that of the p-type silicon films (Fig. 2a). We have demonstrated experimentally that crystalline silicon film can be deposited by using either constant potential/current density electrodeposition method or a pulse electrodeposition method. Pulse electrodeposition can yield dense and homogeneous silicon films formation, mainly due to the homogeneous silicate ion concentration at the cathode surface, which has been confirmed in our recent work28,29. The representative potential/current-time curves of the electrodeposition processes are shown in Supplementary Fig. 8. It is worth noting that calcium co-deposition would occur approximately at −2.7 V and calcium chloride would decompose at −3.2 V29. Therefore, the electrodeposition processes are all controlled at a potential less than −2.6 V. In addition, current density strongly influences the formation of dense films, with 15 to 20 mA cm−2 being the optimal current density. Higher or lower current densities will result in the formation of silicon powders and nanowires, respectively, as illustrated in Supplementary Fig. 9. By controlling current density, compact silicon films including p-n junction silicon films can be readily produced (Supplementary Figs. 10–12). The film thickness can be controlled in a range of about 5 µm to more than 60 µm, on various substrates, including graphite, silicon wafers and others27,28,29,30. The X-ray diffraction patterns, as shown in Fig. 2b, confirm the good crystallinity of as-prepared films. As studied in previous work31, Ti, Cu, Ni, Cr, and Fe are the impurities having the most harmful impact on crystalline silicon solar cells; thus, concentrations of these impurities were analyzed by glow discharge mass spectrometry (GDMS), as shown in Fig. 2c and Supplementary Fig. 13. 
It is clear that all the impurity levels in the electrodeposited silicon are below the tolerable threshold. The overall purity based on full spectrum GDMS analysis is calculated to be 99.99989% (close to 6N, solar grade). To the best of our knowledge, this is the highest purity yet reported for electrodeposited silicon in molten salts. For the doped silicon films, the dopant concentrations of P for n-type, and Al for p-type were characterized to be 3.5 and 10 ppm, respectively. We note that the impurity control is crucial for the electrodeposition of high-purity silicon films (Supplementary Figs. 3–5). A periodic pre-electrolysis process (about 120 h) was used to purify the molten salt to achieve an ultra-purified system, and then metallic impurities contained in the deposited silicon films can be strictly controlled at a low level (e.g., magnesium less than 0.05 ppm, tungsten less than 0.05 ppm, sodium less than 0.05 ppm, calcium is about 5 ppm, etc.). In addition, to decrease the generation of carbon dioxide gas and its influence on the silicon films during electrodeposition, it has been experimentally demonstrated that the atmosphere needs to be strictly controlled by high-purity argon gas (flow rate of 50 to 100 mL min−1), while current density and deposition potential also need to be maintained at 10 to 20 mA cm−2 and less than −2.6 V, respectively. More details about the impurity control can be found in “Methods” and Supplementary Fig. 5. The crystallinity, thickness and morphology of the deposited silicon films generally depend on the electrodeposition time, current density, and silicate ion concentration, as shown in Supplementary Figs. 14–18. In principle, p-type, n-type, and p-n junction silicon films with various thicknesses and surface morphologies can be produced by varying electrodeposition conditions, as preliminarily confirmed in Fig. 2d–f and Supplementary Figs. 15–18. 
Energy dispersive spectroscopy (EDS) analysis further confirms that dense and uniform silicon films are deposited on graphite substrates, and no obvious boundary exists in the p-n junction silicon films, as shown in Supplementary Figs. 17f and 19. However, the growth rate of silicon films during the electrodeposition process is not constant. As shown in Supplementary Fig. 20, the growth rate is high within the first 4 h and then decreases over time. In addition, a small amount of silicon powder commonly forms on the surface of the silicon films, which lowers the film formation efficiency, and thus the current efficiency for the deposition of silicon films is hard to calculate accurately. However, according to our previous work28,29 and based on this experimental observation, the current efficiency for the formation of silicon film is about 60 to 80%, depending on the growth rate of silicon film, which is not constant during the electrodeposition. The loss of current efficiency is mainly attributed to the formation of silicon powder on the film’s surface, which is expected to be reduced by further optimizing the deposition parameters. ### Device characterization and outlook Characterization of a liquid-junction photoelectrochemical (PEC) cell enables rapid assessment of the quality of as prepared silicon films32. The fabrication and test of real solar cell devices will be discussed later. Here, electrodeposited p-type and n-type silicon films were prepared to form silicon/liquid junctions with a redox agent and then characterized photoelectrochemically, as shown in Fig. 3. For comparison, commercial p-type and n-type silicon wafers were also characterized by PEC. The light is chopped on/off during the sweep and the photocurrent can be clearly observed. Figure 3b shows the photocurrent density of the as prepared p-type silicon film and a commercial p-type silicon wafer for reduction of ethyl viologen cations (EV2+). 
The photocurrent density of the electrodeposited p-type silicon film is approximately 50% that of the commercial p-type silicon wafer. Figure 3d shows the photocurrent density of the as prepared n-type silicon film and commercial n-type silicon wafer for the oxidation of ferrocene. The photocurrent density of the electrodeposited n-type silicon film is about 40% that of the commercial n-type silicon wafer. For comparison, the PEC result of the electrodeposited silicon film without any dopant is shown in Supplementary Fig. 21. In addition, p-type and n-type silicon films electrodeposited under different conditions exhibit different PEC performances, as shown in Supplementary Figs. 22–24, suggesting the possibility of optimizing the film quality by varying the electrodeposition conditions. The PEC performance of the deposited silicon film shows almost no degradation after 6 months of exposure to ambient conditions (Supplementary Fig. 25). To compare with current silicon solar cell technology, solar cell devices were fabricated on the electrodeposited p-type silicon films as an example. Device current density versus voltage is shown in Fig. 4a, with 295 mV open circuit potential (Voc), 23.4 mA cm−2 short circuit current density (Jsc) and 3.1% power conversion efficiency (PCE) being achieved. The cost benefit of silicon films by molten salt electrodeposition was further investigated by a detailed cost analysis33,34,35. More details of the cost analysis can be found in Supplementary Note 2 and Supplementary Table 3. Figure 4b is a brief summary of the dependence of total module cost ($ Wp−1) on the module efficiency. It shows that a cell with only 6% and 10% PCE could enable 0.35 and 0.20 $ Wp−1 total module cost, respectively, due to the significant reduction in the cost of the silicon wafer production, as presented in Fig. 4c. It is surprising to see that the fraction of silicon wafer cost can be reduced to 5% assuming a 10% PCE. 
The cell efficiencies have been enhanced steadily along with the improvement of film quality, including making uniform pin-hole free films, increasing the film thickness and reducing the impurity content, as shown in Fig. 4d and Supplementary Fig. 26. ## Discussion In summary, we demonstrate a simple molten salt electrodeposition process for preparing crystalline silicon films for low-cost solar cells. p-type, n-type and p-n junction silicon films with tunable thicknesses can be directly produced from abundant and inexpensive silicon dioxide all in molten calcium chloride. The electrodeposited crystalline silicon films exhibit high purity (99.99989% (close to 6N)) and clear photovoltaic effects with PCE as high as 3.1%. There is a large margin for improving the PCE with optimization of the electrodeposition process. Cost analysis further confirms that a module cost lower than 0.20 $ Wp−1 can be achieved with PCE higher than 10%, making this technology promising for low-cost silicon solar cells. ## Methods ### Materials Silicon dioxide (SiO2, Sigma-Aldrich, nanopowder, 10 to 20 nm, with purity of 99.5%, trace metals basis), calcium oxide (CaO, Sigma-Aldrich, with purity of 99.9%, trace metals basis) and calcium chloride (CaCl2, Sigma-Aldrich, ACS reagent, with purity of 99%, St. Louis, MO. Ba less than 0.005%, Fe less than 0.001%, K less than 0.01%, Mg less than 0.005%, NH4+ less than 0.005%, Na less than 0.02%, Sr less than 0.01%, heavy metals less than 5 ppm) were used to form a molten electrolyte for electrodeposition. Antimony oxide (Sb2O3, Sigma-Aldrich, with purity of 99.999%) and calcium phosphate (Ca3(PO4)2, Sigma-Aldrich, 4 μm, total heavy metals: less than 20 ppm) powders were used to provide antimony and phosphorus as dopants for n-type silicon film, respectively. 
Alumina (Al2O3, Sigma-Aldrich, with purity higher than 99.9%) and boric anhydride (B2O3, Sigma-Aldrich, with purity of 99.999%) were used to provide aluminum and boron as dopants for the p-type silicon film, respectively. A high-purity quartz crucible (Technical Glass Products, O.D. 40 × I.D. 37 × height 180 mm, Painesville, OH; Al: 0.5 ppm, B less than 0.2 ppm, Ca: 0.4 ppm, Cu less than 0.05 ppm, Cr less than 0.05 ppm, Fe: 0.2 ppm, K: 0.6 ppm, Li: 0.6 ppm, Mg: 0.1 ppm, Mn less than 0.05 ppm, Na: 0.7 ppm, Ni less than 0.1 ppm, P less than 0.2 ppm, Sb less than 0.003 ppm, Ti: 1.1 ppm, Zr: 0.8 ppm) was used as the electrolytic cell. A POCO graphite plate (AXF-5Q, Entegris POCO, Decatur, TX, US; 75 mm in length, 6–20 mm in width, and 1 mm in thickness) was used as the substrate for the electrodeposition of silicon. A graphite rod (Alfa Aesar, with purity of 99.995%, diameter 6 mm, Haverhill, MA, US) or the POCO graphite plate was used as the anode. Tungsten wires (Alfa Aesar, with purity of 99.9%, diameter 1 mm) were used as the electrode leads and protected by quartz tubes (Technical Glass Products, 3 mm, 6 mm and 10 mm in diameter).

### Silicon films characterization

The deposited silicon films were characterized by using scanning electron microscopy (SEM, Quanta 650 FEG, FEI Inc., Hillsboro, OR) and energy dispersive spectroscopy (EDS, XFlash Detector 5010, Bruker, Fitchburg, WI). The impurity concentration of the silicon film was analyzed by glow discharge mass spectrometry (GDMS, VG 9000, Thermo Fisher Scientific Inc., Waltham, MA, USA). X-ray diffraction spectroscopy (XRD, Philips X-ray diffractometer equipped with Cu Kα radiation) was also used to analyze the produced silicon films. The electrodeposited silicon films were tested as photoelectrodes for the PEC measurements.
For the p-type silicon film, the PEC test was carried out in an argon-purged acetonitrile (CH3CN, 99%, Extra-dry, Acros, Fair Lawn, NJ) solution containing 0.1 M tetrabutylammonium hexafluorophosphate (TBAPF6, with purity higher than 99.9%, Fluka, Allentown, PA) as the supporting electrolyte and 0.05 M ethyl viologen diperchlorate (EV(ClO4)2, Sigma-Aldrich, with purity of 98%) as the redox reagent. For the n-type silicon film, the PEC test was performed in an argon-purged CH3CN solution containing 0.1 M TBAPF6 as the supporting electrolyte and 0.05 M ferrocene (Fe(C5H5)2, Sigma-Aldrich, St. Louis, MO) as the redox reagent. The PEC properties under UV-visible light illumination by a xenon lamp at 100 mW cm−2 were tested and compared with commercial p-type and n-type silicon wafers (University Wafers, 5 to 10 ohm-cm, (100), boron-doped (p-silicon wafer), phosphorus-doped (n-silicon wafer), Boston, MA, US).

### Device fabrication and characterization

The electrodeposited silicon films were first mechanically polished and rinsed. Then a shallow p-n junction was made by spin-on dopant, including spin-coating and rapid thermal annealing for dopant activation (950 °C, 60 s). The top contact patterns were then made by lithography and metallization. The I-V characterization of the solar cell devices was performed using a B1500A Semiconductor Device Analyzer (Agilent Technologies) and a Summit 11000 AP probe station (Cascade Microtech). A solar simulator (Newport) with an AM 1.5G filter, calibrated to 100 mW cm−2, was used as the light source.

### Cost analysis

The cost analysis model is based on the sum of the costs of each fabrication step. The cost of the electrodeposited silicon films is analyzed by an ownership model that includes costs associated with materials, labor, and the depreciation and maintenance of equipment and facilities. Detailed data are collected from industry members, vendors and official reports.
For cell fabrication processes and module assembly, costs are based on state-of-the-art technologies, collected from various resources, including publicly available databases and official reports. More details can be found in Supplementary Note 2.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

1. Green, M. A. Commercial progress and challenges for photovoltaics. Nat. Energy 1, 15015 (2016).
2. Battaglia, C., Cuevas, A. & De Wolf, S. High-efficiency crystalline silicon solar cells: status and perspectives. Energy Environ. Sci. 9, 1552–1576 (2016).
3. Yoon, J. et al. Ultrathin silicon solar microcells for semitransparent, mechanically flexible and microconcentrator module designs. Nat. Mater. 7, 907–915 (2008).
4. Oh, J., Yuan, H.-C. & Branz, H. M. An 18.2%-efficient black-silicon solar cell achieved through control of carrier recombination in nanostructures. Nat. Nanotechnol. 7, 743–748 (2012).
5. Lewis, N. S. Research opportunities to advance solar energy utilization. Science 351, aad1920 (2016).
6. Polman, A., Knight, M., Garnett, E. C., Ehrler, B. & Sinke, W. C. Photovoltaic materials: present efficiencies and future challenges. Science 352, aad4424 (2016).
7. Ribeyron, P.-J. Crystalline silicon solar cells: better than ever. Nat. Energy 2, 17067 (2017).
8. Bullock, J. et al. Efficient silicon solar cells with dopant-free asymmetric heterocontacts. Nat. Energy 1, 15031 (2016).
9. Green, M. A. Silicon photovoltaic modules: a brief history of the first 50 years. Prog. Photovolt.: Res. Appl. 13, 447–455 (2005).
10. Swanson, R. M. A vision for crystalline silicon photovoltaics. Prog. Photovolt.: Res. Appl. 14, 443–453 (2006).
11. Banerjee, A. et al. High-efficiency, multijunction nc-Si:H-based solar cells at high deposition rate. IEEE J. Photovolt. 2, 99–103 (2012).
12. Banerjee, A. et al. 12.0% efficiency on large-area, encapsulated multijunction nc-Si:H-based solar cells. IEEE J. Photovolt. 2, 104–108 (2012).
13. SEMI PV Group. International Technology Roadmap for Photovoltaics (2017).
14. Rao, G. M., Elwell, D. & Feigelson, R. S. Electrowinning of silicon from K2SiF6-molten fluoride systems. J. Electrochem. Soc. 127, 1940–1944 (1980).
15. Elwell, D. & Feigelson, R. S. Electrodeposition of solar silicon. Sol. Energy Mater. 6, 123–145 (1982).
16. Elwell, D. & Rao, G. M. Electrolytic production of silicon. J. Appl. Electrochem. 18, 15–22 (1988).
17. Haarberg, G. M., Famiyeh, L., Martinez, A. M. & Osen, K. S. Electrodeposition of silicon from fluoride melts. Electrochim. Acta 100, 226–228 (2013).
18. Nohira, T., Yasuda, K. & Ito, Y. Pinpoint and bulk electrochemical reduction of insulating silicon dioxide to silicon. Nat. Mater. 2, 397–401 (2003).
19. Yasuda, K., Nohira, T., Hagiwara, R. & Ogata, Y. H. Direct electrolytic reduction of solid SiO2 in molten CaCl2 for the production of solar grade silicon. Electrochim. Acta 53, 106–110 (2007).
20. Yasuda, K., Nohira, T., Hagiwara, R. & Ogata, Y. H. Diagrammatic representation of direct electrolytic reduction of SiO2 in molten CaCl2. J. Electrochem. Soc. 154, E95–E101 (2007).
21. Yasuda, K., Maeda, K., Nohira, T., Hagiwara, R. & Homma, T. Silicon electrodeposition in water-soluble KF-KCl molten salt: optimization of electrolysis conditions at 923 K. J. Electrochem. Soc. 163, D95–D99 (2016).
22. Jin, X., Gao, P., Wang, D., Hu, X. & Chen, G. Z. Electrochemical preparation of silicon and its alloys from solid oxides in molten calcium chloride. Angew. Chem. Int. Ed. 43, 733–736 (2004).
23. Abdelkader, A., Kilby, K. T., Cox, A. & Fray, D. J. DC voltammetry of electro-deoxidation of solid oxides. Chem. Rev. 113, 2863–2886 (2013).
24. Xiao, W. & Wang, D. The electrochemical reduction processes of solid compounds in high temperature molten salts. Chem. Soc. Rev. 43, 3215–3228 (2014).
25. Xiao, W. et al. Verification and implications of the dissolution-electrodeposition process during the electro-reduction of solid silica in molten CaCl2. RSC Adv. 2, 7588–7593 (2012).
26. Gu, J., Fahrenkrug, E. & Maldonado, S. Direct electrodeposition of crystalline silicon at low temperatures. J. Am. Chem. Soc. 135, 1684–1687 (2013).
27. Cho, S. K., Fan, F. R. F. & Bard, A. J. Electrodeposition of crystalline and photoactive silicon directly from silicon dioxide nanoparticles in molten CaCl2. Angew. Chem. Int. Ed. 51, 12740–12744 (2012).
28. Zou, X. et al. Electrochemical formation of a p-n junction on thin film silicon deposited in molten salt. J. Am. Chem. Soc. 139, 16060–16063 (2017).
29. Yang, X. et al. Toward cost-effective manufacturing of silicon solar cells: electrodeposition of high-quality Si films in a CaCl2-based molten salt. Angew. Chem. Int. Ed. 56, 15078–15082 (2017).
30. Peng, J. et al. Liquid-tin-assisted molten salt electrodeposition of photoresponsive n-type silicon films. Adv. Funct. Mater. 28, 1703551 (2018).
31. Coletti, G. Sensitivity of state-of-the-art and high efficiency crystalline silicon solar cells to metal impurities. Prog. Photovolt.: Res. Appl. 21, 1163–1170 (2013).
32. Hsu, H.-Y. et al. A liquid junction photoelectrochemical solar cell based on p-type MeNH3PbI3 perovskite with 1.05 V open-circuit photovoltage. J. Am. Chem. Soc. 137, 14758–14764 (2015).
33. Song, Z. et al. A technoeconomic analysis of perovskite solar module manufacturing with low-cost materials and techniques. Energy Environ. Sci. 10, 1297–1305 (2017).
34. Powell, D. M. et al. The capital intensity of photovoltaics manufacturing: barrier to scale and opportunity for innovation. Energy Environ. Sci. 8, 3395–3408 (2015).
35. Sofia, S. E. et al. Economic viability of thin-film tandem solar modules in the United States. Nat. Energy 3, 387–394 (2018).
36. Xie, H. et al. Anodic gases generated on a carbon electrode in oxide-ion containing molten CaCl2 for the electro-deoxidation process. J. Electrochem. Soc. 165, E759–E762 (2018).

## Acknowledgements

This work was financially supported by the Global Climate and Energy Project (GCEP, Agreement No. 60853646-118146), the Welch Foundation (F-0021), and the National Science Foundation (CBET 1702944). This work was performed in part at The University of Texas Microelectronics Research Center, a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation (grant ECCS-1542159). X.Z. would like to thank the Shanghai Rising-Star Program (19QA1403600), the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, and the CAS Interdisciplinary Innovation Team support. We sincerely appreciate Prof. Xionggang Lu (Shanghai University), Prof. Qian Xu (Shanghai University) and Dr. Xiaole Chen (UT-Austin) for their kind help, and Prof. Cynthia Zoski (UT-Austin), Dr. Xiao Yang (UT-Austin), Dr. Na Gao (Xiamen University), Dr. Ji Zhao (MIT), Dr. Junjun Peng (MIT) and Dr. Huayi Yin (MIT) for valuable discussions.

## Author information

### Contributions

X.Z., L.J. and A.J.B. conceived the concept. X.Z. conducted the silicon film electrodeposition, characterization and photoelectrochemical measurements. L.J. fabricated and tested the solar cell devices and conducted the cost analysis. J.G. contributed to the XRD characterization. D.R.S., E.T.Y. and A.J.B. discussed all the data and assisted with the strategy design. X.Z. and L.J. prepared the manuscript. D.R.S., E.T.Y. and A.J.B. revised the manuscript. A.J.B. supervised the project.

### Corresponding author

Correspondence to Li Ji.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Dihua Wang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zou, X., Ji, L., Ge, J. et al. Electrodeposition of crystalline silicon films from silicon dioxide for low-cost photovoltaic applications. Nat Commun 10, 5772 (2019). https://doi.org/10.1038/s41467-019-13065-w
http://math.stackexchange.com/questions/143977/trouble-with-cauchy-riemann-not-sure-which-law-to-use
# Trouble with Cauchy Riemann… not sure which law to use?

I'm unsure of which Cauchy-Riemann equation to use when I'm given either a real or imaginary function. For instance, I might be given a real function and asked to work out the imaginary part. If I'm given the real part $-3xy^2-2y^2+x^3+2x$ and asked to work out the imaginary part, then I'd need to use the $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$ rule rather than the $-\frac{\partial u}{\partial y}=\frac{\partial v}{\partial x}$ rule before finding the imaginary part. Why is this?

- I assume you are talking about finding a harmonic conjugate (so you are given the real part of a holomorphic function and you're supposed to find the imaginary part)? It's not entirely clear. In that case, I'm pretty sure you would need both Cauchy-Riemann equations. –  Daan Michiels May 11 '12 at 18:39
- Hi, yes. Although from my solution here, I've only used one: $u=-3xy^2-2y^2+x^3+2x^2$, $\partial u/\partial x = -3y^2 + 3x^2 + 4x = \partial v/\partial y$ by C-R, hence $v = -y^3 + 3x^2y + 4xy$. –  Flo May 11 '12 at 18:46
- Oh! I think I've got you now... thanks for the help! –  Flo May 11 '12 at 18:51
- Once you know $\partial v/\partial y$, you can find $v$ by integrating with respect to $y$ (I assume this is what you did). However, this gives a constant of integration that still depends on $x$. To find the integration constant, you need the other equation. –  Daan Michiels May 11 '12 at 18:54
- Are you sure this is the correct expression for $u$? It is not harmonic. –  Daan Michiels May 11 '12 at 19:43

You need both. Let us take $$u(x,y)=-3xy^2+x^3+2x+y.$$ Then we get $$\frac{\partial u}{\partial x} = -3y^2+3x^2+2 = \frac{\partial v}{\partial y}.$$ Integrating with respect to $y$ leaves us with $$v(x,y) = -y^3+3x^2y+2y + C(x),$$ noting that the integration constant could be different for different $x$. To find $C(x)$, you would use the other Cauchy-Riemann equation: $$\frac{\partial v}{\partial x} = 6xy + C'(x)$$ and $$-\frac{\partial u}{\partial y} = 6xy-1$$ and these should be equal, so $C'(x)=-1$.
This implies $$C(x) = -x+D$$ for some constant (really constant, this time) $D$. The final result is then $$v(x,y) = -y^3+3x^2y+2y-x+D .$$ Note that a harmonic conjugate is only defined up to a constant (in this case it's called $D$). You may wonder why I chose this $u$, and not the one you mentioned. There is a good reason for this: the $u$ you mentioned is not a harmonic function (did you type it correctly?), so it cannot be the real part of a holomorphic function. I could have just left out the term $-2y^2$, but then we would have gotten $C'(x)=0$, which by coincidence defeats the point I was trying to make. –  Daan Michiels May 11 '12 at 19:13
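As a quick numerical check of this answer (my addition, not part of the original thread, taking $D = 0$), both Cauchy-Riemann equations can be verified with finite differences:

```python
def u(x, y):
    return -3 * x * y**2 + x**3 + 2 * x + y

def v(x, y):
    return -y**3 + 3 * x**2 * y + 2 * y - x

def partial(f, x, y, wrt, h=1e-6):
    # Central-difference approximation of a partial derivative
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

for (x, y) in [(0.5, -1.2), (2.0, 3.0), (-1.0, 0.7)]:
    assert abs(partial(u, x, y, 'x') - partial(v, x, y, 'y')) < 1e-5   # u_x = v_y
    assert abs(partial(u, x, y, 'y') + partial(v, x, y, 'x')) < 1e-5   # u_y = -v_x
```

Both equations hold at every sample point, which is exactly why the second equation was needed to pin down $C(x)$.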
https://codereview.stackexchange.com/questions/157559/ackermann-function-in-python-2-7
# Ackermann function in Python 2.7

I'm relatively new to Python. I just want some constructive feedback on how to improve my code efficiency, style, error handling, etc. In this case, I've programmed the Ackermann function, but it won't evaluate well since it's built to handle the recursions relatively poorly - I'll have to fix that in the future. Any tips? Also, if there are any areas of exploitation in the errors, please let me know too. For reference, I'm programming this on Ubuntu 16.04 LTS and the language is Python 2.7.12.

```python
# Make sure the collected values are of the appropriate forms
def collection ():
    while True:
        # Collect raw_input for m_val and n_val
        m_val = raw_input ("Please enter a nonnegative integer (m) : ")
        n_val = raw_input ("Please enter another nonnegative integer (n): ")
        # Make sure input is acceptable
        try:
            m_val = float(m_val)
            n_val = float(n_val)
            # m_val needs to be a nonnegative integer
            # Check equivalence to 0 first
            if m_val == 0:
                pass
            # Then check if the value is nonintegral
            elif int(m_val) - m_val:
                print("\nYou entered a float for m!\n")
                continue
            # Then make sure the value is nonnegative
            elif m_val < 0:
                print("\nYou entered a negative value for m!\n")
                continue
            # n_val needs to be a nonnegative integer
            # Check equivalence to 0 first
            if n_val == 0:
                pass
            # Then check if the value is nonintegral
            elif int (n_val) - n_val:
                print("\nYou entered a float for n!\n")
                continue
            # Then make sure the value is nonnegative
            elif n_val < 0:
                print("\nYou entered a negative value for n!\n")
                continue
            # If the code makes it this far, it should be ready for return
            return int(m_val), int(n_val)
        except ValueError:
            print("\nPlease enter numerical values for m and n.\n")

def ackermann(m,n):
    if m == 0:
        return n + 1
    elif m > 0 and n == 0:
        return ackermann(m-1,1)
    elif m > 0 and n > 0:
        return ackermann (m-1, ackermann(m,n-1))
    else:
        print "The Value Doesn't Go into the Domain!"

(m,n) = collection()
print ( ackermann(m,n) )
```

```python
def ackermann(m,n):
    ...
    else:
        print "The Value Doesn't Go into the Domain!"
```

Printing the error message to standard output looks like a bad design choice (if your code is run automatically and no one is checking the output, it would just silently return None. I don't think you want that). It would be more reasonable to throw an exception in this case (I'd throw an instance of ValueError) because the situation is exceptional (the input is invalid and there's no meaningful way for this function to handle it). Moreover, one function should be responsible for one thing. Your code computes the value of the Ackermann function and does the logging at the same time. That's another reason to handle invalid inputs using exceptions.

You can also simplify the code in the collection function by parsing the input as an integer. There's no point in parsing it as a float and then checking if it's an integer. You could rely on the standard int function to do this job for you. This way, it would be enough to check that both numbers are non-negative.

Now let's talk about variable naming:

1. collection is a strange name for a function. Its name doesn't give any clue about what it actually does. Even something simple and generic like read_input would be better.
2. It's a bad practice to use one variable for two different things. n_val and m_val stand for the user's input as a string, and then they turn into integers. I'd rather create two separate variables for each of them (one for the input and the other for its integral value).

Your use of whitespace is inconsistent. For instance, there's a space after the name of the collection function, but there's none after the name of the ackermann function. Whatever style you choose, you should always be consistent with it (according to the PEP standard, there should be no space after the function name, but there should be one after the comma. I would stick to it unless I have compelling reasons not to).
It's also a good practice to write doc comments for all your functions and classes. The comments inside the code should be more about telling why the code does what it does or why a specific design decision was made. Redundant comments that just tell what the code does create noise and actually make it less readable. For instance, this comment is self-evident:

```python
# Check equivalence to 0 first
if m_val == 0:
    pass
```

You can just remove it (and all other similar comments). Moreover, if you need to make a comment about what a piece of code does (like "# Then check if the value is nonintegral"), it's a hint that it should probably be implemented as another function with a descriptive name. You should strive for self-documenting code.
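Putting the review's suggestions together — parse with int, validate once, and raise ValueError from the math itself — a refactored sketch might look like this (the name read_input comes from the review; the exact prompt wording is illustrative):

```python
def ackermann(m, n):
    """Compute the Ackermann function for nonnegative integers m and n."""
    if m < 0 or n < 0:
        # Invalid input is exceptional: raise instead of printing
        raise ValueError("m and n must be nonnegative integers")
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))


def read_input():
    """Prompt until the user supplies two nonnegative integers."""
    while True:
        # raw_input is Python 2; under Python 3 this would be input()
        m_text = raw_input("Please enter a nonnegative integer (m): ")
        n_text = raw_input("Please enter another nonnegative integer (n): ")
        try:
            # int() rejects floats and non-numeric strings in one step
            m, n = int(m_text), int(n_text)
        except ValueError:
            print("\nPlease enter integer values for m and n.\n")
            continue
        if m < 0 or n < 0:
            print("\nBoth m and n must be nonnegative.\n")
            continue
        return m, n
```

Note that ackermann no longer checks m > 0 in every branch: once negative inputs are rejected up front, the remaining cases are exhaustive.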
https://meangreenmath.com/category/theorems/
# A nice article on recent progress on solving the twin prime conjecture

The twin prime conjecture asserts that there are infinitely many primes that have a difference of 2. For example: 3 and 5 are twin primes; 5 and 7 are twin primes; 11 and 13 are twin primes; 17 and 19 are twin primes; 29 and 31 are twin primes; etc. While most mathematicians believe the twin prime conjecture is correct, an explicit proof has not been found. Indeed, this has been one of the most popular unsolved problems in mathematics — not necessarily because it's important, but for the curiosity that a conjecture so simply stated has eluded conquest by the world's best mathematicians. Still, research continues, and some major progress has been made in the past few years. (I like sharing this story with my students to convince them that not everything that can be known about mathematics has been figured out yet — a misconception encouraged by the structure of the secondary curriculum — and that research continues to this day.) Specifically, it was recently shown that, for some integer $N$ that is less than 70 million, there are infinitely many pairs of primes that differ by $N$.

http://video.newyorker.com/watch/annals-of-ideas-yitang-zhang-s-discovery-2015-01-28

http://www.newyorker.com/magazine/2015/02/02/pursuit-beauty

For more on recent progress:

# My Favorite One-Liners: Part 100

In this series, I'm compiling some of the quips and one-liners that I'll use with my students to hopefully make my lessons more memorable for them. Today's quip is one that I'll use surprisingly often: If you ever meet a mathematician at a bar, ask him or her, "What is your favorite application of the Cauchy-Schwartz inequality?" The point is that the Cauchy-Schwartz inequality arises surprisingly often in the undergraduate mathematics curriculum, and so I make a point to highlight it when I use it. For example, off the top of my head: 1.
In trigonometry, the Cauchy-Schwartz inequality states that $|{\bf u} \cdot {\bf v}| \le \; \parallel \!\! {\bf u} \!\! \parallel \cdot \parallel \!\! {\bf v} \!\! \parallel$ for all vectors ${\bf u}$ and ${\bf v}$. Consequently, $-1 \le \displaystyle \frac{ {\bf u} \cdot {\bf v} } {\parallel \!\! {\bf u} \!\! \parallel \cdot \parallel \!\! {\bf v} \!\! \parallel} \le 1$, which means that the angle $\theta = \cos^{-1} \left( \displaystyle \frac{ {\bf u} \cdot {\bf v} } {\parallel \!\! {\bf u} \!\! \parallel \cdot \parallel \!\! {\bf v} \!\! \parallel} \right)$ is defined. This is the measure of the angle between the two vectors ${\bf u}$ and ${\bf v}$.
2. In probability and statistics, the standard deviation of a random variable $X$ is defined as $\hbox{SD}(X) = \sqrt{E(X^2) - [E(X)]^2}$. The Cauchy-Schwartz inequality assures that the quantity under the square root is nonnegative, so that the standard deviation is actually defined. Also, the Cauchy-Schwartz inequality can be used to show that $\hbox{SD}(X) = 0$ implies that $X$ is a constant almost surely.
3. Also in probability and statistics, the correlation between two random variables $X$ and $Y$ must satisfy $-1 \le \hbox{Corr}(X,Y) \le 1$. Furthermore, if $\hbox{Corr}(X,Y)=1$, then $Y= aX +b$ for some constants $a$ and $b$, where $a > 0$. On the other hand, if $\hbox{Corr}(X,Y)=-1$, then $Y= aX +b$ for some constants $a$ and $b$, where $a < 0$.

Since I'm a mathematician, I guess my favorite application of the Cauchy-Schwartz inequality appears in my first professional article, where the inequality was used to confirm some new bounds that I derived with my graduate adviser.

# My Favorite One-Liners: Part 99

In this series, I'm compiling some of the quips and one-liners that I'll use with my students to hopefully make my lessons more memorable for them.
Today’s quip is a light-hearted one-liner that I’ll use to lighten the mood when in the middle of a complex calculation, like the following limit problem from calculus: Let $f(x) = 11-4x$. Find $\delta$ so that $|f(x) - 3| < \epsilon$ whenever $|x-2| < \delta$. The solution of this problem requires isolating $x$ in the above inequality: $|(11-4x) - 3| < \epsilon$ $|8-4x| < \epsilon$ $-\epsilon < 8 - 4x < \epsilon$ $-8-\epsilon < -4x < -8 + \epsilon$ At this point, the next step is dividing by $-4$. So, I’ll ask my class, When we divide by $-4$, what happens to the crocodiles? This usually gets the desired laugh out of the middle-school rule about how the insatiable “crocodiles” of an inequality always point to the larger quantity, leading to the next step: $2 + \displaystyle \frac{\epsilon}{4} > x > 2 - \displaystyle \frac{\epsilon}{4}$, so that $\delta = \min \left( \left[ 2 + \displaystyle \frac{\epsilon}{4} \right] - 2, 2 - \left[2 - \displaystyle \frac{\epsilon}{4} \right] \right) = \displaystyle \frac{\epsilon}{4}$. Formally completing the proof requires starting with $|x-2| < \displaystyle \frac{\epsilon}{4}$ and ending with $|f(x) - 3| < \epsilon$. # My Favorite One-Liners: Part 88 In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. In the first few weeks of my calculus class, after introducing the definition of a derivative, $\displaystyle \frac{dy}{dx} = y' = f'(x) = \lim_{h \to 0} \displaystyle \frac{f(x+h) - f(x)}{h}$, I’ll use the following steps to guide my students to find the derivatives of polynomials. 1. If $f(x) = c$, a constant, then $\displaystyle \frac{d}{dx} (c) = 0$. 2. If $f(x)$ and $g(x)$ are both differentiable, then $(f+g)'(x) = f'(x) + g'(x)$. 3.  If $f(x)$ is differentiable and $c$ is a constant, then $(cf)'(x) = c f'(x)$. 4. If $f(x) = x^n$, where $n$ is a nonnegative integer, then $f'(x) = n x^{n-1}$. 5. 
If $f(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$ is a polynomial, then $f'(x) = n a_n x^{n-1} + (n-1) a_{n-1} x^{n-2} + \dots + a_1$.

After doing a few examples to help these concepts sink in, I'll show the following two examples with about 3-4 minutes left in class.

Example 1. Let $A(r) = \pi r^2$. Notice I've changed the variable from $x$ to $r$, but that's OK. Does this remind you of anything? (Students answer: the area of a circle.) What's the derivative? Remember, $\pi$ is just a constant. So $A'(r) = \pi \cdot 2r = 2\pi r$. Does this remind you of anything? (Students answer: Whoa… the circumference of a circle.)

Generally, students start waking up even though it's near the end of class. I continue:

Example 2. Now let's try $V(r) = \displaystyle \frac{4}{3} \pi r^3$. Does this remind you of anything? (Students answer: the volume of a sphere.) What's the derivative? Again, $\displaystyle \frac{4}{3} \pi$ is just a constant. So $V'(r) = \displaystyle \frac{4}{3} \pi \cdot 3r^2 = 4\pi r^2$. Does this remind you of anything? (Students answer: Whoa… the surface area of a sphere.)

By now, I've really got my students' attention with this unexpected connection between these formulas from high school geometry. If I've timed things right, I'll say the following with about 30-60 seconds left in class:

Hmmm. That's interesting. The derivative of the area of a circle is the circumference of the circle, and the derivative of the volume of a sphere is the surface area of the sphere. I wonder why this works. Any ideas? (Students: stunned silence.)

This is what's known as a cliff-hanger, and I'll give you the answer at the start of class tomorrow. (Students groan, as they really want to know the answer immediately.) Class is dismissed.

If you'd like to see the answer, see my previous post on this topic.
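The coincidence is easy to check numerically; here is a quick sketch (my addition, not from the original post) using a central-difference approximation of the derivative:

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def area(r):
    return math.pi * r ** 2              # area of a circle

def volume(r):
    return 4.0 / 3.0 * math.pi * r ** 3  # volume of a sphere

r = 2.5
# dA/dr should match the circumference 2*pi*r
assert abs(numerical_derivative(area, r) - 2 * math.pi * r) < 1e-5
# dV/dr should match the surface area 4*pi*r**2
assert abs(numerical_derivative(volume, r) - 4 * math.pi * r ** 2) < 1e-4
```

Both assertions pass for any positive $r$ you try, which is exactly the "whoa" moment in class.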
# My Favorite One-Liners: Part 50

In this series, I'm compiling some of the quips and one-liners that I'll use with my students to hopefully make my lessons more memorable for them. Here's today's one-liner: "To prove that two things are equal, show that the difference is zero."

This principle is surprisingly handy in the secondary mathematics curriculum. For example, it is the basis for the proof of the Mean Value Theorem, one of the most important theorems in calculus that serves as the basis for curve sketching and the uniqueness of antiderivatives (up to a constant). And I have a great story that goes along with this principle, from 30 years ago.

I forget the exact question out of Apostol's calculus, but there was some equation that I had to prove on my weekly homework assignment that, for the life of me, I just couldn't get. And for no good reason, I had a flash of insight: subtract the left- and right-hand sides. While it was very difficult to turn the left side into the right side, it turned out that, for this particular problem, it was very easy to show that the difference was zero. (Again, I wish I could remember exactly which question this was so that I could show this technique and this particular example to my own students.) So I finished my homework, and I went outside to a local basketball court and worked on my jump shot.

Later that week, I went to class, and there was a great buzz in the air. It took ten seconds to realize that everyone was up in arms about how to do this particular problem. Despite the intervening 30 years, I remember the scene as clear as a bell. I can still hear one of my classmates ask me, "Quintanilla, did you get that one?" I said with great pride, "Yeah, I got it." And I showed them my work. And, neither before then nor since, have I heard cussing of the intensity that followed.
Truth be told, probably the only reason that I remember this story from my adolescence is that I usually was the one who had to ask for help on the hardest homework problems in that Honors Calculus class. This may have been the one time in that entire two-year calculus sequence that I actually figured out a homework problem that had stumped everybody else.

# My Favorite One-Liners: Part 46

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s one-liner is something I’ll use after completing some monumental calculation. For example, if $z, w \in \mathbb{C}$, the proof of the triangle inequality is no joke, as it requires the following as lemmas:

• $\overline{z + w} = \overline{z} + \overline{w}$
• $\overline{zw} = \overline{z} \cdot \overline{w}$
• $z + \overline{z} = 2 \hbox{Re}(z)$
• $|\hbox{Re}(z)| \le |z|$
• $|z|^2 = z \cdot \overline{z}$
• $\overline{~\overline{z}~} = z$
• $|\overline{z}| = |z|$
• $|z \cdot w| = |z| \cdot |w|$

With all that as prelude, we have

$|z+w|^2 = (z + w) \cdot \overline{z+w}$
$= (z+w) (\overline{z} + \overline{w})$
$= z \cdot \overline{z} + z \cdot \overline{w} + \overline{z} \cdot w + w \cdot \overline{w}$
$= |z|^2 + z \cdot \overline{w} + \overline{z} \cdot w + |w|^2$
$= |z|^2 + z \cdot \overline{w} + \overline{z} \cdot \overline{~\overline{w}~} + |w|^2$
$= |z|^2 + z \cdot \overline{w} + \overline{z \cdot \overline{w}} + |w|^2$
$= |z|^2 + 2 \hbox{Re}(z \cdot \overline{w}) + |w|^2$
$\le |z|^2 + 2 |z \cdot \overline{w}| + |w|^2$
$= |z|^2 + 2 |z| \cdot |\overline{w}| + |w|^2$
$= |z|^2 + 2 |z| \cdot |w| + |w|^2$
$= (|z| + |w|)^2$

In other words, $|z+w|^2 \le (|z| + |w|)^2$. Since $|z+w|$ and $|z| + |w|$ are both nonnegative, we can conclude that $|z+w| \le |z| + |w|$. QED

In my experience, that’s a lot for students to absorb all at once when seeing it for the first time.
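For a skeptical student, the inequality itself is easy to spot-check numerically before wading through the proof. A quick sketch of my own, with complex numbers represented as plain {re, im} pairs:

```javascript
// Complex numbers as {re, im} objects.
const abs = z => Math.hypot(z.re, z.im);
const add = (z, w) => ({ re: z.re + w.re, im: z.im + w.im });

// Spot-check |z + w| <= |z| + |w| on many random pairs.
function triangleHolds(trials = 1000) {
  for (let i = 0; i < trials; i++) {
    const z = { re: 2 * Math.random() - 1, im: 2 * Math.random() - 1 };
    const w = { re: 2 * Math.random() - 1, im: 2 * Math.random() - 1 };
    // Small slack for floating-point rounding.
    if (abs(add(z, w)) > abs(z) + abs(w) + 1e-12) return false;
  }
  return true;
}
console.log(triangleHolds()); // true
```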
So I try to celebrate this accomplishment: Anybody ever watch “Home Improvement”? This is a Binford 6100 “more power” mathematical proof. Grunt with me: RUH-RUH-RUH-RUH!!!

# My Favorite One-Liners: Part 43

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Years ago, my first class of students decided to call me “Dr. Q” instead of “Dr. Quintanilla,” and the name has stuck ever since. And I’ll occasionally use this to my advantage when choosing names of variables. For example, here’s a typical proof by induction involving divisibility.

Theorem: If $n \ge 1$ is a positive integer, then $5^n - 1$ is a multiple of 4.

Proof. By induction on $n$.

$n = 1$: $5^1 - 1 = 4$, which is clearly a multiple of 4.

$n$: Assume that $5^n - 1$ is a multiple of 4.

At this point in the calculation, I ask how I can write this statement as an equation. Eventually, somebody will volunteer that if $5^n-1$ is a multiple of 4, then $5^n-1$ is equal to 4 times something. At which point, I’ll volunteer:

Yes, so let’s name that something with a variable. Naturally, we should choose something important, something regal, something majestic… so let’s choose the letter $q$. (Groans and laughter.) It’s good to be the king.

So the proof continues:

$n$: Assume that $5^n - 1 = 4q$, where $q$ is an integer.

$n+1$: We wish to show that $5^{n+1} - 1$ is also a multiple of 4. At this point, I’ll ask my class how we should write this. Naturally, I give them no choice in the matter: We wish to show that $5^{n+1} - 1 = 4Q$, where $Q$ is some (possibly different) integer.

Then we continue the proof:

$5^{n+1} - 1 = 5^n 5^1 - 1$
$= 5 \times 5^n - 1$
$= 5 \times (4q + 1) - 1$ by the induction hypothesis
$= 20q + 5 - 1$
$= 20q + 4$
$= 4(5q + 1)$.

So if we let $Q = 5q +1$, then $5^{n+1} - 1 = 4Q$, where $Q$ is an integer because $q$ is also an integer.
QED

On the flip side of braggadocio, the formula for the binomial distribution is $P(X = k) = \displaystyle {n \choose k} p^k q^{n-k}$, where $X$ is the number of successes in $n$ independent and identically distributed trials, where $p$ represents the probability of success on any one trial, and (to my shame) $q$ is the probability of failure.

# My Favorite One-Liners: Part 13

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Here’s a story that I’ll tell my students when, for the first time in a semester, I’m about to use a previous theorem to make a major step in proving a theorem.

For example, I may have just finished the proof of $\hbox{Var}(X+Y) = \hbox{Var}(X) + \hbox{Var}(Y)$, where $X$ and $Y$ are independent random variables, and I’m about to prove that $\hbox{Var}(X-Y) = \hbox{Var}(X) + \hbox{Var}(Y)$. While this can be done by starting from scratch and using the definition of variance, the easiest thing to do is to write $\hbox{Var}(X-Y) = \hbox{Var}(X+[-Y]) = \hbox{Var}(X) + \hbox{Var}(-Y)$, thus using the result of the first theorem to prove the next theorem.

And so I have a little story that I tell students about this principle. I think I was 13 when I first heard this one, and obviously it’s stuck with me over the years.

At MIT, there’s a two-part entrance exam to determine who will be the engineers and who will be the mathematicians. For the first part of the exam, students are led one at a time into a kitchen. There’s an empty pot on the floor, a sink, and a stove. The assignment is to boil water. Everyone does exactly the same thing: they fill the pot with water, place it on the stove, and then turn the stove on. Everyone passes.

For the second part of the exam, students are led one at a time again into the kitchen. This time, there’s a pot full of water sitting on the stove. The assignment, once again, is to boil water.
Nearly everyone simply turns on the stove. These students are led off to become engineers. The mathematicians are ones who take the pot off the stove, dump the water into the sink, and place the empty pot on the floor… thereby reducing to the original problem, which had already been solved.

# Engaging students: Completing the square

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course). This student submission comes from my former student Deborah Duddy. Her topic, from Algebra: completing the square.

What interesting word problems using this topic can your students do now?

Applying what is learned in class is vital; in fact, it is a process TEKS that teachers need to use to maximize students’ understanding. “When are we going to use this in real life?” and “Why do we need to know this?” are questions that students ask on a daily basis. Connecting material to the real world helps engage students and develops critical thinking. Describing the path of a ball, how far an item can be tossed in the air, and how to maximize profits for a company are just some examples of how quadratics can be used in the real world. One important event happens during high school: students receive their driver’s license. On their written driver’s test, students must know the distance needed to stop a car at certain speed limits. Using an example like this will be interesting for the students and help connect lesson material and real life.
How could you as a teacher create an activity or project that involves your topic?

To begin class and get students involved with their learning, the class will participate in an activity. Each pair of students will have two different cards, such as (x+2)^2 and x^2+4x+4, or any variation of these problems. They can only look at the (x+2)^2 card. Students will work out the problem on paper. Students will be asked to remember how to find the area of a square and then set up a square with dimensions matching the first card. From there, the pairs will use algebra tiles (after learning what each tile stands for) and attempt to “complete the square”. This activity will be used as an engage and a beginning explore for the students, and it will help students see completing the square geometrically.

How does this topic extend what your students should have learned in previous courses?

Completing the square is another way of solving/factoring an equation. The process of completing the square turns a basic quadratic equation ax^2 + bx + c = 0 into a(x-h)^2 + k = 0, where (h,k) is the vertex of the parabola. This process is therefore very beneficial because it helps students graph a given quadratic equation. In order to find h and k, students should be able to factor, square a term, find a square root, and manipulate the equation. To solve the equation by completing the square, first move the constant term from the left side to the right side. Then take the coefficient of the x-term, divide it by two, and square the result. Add this number to both sides of the equation. Simplifying the left side then gives a perfect square. Finally, solve the remaining equation by taking the square root of both sides and solving for x.
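The procedure just described is mechanical enough to turn into code, which can make a nice connection for computer-science-minded students. A sketch (the function and variable names are mine) that produces the vertex form a(x - h)^2 + k:

```javascript
// Complete the square: ax^2 + bx + c  ->  a(x - h)^2 + k, with vertex (h, k).
function completeSquare(a, b, c) {
  const h = -b / (2 * a);          // negate half the x-coefficient (after factoring out a)
  const k = c - (b * b) / (4 * a); // the constant term that is left over
  return { h, k };
}

// x^2 + 4x + 4 = (x - (-2))^2 + 0, so the vertex is (-2, 0).
console.log(completeSquare(1, 4, 4)); // { h: -2, k: 0 }
```

Expanding a(x - h)^2 + k with these values of h and k recovers ax^2 + bx + c, which is a good exercise to assign alongside the code.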
References: http://www.classzone.com/eservices/home/pdf/student/LA205EBD.pdf

# What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 18

The Riemann Hypothesis (see here, here, and here) is perhaps the most famous (and also most important) unsolved problem in mathematics. Gamma (page 207) provides a way of writing down this conjecture in a form that only uses notation that is commonly taught in high school:

If $\displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \cos(b \ln r) = 0$ and $\displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \sin(b \ln r) = 0$ for some pair of real numbers $a$ and $b$, then $a = \frac{1}{2}$.

As noted in the book, “It seems extraordinary that the most famous unsolved problem in the whole of mathematics can be phrased so that it involves the simplest of mathematical ideas: summation, trigonometry, logarithms, and [square roots].”

When I was researching for my series of posts on conditional convergence, especially examples related to the constant $\gamma$, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before. In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.
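The two series can also be explored numerically. A rough experiment of my own (not from the book): truncating both sums with $a = 1/2$ and $b$ equal to the height of the first nontrivial zero of the zeta function (about 14.1347) gives values much closer to zero than at a generic $b$, consistent with the conjecture's connection to the zeros of the alternating zeta function. Convergence is slow, so this is only suggestive, not a computation of the limit.

```javascript
// Partial sums of  sum_{r=1}^{N} (-1)^r r^(-a) cos(b ln r)  and the sine analogue.
function etaSums(a, b, N) {
  let c = 0, s = 0;
  for (let r = 1; r <= N; r++) {
    const t = (r % 2 === 0 ? 1 : -1) / Math.pow(r, a);
    c += t * Math.cos(b * Math.log(r));
    s += t * Math.sin(b * Math.log(r));
  }
  return [c, s];
}

// b = 14.1347... is the imaginary part of the first nontrivial zeta zero.
const atZero  = etaSums(0.5, 14.134725141734693, 200000);
const generic = etaSums(0.5, 10, 200000);
console.log(Math.hypot(...atZero), Math.hypot(...generic)); // small vs. order 1
```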
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/460/1/n/
# Properties

Label: 460.1.n
Level: $460$
Weight: $1$
Character orbit: 460.n
Rep. character: $\chi_{460}(39,\cdot)$
Character field: $\Q(\zeta_{22})$
Dimension: $20$
Newform subspaces: $2$
Sturm bound: $72$
Trace bound: $2$

## Defining parameters

Level: $N = 460 = 2^{2} \cdot 5 \cdot 23$
Weight: $k = 1$
Character orbit: $[\chi] =$ 460.n (of order $22$ and degree $10$)
Character conductor: $\operatorname{cond}(\chi) = 460$
Character field: $\Q(\zeta_{22})$
Newform subspaces: $2$
Sturm bound: $72$
Trace bound: $2$

## Dimensions

The following table gives the dimensions of various subspaces of $M_{1}(460, [\chi])$.

|                   | Total | New | Old |
|-------------------|-------|-----|-----|
| Modular forms     | 60    | 60  | 0   |
| Cusp forms        | 20    | 20  | 0   |
| Eisenstein series | 40    | 40  | 0   |

The following table gives the dimensions of subspaces with specified projective image type.

|           | $D_n$ | $A_4$ | $S_4$ | $A_5$ |
|-----------|-------|-------|-------|-------|
| Dimension | 20    | 0     | 0     | 0     |

## Trace form

$20 q - 2 q^{4} - 2 q^{5} - 4 q^{6} - 6 q^{9} + O(q^{10})$

$20 q - 2 q^{4} - 2 q^{5} - 4 q^{6} - 6 q^{9} - 4 q^{14} - 2 q^{16} - 2 q^{20} - 8 q^{21} - 4 q^{24} - 2 q^{25} - 4 q^{29} - 4 q^{30} - 6 q^{36} - 4 q^{41} + 16 q^{45} - 2 q^{46} + 16 q^{49} + 14 q^{54} + 18 q^{56} - 4 q^{61} - 2 q^{64} - 4 q^{69} - 4 q^{70} - 2 q^{80} - 10 q^{81} + 14 q^{84} + 18 q^{86} - 4 q^{89} - 4 q^{94} - 4 q^{96} + O(q^{100})$

## Decomposition of $S_{1}^{\mathrm{new}}(460, [\chi])$ into newform subspaces

| Label | Dim | $A$ | Field | Image | CM | RM | $a_{2}$ | $a_{3}$ | $a_{5}$ | $a_{7}$ | $q$-expansion |
|-------|-----|-----|-------|-------|----|----|---------|---------|---------|---------|---------------|
| 460.1.n.a | $10$ | $0.230$ | $\Q(\zeta_{22})$ | $D_{11}$ | $\Q(\sqrt{-5})$ | None | $-1$ | $-2$ | $-1$ | $9$ | $q+\zeta_{22}^{10}q^{2}+(\zeta_{22}^{2}+\zeta_{22}^{4})q^{3}-\zeta_{22}^{9}q^{4}+\cdots$ |
| 460.1.n.b | $10$ | $0.230$ | $\Q(\zeta_{22})$ | $D_{11}$ | $\Q(\sqrt{-5})$ | None | $1$ | $2$ | $-1$ | $-9$ | $q-\zeta_{22}^{10}q^{2}+(-\zeta_{22}^{2}-\zeta_{22}^{4})q^{3}+\cdots$ |
http://math.stackexchange.com/questions/255372/what-is-a-complex-name?answertab=votes
# What is a Complex Name?

On Page 38, Elementary Set Theory with a Universal Set, Randall Holmes (2012), which can be found here.

We give a semi-formal definition of complex names (this is a variation on Bertrand Russell's Theory of Descriptions):

Definition. A sentence $\psi [(\text{the }y\text{ such that }\phi)/x]$ is defined as \begin{align*}&\big((\text{there is exactly one }y\text{ such that }\phi)\text{ implies }(\text{for all }y, \phi\text{ implies }\psi[y/x])\big)\\&\text{ and }\\&\Big(\big(\text{not}(\text{there is exactly one }y\text{ such that }\phi)\big)\text{ implies }\\&\qquad\big(\text{for all }x,(x\text{ is the empty set})\text{ implies }\psi\big)\Big)\;.\end{align*} Renaming of bound variables may be needed.

The definition of the form "$\phi[y/x]$" is:

Definition. When $\phi$ is a sentence and $y$ is a variable, we define $\phi[y/x]$ as the result of substituting $y$ for $x$ throughout $\phi$, but only in case there are no bound occurrences of $x$ or $y$ in $\phi$. (We note for later, when we allow the construction of complex names $a$ which might contain bound variables, that $\phi[a/x]$ is only defined if no bound variable of $a$ occurs in $\phi$ (free or bound) and vice versa.)

I can't understand why $\psi [(\text{the }y\text{ such that }\phi)/x]$ is defined as it is. In particular, "((not(there is exactly one $y$ such that $\phi$)) implies (for all $x$, ($x$ is the empty set) implies $\psi$))" seems to come out of nowhere.

Feel free to retag this question; I'm not sure whether other disciplines, like elementary set theory or linguistics, are more closely related to it.

- This might help en.wikipedia.org/wiki/On_Denoting – RParadox Dec 10 '12 at 11:33

@RParadox: Thank you for your link. The problem is that it's equally elusive. – Metta World Peace Dec 10 '12 at 11:46

Good question. A guess coming up.

General issue: How should we regard expressions of the form "the $\varphi$" or better "the $y$ such that $\varphi(y)$".
Option one: as mere "syntax sugar" that can be parsed away. This is Russell's line. "The $y$ such that $\varphi(y)$" isn't really a complex name, but vanishes on analysis, because (i) $\psi$(the $y$ such that $\varphi(y)$) is equivalent to (ii) there is at least one thing which is $\varphi$ and at most one thing which is $\varphi$ and whatever is $\varphi$ is $\psi$. Option two: descriptions are complex names. "The $y$ such that $\varphi(y)$" is a complex name of the one and only one thing that is $\varphi$ if there is such a thing, and takes a default value, say the empty set, if there isn't. This was Frege's line. Both treatments are logically workable. Or we can mix them. Which seems to be what Holmes is doing here. We do parsing away (a la Russell): but treat the cases where there is and where there isn't a unique $\varphi$ differently, in effect supplying a default value when there isn't (a la Frege). So, roughly speaking, $\psi$(the $y$ such that $\varphi(y)$) says that whatever is $\varphi$ is $\psi$ if there is a unique $\varphi$, but becomes [equivalent to] $\psi(\emptyset)$ when there is no unique $\varphi$. But I am making this up as I go along, you understand: caveat lector! - Thank you for your excellent answer. But as a layman, I've no idea how these two competing treatments are "logically workable", which makes me unable to fully understand your argument. Could you please recommend something with a textbook treatment of Russell's and Frege's approaches at an introductory level? – Metta World Peace Dec 10 '12 at 22:08 Metta World Peace, I have tried to do exactly that, i.e. describe the idea of logical analysis. The best thing is to read Frege first, because Russell uses some of his concepts. 
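The mixed treatment described in the answer above can be made concrete over a finite domain. Here is a toy sketch (all names are mine, and the default value merely stands in for the empty set): if there is exactly one witness of the predicate, the description denotes it; otherwise it falls back to the default, as in Holmes's second clause.

```javascript
// Toy model over a finite domain (names are mine, not Holmes's notation).
const EMPTY_SET = Symbol('empty set'); // stand-in for the default value

// "the y such that phi": the unique witness if there is exactly one, else the default.
function the(phi, domain, dflt = EMPTY_SET) {
  const witnesses = domain.filter(phi);
  return witnesses.length === 1 ? witnesses[0] : dflt;
}

// psi[(the y such that phi)/x] under the default-value reading.
const holds = (psi, phi, domain) => psi(the(phi, domain));

const D = [1, 2, 3];
console.log(the(y => y > 2, D));                 // 3 (unique witness)
console.log(the(y => y > 1, D) === EMPTY_SET);   // true (no unique witness)
console.log(holds(x => x === 3, y => y > 2, D)); // true
```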
The most elementary description is his talk on function and concept, see en.wikipedia.org/wiki/Function_and_Concept There are perhaps a hundred textbooks on Frege and Russell, but the best known are two major books by Michael Dummett (Philosophy of Language and Philosophy of Mathematics). – RParadox Dec 11 '12 at 9:59

Usually plato.stanford.edu is a good resource, see plato.stanford.edu/entries/descriptions for example. – RParadox Dec 11 '12 at 10:02

@RParadox: Thank you for your suggestions. – Metta World Peace Dec 11 '12 at 17:13

The way Holmes presents this matter is not clear at all. How are the theory of descriptions and logic related? A good example is the word "and". We use "and" in the English language, and there is the symbolic-logic symbol "$\wedge$". Consider the mapping f from "and" to "$\wedge$" and g from "$\wedge$" to "and".

$f:$ "and" $\rightarrow "\wedge"$
$g: "\wedge" \rightarrow$ "and"

Now, Frege and Russell introduced symbolic logic so that we can clearly distinguish between "and" and "$\wedge$", because they are not at all the same. Consider the expression "two and two is four". The expression is best translated as 2+2=4. Translation really means taking the first expression and putting it into a proper system of logic. For instance, here the word "and" was translated into the obvious symbol for addition "+", and not "$\wedge$", although a naive translator would not have known what "and" should stand for. This matter is not at all trivial, and is not linguistics but logic proper (philosophy if you will). We want to know how the symbols "+" and "$\wedge$" operate. This study is what we call logic in the first place. For instance, a few days ago, people downvoted my elementary proof of logic, probably because they thought real mathematicians use lots of strange symbols. However, when we are concerned with logic, we can't be so presumptuous. We can't simply throw around symbols and hope that it will all make sense in the end.
Where this kind of analysis comes from is thinking about propositions. What does it mean to talk about anything? Well, if we talk about a thing, we are referring to its existence or non-existence. Which is why a mathematical expression will start with the phrase: $\exists x ...$ or $\nexists x ...$

In his theory of denoting, Russell explains why every statement refers to all other things:

1. Definition of all: $C(E) \leftrightarrow \forall x C(x)$
2. Definition of nothing: $C(N) \leftrightarrow \forall x \neg C(x)$
3. Definition of exists: $C(S)\leftrightarrow \neg \forall x \neg C(x)$

What this achieves is that it shows a certain map, as explained above, not for "and" but for the expression "exists". So we could say we have explained the map $h:$ "exists x" $\rightarrow "\exists x"$, although there are some remaining issues.

What one should realize is that all of mathematics is essentially built on this theory, although very few mathematicians realize it. Holmes's sentence is an awkward variant of this theory of descriptions. You arrive at it by applying the given definitions. A much better way to understand the operations is to look at the axioms, see metamath: PL, and play around with them.

- "Holmes's sentence is an awkward variant of this theory of descriptions. You arrive at it by applying the given definitions." A variant, but plainly not arrived at by applying Russell's definitions. Which is why the OP asked the question. – Peter Smith Dec 10 '12 at 19:32

The OP asked the question because the problem is the definition, and the exposition does not explain anything properly. What does it mean to apply predicate logic, and what do the notations mean? Shouldn't a book on logic be clear, so that everyone with common sense should be able to follow it? The OP described the analysis of the expression as linguistics (which is actually logic). Perhaps everyone describing logic in this way should call it something else.
The m-logic: reasoning exclusively for mathematicians. Everyone else might want to learn logic. – RParadox Dec 10 '12 at 20:28 The OP did not describe the analysis of the expression as linguistics. The OP suggested that the discipline of linguistics might be more closely related to the question than logic is. A book on any subject should ideally be clear. Clarity, however, does not necessarily mean that ‘everyone with common sense should be able to follow it’: common sense is no substitute for an adequate background. – Brian M. Scott Dec 10 '12 at 22:11 This could be called the elitist belief of mathematicians, that their science has to be some kind of black art. However, often, when pressed with a difficult philosophical question, they will not have an answer. Why do we even write $\exists x: \varphi(x)$, what is a function really? and so on. The questions and answers on this site are very representative of these notions. In the end, all these notions depend on certain beliefs. The very idea of logic, is that there is a process of reasoning which can be easily followed. – RParadox Dec 11 '12 at 9:50 If I have a proof, it is expected from me that people who are in the field can understand it and verify it. But if we are talking about logic, there is no such knowledge which can be assumed. Using abstract algebra to prove an elementary theorem in logic is just non-sensical. We are talking about the most fundamental notions, such as those in predicate logic. And anyone who uses complicated language to express elementary concepts is misusing the language. This is certainly true in this case. – RParadox Dec 11 '12 at 9:54
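Russell's three definitions quoted in the answer above transcribe directly over a finite domain, treating a predicate $C$ as a boolean function. A small illustrative sketch of mine:

```javascript
// Finite-domain readings of Russell's three definitions.
const all     = (C, domain) => domain.every(C);           // C(E): for all x, C(x)
const nothing = (C, domain) => domain.every(x => !C(x));  // C(N): for all x, not C(x)
const exists  = (C, domain) => !domain.every(x => !C(x)); // C(S): not-"nothing"

const D = [1, 2, 3, 4];
console.log(all(x => x > 0, D));      // true
console.log(nothing(x => x > 4, D));  // true
console.log(exists(x => x === 3, D)); // true
console.log(exists(x => x > 4, D));   // false
```

Note how `exists` is not primitive here: it is literally the negation of `nothing`, which is the point of definition 3.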
https://villavu.com/forum/showthread.php?s=1fcae759762e028dd8fc890dc7191161&p=1349927
# Thread: Running multiple accounts and SIMBA clients from a single script

1. SRL Junior Member
Join Date Aug 2012
Location The Dark Tower
Posts 154
Mentioned 5 Post(s)
Quoted 56 Post(s)

## Running multiple accounts and SIMBA clients from a single script

Is it possible to run two SMART windows and accounts (or more, I suppose) from a single script? Would I just run a script within a script (is this possible)?

2. SRL Junior Member
Join Date Dec 2011
Posts 266
Mentioned 16 Post(s)
Quoted 185 Post(s)

You can do one of two things. You can either run a separate Simba for every SMART, up to a maximum of 4, or you can File>New to open another tab within a single Simba, up to a maximum of 4. If you want more than 4 accounts running at once, see here: https://villavu.com/forum/showthread...112&highlight=

3. Originally Posted by Gunner
You can do one of two things. You can either run a separate Simba for every SMART, up to a maximum of 4, or you can File>New to open another tab within a single Simba, up to a maximum of 4. If you want more than 4 accounts running at once, see here: https://villavu.com/forum/showthread...112&highlight=

I think he wants a single script to operate two SMART clients. I'm not sure if that's possible -- can one instance of Simba pair to multiple instances of SMART?

4. SRL Junior Member
Join Date Aug 2012
Location The Dark Tower
Posts 154
Mentioned 5 Post(s)
Quoted 56 Post(s)

Originally Posted by KeepBotting
I think he wants a single script to operate two SMART clients. I'm not sure if that's possible -- can one instance of Simba pair to multiple instances of SMART?

Yes, this is what I want to know. Or, if you can run a script within a script to do this. For example, let's say I were to make an NMZ script that trained my main, and then I had it open another instance of SMART to then log in to another booster account that would then boost me into NMZ and then log out while the script continued to function in NMZ training my main.

5.
I ran way more than just 4 clients on 1 PC, so I don't think that's up-to-date... I don't think one single script can pair to multiple SMARTs unless you do something like

Simba Code:
```
repeat
  inc(count);
  SetupSRL;
until count >= 5;
```

or something

Edit: in the picture, none of them are in safe mode

6. [OFFTOPIC] @Pakyakkistan for some odd reason, I had a dream and you were in it last night, and no I am not trolling. I don't play RS anymore, but had a dream I was playing and I met you at GE and recognized your name and all I had to say was "SRL"; then we generated a conversation (and your username being: Pakyakkistan). Ah, such a relief since I got that out of my system. [OFFTOPIC]

7. SRL Junior Member
Join Date Aug 2012
Location The Dark Tower
Posts 154
Mentioned 5 Post(s)
Quoted 56 Post(s)

Originally Posted by P1nky
[OFFTOPIC] @Pakyakkistan for some odd reason, I had a dream and you were in it last night, and no I am not trolling. I don't play RS anymore, but had a dream I was playing and I met you at GE and recognized your name and all I had to say was "SRL"; then we generated a conversation (and your username being: Pakyakkistan). Ah, such a relief since I got that out of my system. [OFFTOPIC]

I feel loved; thank you for sharing.

[ONTOPIC] So, I think many have been confused by what I asked originally, and I want to restate it. Basically, I want to be able to run a SINGLE script that would be able to control MORE THAN a single SMART client and essentially run TWO or MORE accounts from a single script. However, if this is NOT possible, I would like to know if it is possible to call a script within a script, such as...

Simba Code:
```
procedure RunOtherAccount;
begin
  RunMuleAccount;
  Inc(Mule);
  if Mule = 1 then
    TradeMule;
  Mule := 0;
end;
```

**Note that this is not at all what I am intending to do, but one of the simplest ways I could get across what I am looking to see if I can do**

OR, would I be able to run TWO SEPARATE scripts that would be able to communicate in some way?
Editing and reading a txt file possibly?
https://developer.mozilla.org/en-US/docs/Web/API/FontFace
# FontFace

This is an experimental technology. Check the Browser compatibility table carefully before using this in production.

The `FontFace` interface represents a single usable font face. It allows control of the source of the font face, being a URL to an external resource, or a buffer; it also allows control of when the font face is loaded and its current status.

## Constructor

`FontFace()`
Constructs and returns a new `FontFace` object, built from an external resource described by a URL or from an `ArrayBuffer`.

## Properties

This interface doesn't inherit any property.

`FontFace.display`
Is a `CSSOMString` that determines how a font face is displayed based on whether and when it is downloaded and ready to use.

`FontFace.family`
Is a `CSSOMString` that contains the family of the font. It is equivalent to the `font-family` descriptor.

`FontFace.style`
Is a `CSSOMString` that contains the style of the font. It is equivalent to the `font-style` descriptor.

`FontFace.weight`
Is a `CSSOMString` that contains the weight of the font. It is equivalent to the `font-weight` descriptor.

`FontFace.stretch`
Is a `CSSOMString` that contains how the font stretches. It is equivalent to the `font-stretch` descriptor.

`FontFace.unicodeRange`
Is a `CSSOMString` that contains the range of code points encompassed by the font. It is equivalent to the `unicode-range` descriptor.

`FontFace.variant`
Is a `CSSOMString` that contains the variant of the font. It is equivalent to the `font-variant` descriptor.

`FontFace.featureSettings`
Is a `CSSOMString` that contains the features of the font. It is equivalent to the `font-feature-settings` descriptor.

`FontFace.status` Read only
Returns an enumerated value indicating the status of the font. It can be one of the following: `"unloaded"`, `"loading"`, `"loaded"`, or `"error"`.
`FontFace.loaded` Read only
Returns a `Promise` to a `FontFace` that fulfills when the font is completely loaded and rejects when an error happens.

## Methods

This interface doesn't inherit any method.

`FontFace.load()`
Loads the font, returning a `Promise` to a `FontFace` that fulfills when the font is completely loaded and rejects when an error happens.

## Specifications

| Specification | Status | Comment |
| --- | --- | --- |
| CSS Font Loading Module Level 3, the definition of 'FontFaceSet' in that specification | Working Draft | Initial definition |

## Browser compatibility

| Feature | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari |
| --- | --- | --- | --- | --- | --- | --- |
| Basic support | 35 | ? | 41 | ? | ? | ? |
| `FontFace()` constructor | 35 | ? | 41 | ? | ? | ? |
| `display` | 60 | ? | No | ? | 47 | No |
| `family` | Yes | ? | ? | ? | Yes | ? |
| `style` | ? | ? | ? | ? | ? | ? |
| `weight` | ? | ? | ? | ? | ? | ? |
| `stretch` | ? | ? | ? | ? | ? | ? |
| `unicodeRange` | ? | ? | ? | ? | ? | ? |
| `variant` | ? | ? | ? | ? | ? | ? |
| `featureSettings` | ? | ? | ? | ? | ? | ? |
| `status` | ? | ? | ? | ? | ? | ? |
| `loaded` | ? | ? | ? | ? | ? | ? |
| `load` | ? | ? | ? | ? | ? | ? |

| Feature | Android webview | Chrome for Android | Edge mobile | Firefox for Android | Opera Android | iOS Safari | Samsung Internet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Basic support | 35 | 35 | ? | 41 | ? | ? | ? |
| `FontFace()` constructor | 35 | 35 | ? | 41 | ? | ? | ? |
| `display` | 60 | 60 | ? | No | 47 | No | ? |
| `family` | Yes | Yes | ? | ? | Yes | ? | ? |
| `style` | ? | ? | ? | ? | ? | ? | ? |
| `weight` | ? | ? | ? | ? | ? | ? | ? |
| `stretch` | ? | ? | ? | ? | ? | ? | ? |
| `unicodeRange` | ? | ? | ? | ? | ? | ? | ? |
| `variant` | ? | ? | ? | ? | ? | ? | ? |
| `featureSettings` | ? | ? | ? | ? | ? | ? | ? |
| `status` | ? | ? | ? | ? | ? | ? | ? |
| `loaded` | ? | ? | ? | ? | ? | ? | ? |
| `load` | ? | ? | ? | ? | ? | ? | ? |
2018-05-26 17:49:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664048552513123, "perplexity": 2264.5002853964065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867841.63/warc/CC-MAIN-20180526170654-20180526190654-00390.warc.gz"}
http://math.stackexchange.com/questions/166836/simplifying-to-a-certain-expression-structure/187279
# Simplifying to a certain expression structure

I have this expression: $$4n^2-n+(8(n+1)-5)$$ And I know it is equivalent to this: $$4(n+1)^2-(n+1)$$ I need to simplify my expression to get the same structure as that one. However, no matter what I try, I don't end up with that structure. My question is, well, how to reach that. But truly, the real problem is that I seem to lack this sense of "knowing which method to use to simplify to get the structure I want". Is there some kind of rule or guide about this? Or is it all about practice and experience?

Edit: If anyone was curious, I ended up with this: $$4n^2+7n+3$$ Yes, I did multiplications and additions. Couldn't find a common factor (as far as I can tell) or any other trick.

-

A useful technique when your answer is a complicated expression is working backwards. Try expanding $4(n+1)^2−(n+1)$ and see if you end up with $4n^2+7n+3$. If you do, see if you can follow the steps back up the chain to finish the problem. – Eugene Shvarts Jul 5 '12 at 2:59

To factor $\rm\:f = 4n^2+7n+3\:$ note $\rm\,a\,x^2 + (a\!+\!c)\,x + c = ax\,(x+1) + c\,(x+1) = (ax+c)\,(x+1).\,$ Alternatively, as in your prior question, apply the AC-method as follows: $$\begin{eqnarray}\rm 4f\, &=&\rm\ \ 16n^2 +\ 7\cdot 4n\, +\, 4\cdot 3 \\ &=&\rm\ (4n)^2 + 7\,(4n)\, + 4\cdot 3 \\ &=&\rm\ \ N^2\ +\ 7\, N\ +\ 12\quad for\quad N = 4n \\ &=&\rm\ (N\ +\ 4)\,(N\ +\ 3) \\ &=&\rm\ (4n\, +\, 4)\,(4n\, +\, 3) \\ \rm f\, &=&\rm\ (\ n\ +\ 1)\,(4n\, +\, 3) \end{eqnarray}$$

-

With the AC-method, I'm still not sure how I would reach $$4(n+1)^2-(n+1)$$ – Zol Tun Kul Jul 5 '12 at 4:13

@Omega Hint $\rm\ 4n+3 = 4(n\!+\!1 - 1)+3 = 4(n\!+\!1) -1.\,$ Now multiply that by $\rm\,n\!+\!1.\qquad$ – Bill Dubuque Jul 5 '12 at 4:21

Maybe we can factor your expression $4n^2+7n+3$. Fairly quickly we get $(n+1)(4n+3)$. That seems nice enough already. But one can observe that $4n+3=4(n+1)-1$.
Then your expression becomes $(n+1)[4(n+1)-1]$, which can also be written as $4(n+1)^2-(n+1)$.

One reason that we know that $4n^2+7n+3$ will factor nicely is that the polynomial $4x^2+7x+3$ has the root $x=-1$. Thus $x+1$ must divide the polynomial. Do the division, using the ordinary division process for polynomials, which is much like the division process for integers. The quotient turns out to be $4x+3$.

Remark: It is awkward to say anything in general. After working with many particular examples, one accumulates a set of tools that often turn out to be useful in new settings. There is a general procedure for expressing a polynomial $P(x)$ as a polynomial $Q(x-a)$, where $a$ is any given number. The easiest description involves the calculus, but one can also give a purely algebraic description of the process.

-

Since the first term is $4(n+1)^2$, try to extract this term from $4n^2+7n+3$ by addition and subtraction of terms, e.g. $4n^2+7n+3=4n^2+8n+4 -n-1$. This way you get the first three terms of the expression giving $4(n+1)^2$, and the term left is $-(n+1)$, which is of course the second term required. Therefore, $4n^2+7n+3=4(n+1)^2-(n+1)$.

-

Try doing it directly, like this: $$4n^2-n+(8(n+1)-5) = 4n^2+8(n+1)-n-5=4n^2-4+8(n+1)-n-1$$ Then note that $4n^2-4=4(n^2-1)=4(n+1)(n-1)$ so that the expression becomes: $$4(n+1)(n-1)+8(n+1)-(n+1) = 4(n+1)(n-1+2)-(n+1) = 4(n+1)^2-(n+1)$$ where I have extracted the common factor $4(n+1)$ from the first two terms. The strategy for doing this was to isolate the $(n+1)$ term at the end of the given target expression and to work with the rest.

-

I don't see where this needs a trick. There is nothing wrong with simplifying each expression and calling them equal if, and only if, they simplify the same. Here is the un-tricky computation.
\begin{align} 4 n^2 - n + (8(n + 1) - 5) &= 4 n^2 - n + ((8 n + 8) - 5) \\ &=4 n^2 - n + (8 n - 3) \\ &=4 n^2 + 7 n + 3 \end{align} \begin{align} 4(n + 1)^2 - (n + 1) &= 4(n^2 + 2 n + 1) - (n + 1) \\ &= 4 n^2 + 8 n + 4 - (n + 1) \\ &= 4 n^2 + 7 n + 3 \end{align} If you need a name for the principle being used, it is simply this: two polynomial expressions are equal for every real (or complex) number exactly when they expand to the same standard form, i.e. when their coefficients agree. Since both expressions above expand to $4n^2+7n+3$, they are equal for all $n$. -
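For readers who want to double-check the algebra mechanically, here is a short script (not part of the original answers) that verifies the identity over a range of integers:

```python
# Verify that 4n^2 - n + (8(n+1) - 5) equals 4(n+1)^2 - (n+1),
# and that both simplify to 4n^2 + 7n + 3.
def lhs(n):
    return 4 * n**2 - n + (8 * (n + 1) - 5)

def rhs(n):
    return 4 * (n + 1)**2 - (n + 1)

for n in range(-20, 21):
    assert lhs(n) == rhs(n) == 4 * n**2 + 7 * n + 3
```

This checks only finitely many points, but since both sides are degree-2 polynomials, agreement at any three distinct points already forces equality.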
2014-07-25 18:51:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905341267585754, "perplexity": 214.54987170962968}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894473.81/warc/CC-MAIN-20140722025814-00241-ip-10-33-131-23.ec2.internal.warc.gz"}
http://cclib.github.io/methods.html
# Calculation methods

The following methods in cclib allow further analysis of calculation output.

## C squared population analysis (CSPA)

CSPA can be used to determine and interpret the electron density of a molecule. The contribution of the a-th atomic orbital to the i-th molecular orbital can be written in terms of the molecular orbital coefficients:

$\Phi_{ai} = \frac{c^2_{ai}}{\sum_k c^2_{ki}}$

The CSPA class available from cclib.method performs C-squared population analysis and can be used as follows:

from cclib.io import ccread
from cclib.method import CSPA

data = ccread("mycalc.out")
m = CSPA(data)
m.calculate()

After the calculate() method is called, the following attributes are available:

• aoresults is a rank-3 NumPy array with spin, molecular orbital, and atomic/fragment orbitals as the axes (aoresults[0][45][0] gives the contribution of the 1st atomic/fragment orbital to the 46th alpha/restricted molecular orbital)
• fragresults is a rank-3 NumPy array with spin, molecular orbital, and atoms/fragments as the axes (fragresults[1][23][4] gives the contribution of the 5th atom/fragment to the 24th beta molecular orbital)
• fragcharges is a rank-1 NumPy array with the number of (partial) electrons in each atom/fragment (fragcharges[2] gives the number of electrons on the 3rd atom)

### Custom fragments

Calling the calculate method without an argument treats each atom as a fragment in the population analysis. An optional argument can be passed - a list of lists - containing the atomic orbital numbers to be included in each fragment. Calling with this additional argument is useful if one is more interested in the contributions of certain orbitals, such as metal d, to the molecular orbitals.
For example:

from cclib.io import ccread
from cclib.method import CSPA

data = ccread("mycalc.out")
m = CSPA(data)
m.calculate([[0, 1, 2, 3, 4], [5, 6], [7, 8, 9]])
# fragment one is made from basis functions 0 - 4
# fragment two is made from basis functions 5 & 6
# fragment three is made from basis functions 7 - 9

### Custom progress

The CSPA class also can take a progress class as an argument so that the progress of the calculation can be monitored:

from cclib.method import CSPA
from cclib.parser import Gaussian
from cclib.progress import TextProgress
import logging

progress = TextProgress()
p = Gaussian("mycalc.out", logging.ERROR)
d = p.parse(progress)
m = CSPA(d, progress, logging.ERROR)
m.calculate()

## Mulliken population analysis (MPA)

MPA can be used to determine and interpret the electron density of a molecule. The contribution of the a-th atomic orbital to the i-th molecular orbital in this method is written in terms of the molecular orbital coefficients, c, and the overlap matrix, S:

$\Phi_{ai} = \sum_b c_{ai} c_{bi} S_{ab}$

The MPA class available from cclib.method performs Mulliken population analysis and can be used as follows:

import sys

from cclib.method import MPA
from cclib.parser import ccopen

d = ccopen(sys.argv[1]).parse()
m = MPA(d)
m.calculate()

After the calculate() method is called, the following attributes are available:

• aoresults: a three dimensional array with spin, molecular orbital, and atomic orbitals as the axes, so that aoresults[0][45][0] gives the contribution of the 1st atomic orbital to the 46th alpha/restricted molecular orbital,
• fragresults: a three dimensional array with spin, molecular orbital, and atoms as the axes, so that fragresults[1][23][4] gives the contribution of the 5th fragment orbitals to the 24th beta molecular orbital,
• fragcharges: a vector with the number of (partial) electrons in each fragment, so that fragcharges[2] gives the number of electrons in the 3rd fragment.
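As a toy illustration of the Mulliken partition formula above (two basis functions, two MOs; all numbers are made up for illustration and are not cclib output):

```python
# Mulliken contribution of AO a to MO i: Phi_ai = sum_b c[a][i] * c[b][i] * S[a][b]
c = [[0.6, 0.8],     # MO coefficients c[a][i] (toy values)
     [0.5, -0.7]]
S = [[1.0, 0.3],     # overlap matrix (toy values)
     [0.3, 1.0]]
nbasis = nmo = 2

phi = [[sum(c[a][i] * c[b][i] * S[a][b] for b in range(nbasis))
        for i in range(nmo)]
       for a in range(nbasis)]

# summing the contributions over all AOs recovers the norm c_i^T S c_i of each MO
norms = [sum(phi[a][i] for a in range(nbasis)) for i in range(nmo)]
```

This makes visible why the Mulliken contributions of one MO sum to its norm (1 for an S-normalized MO), whereas CSPA forces the sum to 1 by construction.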
### Custom fragments

The calculate method chooses atoms as the fragments by default, and optionally accepts a list of lists containing the atomic orbital numbers (e.g. [[0, 1, 2], [3, 4, 5, 6], ...]) of arbitrary fragments. Calling it in this way is useful if one is more interested in the contributions of groups of atoms or even certain orbitals or orbital groups, such as metal d, to the molecular orbitals. In this case, fragresults and fragcharges reflect the chosen groups of atomic orbitals instead of atoms.

### Custom progress

The Mulliken class also can take a progress class as an argument so that the progress of the calculation can be monitored:

from cclib.method import MPA
from cclib.parser import ccopen
from cclib.progress import TextProgress
import logging

progress = TextProgress()
d = ccopen("mycalc.out", logging.ERROR).parse(progress)
m = MPA(d, progress, logging.ERROR)
m.calculate()

## Löwdin Population Analysis

The LPA class available from cclib.method performs Löwdin population analysis and can be used as follows:

import sys

from cclib.method import LPA
from cclib.parser import ccopen

d = ccopen(sys.argv[1]).parse()
m = LPA(d)
m.calculate()

## Density Matrix calculation

The Density class from cclib.method can be used to calculate the density matrix:

from cclib.parser import ccopen
from cclib.method import Density

parser = ccopen("myfile.out")
data = parser.parse()
d = Density(data)
d.calculate()

After calculate() is called, the density attribute is available. It is simply a NumPy array with three axes. The first axis is for the spin contributions, and the second and third axes are for the density matrix, which follows the standard definition.
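As a sketch of that standard definition, D_ab = sum_i n_i c_ai c_bi for occupation numbers n_i, with toy orthonormal MO coefficients (the values are illustrative only, not cclib output):

```python
c = [[0.8, 0.6],     # MO coefficients c[a][i]; the two columns are orthonormal
     [0.6, -0.8]]
occ = [2.0, 0.0]     # one doubly occupied MO, one virtual
n = 2

# D_ab = sum_i occ[i] * c[a][i] * c[b][i]
D = [[sum(occ[i] * c[a][i] * c[b][i] for i in range(n)) for b in range(n)]
     for a in range(n)]
```

With an orthonormal basis, the trace of D gives back the total number of electrons, which is a quick sanity check on any density matrix.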
## Mayer’s Bond Orders

This method calculates the Mayer’s bond orders for a given molecule:

import sys

from cclib.parser import ccopen
from cclib.method import MBO

parser = ccopen(sys.argv[1])
data = parser.parse()
d = MBO(data)
d.calculate()

After calculate() is called, the fragresults attribute is available, which is a NumPy array of rank 3. The first axis is for contributions of each spin to the MBO, while the second and third correspond to the indices of the atoms.
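To make the quantity concrete, here is a toy restricted example with one basis function per atom (an idealized H2), assuming the usual Mayer definition B_AB = sum over a in A, b in B of (PS)_ab (PS)_ba; the numbers are illustrative, not cclib output:

```python
import math

s = 0.65                                  # toy overlap between the two AOs
S = [[1.0, s], [s, 1.0]]
norm = 1.0 / math.sqrt(2.0 * (1.0 + s))
c = [norm, norm]                          # bonding MO, doubly occupied

# density matrix P_ab = 2 * c_a * c_b, then matrix product PS = P @ S
P = [[2.0 * c[a] * c[b] for b in range(2)] for a in range(2)]
PS = [[sum(P[a][k] * S[k][b] for k in range(2)) for b in range(2)]
      for a in range(2)]

# Mayer bond order between atom 0 and atom 1 (one basis function each)
B_AB = PS[0][1] * PS[1][0]
```

For this idealized single-bond case the bond order comes out as exactly 1, independent of the overlap value s.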
There is also a script provided by cclib that performs the CDA from a command-line: \$ cda molecule.log fragment1.log fragment2.log Charge decomposition analysis of molecule.log MO# d b r ----------------------------- 1: -0.000 -0.000 -0.000 2: -0.000 0.002 0.000 3: -0.001 -0.000 0.000 4: -0.001 -0.026 -0.006 5: -0.006 0.082 0.230 6: -0.040 0.075 0.214 7: 0.001 -0.001 0.022 8: 0.001 -0.001 0.022 9: 0.054 0.342 -0.740 10: 0.087 -0.001 -0.039 11: 0.087 -0.001 -0.039 ------ HOMO - LUMO gap ------ 12: 0.000 0.000 0.000 13: 0.000 0.000 0.000 ...... ### Notes¶ • Only molecular orbitals with non-zero occupancy will have a non-zero value. • The absolute values of the calculated terms have no physical meaning and only the relative magnitudes, especially for the donation and back donation terms, are of any real value (Frenking, et al.) • The atom coordinates in molecules and fragments must be the same, which is usually accomplished with an argument in the QM program (the NoSymm keyword in Gaussian, for instance). • The current implementation has some subtle differences than the code from the Frenking group. The CDA class in cclib follows the formula outlined in one of Frenking’s CDA papers, but contains an extra factor of 2 to give results that agree with those from the original CDA program. It also doesn’t include negligible terms (on the order of 10^-6) that result from overlap between MOs on the same fragment that appears to be included in the Frenking code. Contact atenderholt (at) gmail (dot) com for discussion and more information.
2019-04-20 00:28:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5574440956115723, "perplexity": 4539.712114409021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528433.41/warc/CC-MAIN-20190420000959-20190420022959-00383.warc.gz"}
https://twiki.cern.ch/twiki/bin/view/Main/TestForTrigger?cover=print
## Muon HLT Efficiency Measurement with the "Reference-Trigger" Method

### Goal

This method is very useful to measure the efficiency of complex trigger paths, such as double-muon and dimuon triggers, combinations of triggers, cross-triggers, etc. The description and examples below refer to the specific cases of (combinations of) the double-muon trigger paths HLT_Mu17_Mu8, HLT_Mu17_TkMu8, HLT_Mu22_TkMu8.

### Ingredients

#### Choice of the binning

The measurement can be done in one single bin or in several bins, depending on the use. The bins just have to be large enough that the efficiency computation does not lack statistics.

#### Choice of the reference trigger

The measurement is based on the computation of the complex trigger path efficiency for events passing a reference trigger. The reference trigger should be chosen to have a high efficiency on the events passing the complex trigger, so as not to bias the result. For example, HLT_Mu17 is the best candidate for double-muon triggers like HLT_Mu17_Mu8, HLT_Mu17_TkMu8, HLT_Mu22_TkMu8.

#### Reference trigger efficiency

The reference trigger efficiency on single muons can be measured using the standard Tag and Probe method as documented here. The reference trigger efficiency on a di-muon event can be computed from the single-muon efficiencies as follows: if $\epsilon_1$ and $\epsilon_2$ are the efficiencies of the ref trigger on the two muons, then the efficiency on the di-muon event, $\epsilon_{\mu\mu}$, is computed using this formula (at least one of the two muons fires the trigger):

$\epsilon_{\mu\mu} = 1 - (1 - \epsilon_1)(1 - \epsilon_2)$

#### Complex efficiency after reference trigger

This is the efficiency of the complex trigger path on di-muon events triggered by the ref trigger. This efficiency can be obtained by fitting the Z peak as in the usual Tag and Probe. If the reference trigger is pre-scaled, each event needs to be re-weighted to compensate for the pre-scale.

#### Complex trigger efficiency

The final number is obtained bin by bin by multiplying the efficiency of the reference trigger by the efficiency of the complex trigger after the reference trigger.
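A minimal sketch of the per-event combination of the single-muon reference-trigger efficiencies (assuming the two muons fire the trigger independently; the numbers below are purely illustrative):

```python
def dimuon_eff(eps1, eps2):
    # probability that at least one of the two muons fires the
    # single-muon reference trigger
    return 1.0 - (1.0 - eps1) * (1.0 - eps2)
```

For equal per-muon efficiencies eps this reduces to 2*eps - eps**2, which is always at least as large as eps itself; that is why a single-muon path makes a good, nearly unbiased reference for double-muon paths.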
### Recipe

#### Recipe for HLT_Mu17_Mu8 and HLT_Mu17_TkMu8 paths

This recipe uses the centrally produced TnP trees, which are documented here.

##### Reference trigger efficiency

• The chosen reference trigger is HLT_Mu17. As it is prescaled, the tag muon needs to be matched with HLT_Mu17 in order to measure an efficiency without the prescale.
• The TnP trees consider the case where more than one probe can be used for a tag: the recommended action is to use the probe that makes the pair mass closest to the Z mass, which corresponds to requiring BestZ=1 in the TnP tree.

The efficiencies of the reference trigger for events passing the POG loose working point (Global OR Tracker muon AND PF muon) are here: plots here

##### Complex trigger efficiency after reference trigger

• The complex trigger (or soup of triggers) efficiency after the reference trigger (EffSoup|ref) is also computed using TnP for di-muon events triggered by HLT_Mu17 (corresponding to the cut (tag_Mu17==1 || Mu17==1)).
• Two specific features have to be taken into account:
  • There is an additional cut on the tag: PFchargedIsolation/pt < 0.2 + matched with a L3 object ( pfIsolationR04().sumChargedHadronPt/pt < 0.2 && triggerObjectMatchesByCollection('hltL3MuonCandidates')). This introduces an overestimation of about 0.3% to be taken into account in the systematic error.
  • As the TnP trees are designed to compute efficiencies per muon, a TnP pair can be double counted with the Tag and Probe roles inverted. As the soup efficiency is computed per pair, the double counting needs to be removed by randomly choosing one of the 2 pairs. An example macro for this can be found here.

##### Final trigger efficiency

• This step only consists of multiplying the ref trigger efficiency times the efficiency after the reference trigger.
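The final step above is a bin-by-bin product; as a toy sketch (the efficiency values are purely illustrative):

```python
eff_ref = [0.95, 0.93, 0.90]      # reference-trigger efficiency per bin (toy)
eff_after = [0.98, 0.97, 0.96]    # complex-trigger efficiency after ref, per bin (toy)

eff_total = [r * a for r, a in zip(eff_ref, eff_after)]
```

Since both factors are probabilities, the final efficiency can never exceed the reference-trigger efficiency in any bin, which is a useful consistency check on the result.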
##### Bias from the method :
• Bias from the additional cut on the tag :
• Bias from the difference of L1 seed :
• Bias from the choice of the binning :

This topic: Main > TWikiUsers > HuguesBrun > TestForTrigger Topic revision: r6 - 2013-09-02 - HuguesBrun
2019-12-08 18:48:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7708396911621094, "perplexity": 3659.778299164702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514475.44/warc/CC-MAIN-20191208174645-20191208202645-00268.warc.gz"}
https://rank1neet.com/7-10-2-the-bronsted-lowry-acids-and-bases/
# 7.10.2 The Brönsted-Lowry Acids and Bases

The Danish chemist, Johannes Brönsted, and the English chemist, Thomas M. Lowry, gave a more general definition of acids and bases. According to the Brönsted-Lowry theory, an acid is a substance that is capable of donating a hydrogen ion, H+, and bases are substances capable of accepting a hydrogen ion, H+. In short, acids are proton donors and bases are proton acceptors.

Consider the example of dissolution of NH3 in H2O represented by the following equation:

NH3(aq) + H2O(l) ⇌ NH4+(aq) + OH-(aq)

The basic solution is formed due to the presence of hydroxyl ions. In this reaction, the water molecule acts as a proton donor and the ammonia molecule acts as a proton acceptor; they are thus called the Brönsted-Lowry acid and base, respectively. In the reverse reaction, H+ is transferred from NH4+ to OH-. In this case, NH4+ acts as a Brönsted acid while OH- acts as a Brönsted base.

The acid-base pair that differs only by one proton is called a conjugate acid-base pair. Therefore, OH- is called the conjugate base of the acid H2O, and NH4+ is called the conjugate acid of the base NH3. If a Brönsted acid is a strong acid, then its conjugate base is a weak base, and vice versa. It may be noted that each conjugate acid has one extra proton and each conjugate base has one less proton.

Consider the example of ionization of hydrochloric acid in water:

HCl(aq) + H2O(l) → H3O+(aq) + Cl-(aq)

HCl(aq) acts as an acid by donating a proton to the H2O molecule, which acts as a base. It can be seen in the above equation that water acts as a base because it accepts the proton. The species H3O+ is produced when water accepts a proton from HCl. Therefore, Cl- is the conjugate base of HCl, and HCl is the conjugate acid of the base Cl-. Similarly, H2O is the conjugate base of the acid H3O+, and H3O+ is the conjugate acid of the base H2O.

It is interesting to observe the dual role of water as an acid and a base. In the case of the reaction with HCl, water acts as a base, while in the case of ammonia it acts as an acid by donating a proton.
2020-09-30 10:02:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8162701725959778, "perplexity": 3092.317896348656}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402123173.74/warc/CC-MAIN-20200930075754-20200930105754-00348.warc.gz"}
https://research.chalmers.se/en/publication/94521
# Enumeration of derangements with descents in prescribed positions Journal article, 2009 We enumerate derangements with descents in prescribed positions. A generating function was given by Guo-Niu Han and Guoce Xin in 2007. We give a combinatorial proof of this result, and derive several explicit formulas. To this end, we consider fixed point $\lambda$-coloured permutations, which are easily enumerated. Several formulae regarding these numbers are given, as well as a generalisation of Euler's difference tables. We also prove that except in a trivial special case, if a permutation $\pi$ is chosen uniformly among all permutations on $n$ elements, the events that $\pi$ has descents in a set $S$ of positions, and that $\pi$ is a derangement, are positively correlated. descent fixed point Permutation statistic ## Author #### Niklas Eriksen University of Gothenburg Chalmers, Mathematical Sciences, Mathematics #### Ragnar Freij University of Gothenburg Chalmers, Mathematical Sciences, Mathematics #### Johan Wästlund Chalmers, Mathematical Sciences, Mathematics University of Gothenburg #### Electronic Journal of Combinatorics 1097-1440 (ISSN) 1077-8926 (eISSN) Vol. 16 1 R32- #### Subject Categories Discrete Mathematics
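The positive-correlation statement in the abstract can be checked by brute force for small n (an illustrative script, not from the paper; descent positions are 0-indexed here, so a descent at position i means pi(i) > pi(i+1)):

```python
from itertools import permutations

def has_descents_at(p, S):
    # descent at position i means p[i] > p[i+1]
    return all(p[i] > p[i + 1] for i in S)

def is_derangement(p):
    return all(p[i] != i for i in range(len(p)))

n, S = 5, {1, 3}
perms = list(permutations(range(n)))
N = len(perms)

pA = sum(has_descents_at(p, S) for p in perms) / N                         # descents in S
pB = sum(is_derangement(p) for p in perms) / N                             # derangement
pAB = sum(has_descents_at(p, S) and is_derangement(p) for p in perms) / N  # both

# positively correlated, as claimed (strictly so outside the trivial special case)
assert pAB >= pA * pB
```

Since the descent conditions involve disjoint position pairs, pA is exactly 1/4 here, so the check reduces to comparing the joint count against a quarter of the 44 derangements of 5 elements.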
2019-08-21 00:54:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7722311615943909, "perplexity": 2462.799682674441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00283.warc.gz"}
http://forum.allaboutcircuits.com/threads/how-to-regulate-a-signal-down-to-tens-of-mv.43991/
# How to regulate a signal down to tens of mV?

Discussion in 'General Electronics Chat' started by MS&T, Oct 8, 2010.

1. ### MS&T Thread Starter New Member
Jun 17, 2010 4 0
I need to attenuate a sine wave down to 10mV peak from 500mV peak. We want to use something that doesn't require adjusting, so we don't have to use an attenuator box. We were going to put schottky diodes to ground, but they have a turn-on voltage of 300mV, which would still be too big. Any suggestions? Thanks.

2. ### retched AAC Fanatic!
Dec 5, 2009 5,201 312
Resistors?

Apr 5, 2008 15,517 2,298
Hello, What is the output impedance of the source? What is the input impedance of the circuit to be driven? With these parameters it is possible to calculate an attenuator. Bertus

4. ### cumesoftware Senior Member
Apr 27, 2007 1,330 10
I think a simple resistive voltage divider network is your answer. However, it will use some of the source's driving capability. If your load has less impedance than the output impedance of the source, or if the voltage divider is somewhat influenced by the load, use an op-amp as a unity gain amplifier (or voltage follower) next to the voltage divider. A good rule of thumb: the total impedance of the divider network (from source to ground) should be at least 20 times smaller than the input impedance of your load. However, it should be 20 times greater than your source impedance. This will keep the loading error tighter than 10%. Then again, as Bertus suggested, what is the output impedance of your signal source and what is the input impedance of your load?

5. ### Wendy Moderator
Mar 24, 2008 20,735 2,498
All good suggestions. I would use a variable gain op amp circuit myself, since it would have low impedance out (be careful of loading!), and would isolate the input signal from the output signal. As Bertus suggested, we need more info to be able to show schematics.

6.
### cumesoftware Senior Member
Apr 27, 2007 1,330 10
Just out of curiosity: How can you achieve fractional "gains" by using the feedback network of an opamp? By doing this, shouldn't the gain be always greater than one?

7. ### tom66 Senior Member
May 9, 2009 2,613 213
An inverting op-amp can be wired up for (absolute) gains of less than 1, but can be unstable in such a configuration. Remember: $V_{out} = -V_{in}\times\frac{R_{f}}{R_{in}}$, if you have an Rf ≤ Rin then your gain is between -1 and 0.
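To make the resistive-divider suggestion concrete: 500 mV peak down to 10 mV peak is a 50:1 ratio. With illustrative component values (assuming a load impedance much larger than the divider, per the rule of thumb above), e.g. R_top = 4.9 kΩ over R_bottom = 100 Ω:

```python
def divider_out(v_in, r_top, r_bottom):
    # unloaded resistive divider: V_out = V_in * R_bottom / (R_top + R_bottom)
    return v_in * r_bottom / (r_top + r_bottom)

v_out = divider_out(0.5, 4900.0, 100.0)   # 0.5 V peak in -> 0.010 V peak out
```

The 100 Ω bottom leg also gives the following stage a low, fixed source impedance, which is exactly the "no adjusting required" behavior the original poster asked for.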
2016-10-20 21:28:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5837880373001099, "perplexity": 2251.075034231188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717954.1/warc/CC-MAIN-20161020183837-00049-ip-10-171-6-4.ec2.internal.warc.gz"}
https://byjus.com/arccot-formula/
# Arccot Formula

Every trigonometric function has an inverse, and cotangent is no exception: inverting the function turns cotangent into inverse cotangent. The inverse cotangent is then used to evaluate the angle in a right-angled triangle when the sides opposite to and adjacent to the angle are known. The trigonometric functions are:

• Sine
• Cosine
• Tangent
• Secant
• Cosecant
• Cotangent

The inverses of these trigonometric functions are as follows:

• inverse sine
• inverse cosine
• inverse tangent
• inverse secant
• inverse cosecant
• inverse cotangent

The inverse of cotangent is also called arccot or cot⁻¹.

## The Formula for arccot

Cotangent = Base / Perpendicular

If, in a triangle, the base adjacent to angle A is 1 and the perpendicular side is √3, then cot A = 1/√3, so cot⁻¹(1/√3) = A = 60°.
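As a quick numeric check of the example, here is a minimal arccot helper (my own sketch, using Python's `math` module; `atan2(1, x)` yields the conventional range of 0° to 180°):

```python
import math

def arccot(x):
    """Inverse cotangent: the angle A with cot(A) = x, in the range (0, pi)."""
    return math.atan2(1.0, x)

A = math.degrees(arccot(1 / math.sqrt(3)))
print(round(A))   # 60, matching cot^-1(1/sqrt(3)) = 60 degrees
```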
http://apm.bplaced.net/w/index.php?title=It%27s_not_quantum_mechanical
# Nanomechanics is barely mechanical quantum mechanics

The three parameters that can be used to get something to behave quantum mechanically.

# Math

Let us define "quantumness" as the ratio of the energy quantisation (the minimum allowed energy steps) to the average thermal energy in a single degree of freedom (equivalently, the logarithm of the Boltzmann factor):

Quantumness: $Q = \frac{\Delta E}{E_T}$

First we'll need the thermal energy:

Equipartitioning: $E_T = \frac{1}{2}k_BT \quad$

The size of the energy quanta $\Delta E$ depends on the system under consideration. To see quantum behaviour the system must be bounded; thus reciprocative motion is considered.

## Reciprocative linear motion

The uncertainty relation: $\Delta x \Delta p \geq h \quad$

Kinetic energy: $\Delta E = \frac{\Delta p^2}{2m} \quad$

Quantumness: $\color{red}{Q_{trans} = \frac{h^2}{k_B} \frac{1}{m \Delta x^2 T}}$

## Reciprocative circular motion

Here $\alpha$ is the fraction of a full circle that is passed through in a rotative oscillation. For a normal unidirectional rotation $\alpha$ must be set to $2\pi$.
The uncertainty relation: $\Delta \alpha \Delta L \geq h \quad$

Kinetic energy: $\Delta E = \frac{\Delta L^2}{2I} \quad$

Quantumness: $\color{red}{Q_{rot} = \frac{h^2}{k_B} \frac{1}{I \Delta \alpha^2 T}}$

# Values

With the Boltzmann constant $k_B = 1.38 \cdot 10^{-23} J/K$ we get the average thermal energy per degree of freedom: $E_{T=300K} = k_B \cdot 300 K = 414 \cdot 10^{-23} J$ (note this is the full $k_B T$; a single quadratic degree of freedom carries half of this).

## rotative (full 360°)

$L_0 = \hbar = 1.054 \cdot 10^{-34} {kg m^2} / s$

$L_0 = I \omega_0 = 2 m r^2 \omega_0$

Nitrogen molecule N2: $\quad \color{blue}{2r = 0.11 nm \quad m_N = 2.3 \cdot 10^{-26} kg}$

$\omega_0 = 2 \pi f_0 = 7.5 \cdot 10^{11} s^{-1}$

$f_0 = 119 GHz$

$E_0 = I \omega_0^2 /2 = L_0 \omega_0 /2$

Size of energy quanta: $E_0 = 3.95 \cdot 10^{-23} J$

Quantumness: $\color{red}{Q_{rot} \lt 1/100}$ is rather small, thus we have pretty classical behaviour (at room temperature). Note that this is a single free-floating molecule. In advanced nano-machinery there are axles made of thousands and thousands of atoms which are in turn stiffly integrated in an axle system made out of millions of atoms. This makes energy quantisation imperceptible even at liquid helium temperatures.

...

## general

Vibrations of individual molecules can behave quite quantum mechanically even at room temperature. This is the reason why the thermal capacity of gases (needed energy per degree heated) can make crazy jumps even at relatively high temperatures. (Jumps by a factor significantly greater than one.)

# Discussion

There are three parameters that can be changed to get something to behave more quantum mechanically. The three options are:

• (1) lowering temperature
• (2) lowering inertia
• (3) decreasing the range of the degree of freedom ($\Delta x$ or $\Delta \alpha$)
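The N₂ numbers above can be reproduced in a few lines. This is only a sanity check of the article's arithmetic; following the quoted $414\cdot10^{-23}\,J$ figure, the thermal energy is taken as the full $k_B T$ at 300 K:

```python
import math

hbar = 1.054e-34      # J s
k_B  = 1.38e-23       # J/K
m_N  = 2.3e-26        # kg, one nitrogen atom
r    = 0.055e-9       # m, half of the 0.11 nm bond length

I      = 2 * m_N * r**2          # moment of inertia of N2 about its center
omega0 = hbar / I                # rad/s, from L0 = I * omega0 = hbar
f0     = omega0 / (2 * math.pi)  # ~120 GHz, close to the quoted 119 GHz
E0     = hbar * omega0 / 2       # size of the energy quanta
Q_rot  = E0 / (k_B * 300)        # "quantumness" at room temperature

print(f0 / 1e9, E0, Q_rot)       # roughly 120 GHz, 4e-23 J, Q_rot < 1/100
```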
https://hal.inria.fr/hal-02342280v2
# Don't take it lightly: Phasing optical random projections with unknown operators

2 PANAMA - Parcimonie et Nouveaux Algorithmes pour le Signal et la Modélisation Audio Inria Rennes – Bretagne Atlantique , IRISA-D5 - SIGNAUX ET IMAGES NUMÉRIQUES, ROBOTIQUE

Abstract: In this paper we tackle the problem of recovering the phase of complex linear measurements when only magnitude information is available and we control the input. We are motivated by the recent development of dedicated optics-based hardware for rapid random projections which leverages the propagation of light in random media. A signal of interest $\xi \in \mathbb{R}^N$ is mixed by a random scattering medium to compute the projection $y = A\xi$, with $A \in \mathbb{C}^{M\times N}$ being a realization of a standard complex Gaussian iid random matrix. Such optics-based matrix multiplications can be much faster and energy-efficient than their CPU or GPU counterparts, yet two difficulties must be resolved: only the intensity $|y|^2$ can be recorded by the camera, and the transmission matrix $A$ is unknown. We show that even without knowing $A$, we can recover the unknown phase of $y$ for some equivalent transmission matrix with the same distribution as $A$. Our method is based on two observations: first, conjugating or changing the phase of any row of $A$ does not change its distribution; and second, since we control the input we can interfere $\xi$ with arbitrary reference signals. We show how to leverage these observations to cast the measurement phase retrieval problem as a Euclidean distance geometry problem. We demonstrate appealing properties of the proposed algorithm in both numerical simulations and real hardware experiments. Not only does our algorithm accurately recover the missing phase, but it mitigates the effects of quantization and the sensitivity threshold, thus improving the measured magnitudes.
Document type: Conference papers. Cited literature: [28 references]. https://hal.inria.fr/hal-02342280. Contributor: Rémi Gribonval. Submitted on: Monday, February 17, 2020.

### File

OPU_NeurIPS.pdf (files produced by the author(s))

### Identifiers

• HAL Id: hal-02342280, version 2
• arXiv: 1907.01703

### Citation

Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić. Don't take it lightly: Phasing optical random projections with unknown operators. NeurIPS 2019 - Thirty-third Conference on Neural Information Processing Systems, Dec 2019, Vancouver, Canada. pp.1-13. ⟨hal-02342280v2⟩
http://mathhelpforum.com/advanced-math-topics/207590-generating-functions-integer-partitions.html
# Thread: generating functions, integer partitions

1. ## generating functions, integer partitions

Just want to make sure this is correct. I'm supposed to construct the generating function for the set of all integer partitions such that each even part is divisible by 4 and each odd part occurs an even number of times. I came up with $\prod_{m=1}^{\infty}\frac{1}{1-x^{4m}}\prod_{n=1}^{\infty}\frac{1}{1-x^{2(2n-1)}}$
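A brute-force check (my own script, not from the thread) confirms the product's coefficients against a direct count of the constrained partitions. The first product supplies parts of size $4m$ with any multiplicity; the second supplies "pairs" of the odd part $2n-1$, i.e. weight $2(2n-1)$ per use:

```python
from collections import Counter

N = 30

# Series coefficients of the proposed product, truncated at x^N.
coef = [1] + [0] * N
steps = [4 * m for m in range(1, N // 4 + 1)] + \
        [2 * (2 * n - 1) for n in range(1, N + 1) if 2 * (2 * n - 1) <= N]
for k in steps:                      # multiply by 1/(1 - x^k)
    for i in range(k, N + 1):
        coef[i] += coef[i - k]

# Direct count over raw partitions of n (non-increasing parts).
def partitions(n, max_part):
    if n == 0:
        yield []
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield [p] + rest

def ok(part):
    counts = Counter(part)
    even_ok = all(p % 4 == 0 for p in part if p % 2 == 0)
    odd_ok = all(c % 2 == 0 for p, c in counts.items() if p % 2 == 1)
    return even_ok and odd_ok

direct = [sum(1 for q in partitions(n, n) if ok(q)) for n in range(N + 1)]
print(direct == coef)   # True
```

The two sequences agree up to $x^{30}$, which supports the formula in the post.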
https://axibook.com/engineering-drawing/dimensioning-and-layout-procedure-in-engineering-drawing/2019/
Dimensioning and Layout Procedure in Engineering Drawing

Introduction

A drawing of an object is prepared to define its shape and to specify its size. The shape description is based on projection and the size description on dimensioning. Every drawing must give its complete size description stating length, width, thickness, diameter of holes, grooves, angles, etc. and such other details relating to its construction. Giving all those measurements and information describing the size of the object in the drawing is called dimensioning.

Placing of Dimensions

Dimensions should be placed on the view that shows the relevant features most clearly. The two recommended systems of placing the dimensions are:

• Aligned System. In this system, all dimensions are placed so that they may be read from the bottom or the right-hand edge of the drawing sheet. All dimensions should be placed above the dimension lines. (Refer Fig. 1)
• Unidirectional System. In this system, all dimensions are placed so that they may be read from the bottom edge of the drawing sheet. There is no restriction controlling the direction of dimension lines. This system is advantageous on large drawings, where it is inconvenient to read dimensions from the right-hand side. In this method, all dimension lines are interrupted, preferably near the middle, for the insertion of the dimension value. (Refer Fig. 2)

General Principles of Dimensioning

• As far as possible, all the dimensions for one particular operation shall be specified in one view only, such as the diameter and depth of a drilled hole, or the size and depth of a threaded hole.
• Normally dimensions should be placed outside the views (Fig. 3), but if that is not possible they may be placed within the view, as shown in Fig. 4. However, dimensions should not be placed within a view unless the drawing becomes clearer by doing so. Dimensions should not be placed too close to each other or to the parts being dimensioned.
• Dimensions are to be given from visible outlines rather than from hidden lines (Fig. 5). Dimensions are to be given from a base line, the centre line of a hole, a cylindrical part, an important hole or a finished surface which may be readily established, based on design requirements and the relationship to other parts. (Refer Fig. 6 & 7)
• Dimensions for different operations on a part, for example drilling and bending, should be given separately, as in Fig. 8, if permissible by its design.
• An axis or a contour line should never be used as a dimension line but may be used as a projection line. (Refer Fig. 9)
• The intersection of dimension lines should be avoided as far as possible; if, however, the intersection of two dimension lines is unavoidable, the lines should not be broken. Dimension lines may be broken for inserting the dimension in the case of unidirectional dimensioning. (Refer Fig. 2)
• Overall dimensions should be placed outside the intermediate dimensions. Where an overall dimension is shown, one of the intermediate dimensions is redundant and should not be dimensioned. (Refer Fig. 10)

Scales

Drawings of very big objects cannot be prepared in full size because they would be too big to accommodate on the drawing sheet. Drawings of very small objects also cannot be prepared in full size because they would be too small to draw and to read. A convenient scale is chosen to prepare the drawings of big as well as small objects in proportionately smaller or larger size. Therefore, scales are used to prepare a drawing at full size, reduced size or enlarged size.

Representative Fraction

The ratio of the size of the drawing to the size of the object is known as the representative fraction. It is denoted as RF.

$\text{Representative Fraction (RF)} = \frac{\text{Dimension on drawing}}{\text{Actual dimension}}$

The Representative Fraction (RF) when given in terms of a ratio is known as the Representative Ratio (RR).
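The RF arithmetic reduces to a single ratio; a small helper (hypothetical, using Python's `fractions` module) makes the unit handling explicit:

```python
from fractions import Fraction

def representative_fraction(drawing, actual, unit_factor=1):
    """RF = dimension on drawing / actual dimension.

    unit_factor converts the actual dimension into drawing units
    (e.g. 100 when the drawing is measured in cm and the object in m)."""
    return Fraction(drawing, actual * unit_factor)

rf = representative_fraction(1, 1, 100)   # 1 cm on paper for 1 m on the object
print(rf)                                 # 1/100
print(f"RR = {rf.numerator}:{rf.denominator}")   # RR = 1:100
```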
If the length of an object is one meter and it is represented on the drawing by a line one centimeter long, then RF = (1 cm) / (1 m) = (1 cm) / (1 × 100 cm) = 1/100. In terms of Representative Ratio, RR = 1:100.

Recommended Scales

The scales recommended for use in engineering drawing by IS: 10713-1983 are as follows:

• Full Size Scale. When the size of the drawing and the object is the same, it is known as full-size scale, i.e. 1:1.
• Reduced Scale. When the drawings are smaller in size than the actual objects, reduced scales are used. Recommended scales are 1:2, 1:5, 1:10, 1:20, 1:50, 1:100, etc.
• Enlarged Scale. When the drawing to be drawn is larger than the actual object, enlarged scales are used. The recommended scales are 50:1, 20:1, 10:1, 5:1, 2:1, etc.

Layout of Drawing Sheet

• Sheet Sizes. The preferred sizes of the drawing sheets recommended by the Bureau of Indian Standards (BIS) are given below as per SP: 46 (1988).

Table 1: Size of Drawing Sheet

| Sheet Designation | Trimmed size (mm) | Untrimmed size (mm) |
|---|---|---|
| A0 | 841 × 1189 | 880 × 1230 |
| A1 | 594 × 841 | 625 × 880 |
| A2 | 420 × 594 | 450 × 625 |
| A3 | 297 × 420 | 330 × 450 |
| A4 | 210 × 297 | 240 × 330 |
| A5 | 147 × 210 | 165 × 240 |

The layout of the drawing on a drawing sheet should be done in such a manner as to make its reading easy. Fig. 11 and Fig. 12 show an A1 size sheet layout. All dimensions are in millimeters. A full-size drawing paper is normally of 565 mm × 765 mm size.

• Margin. Margin is provided in the drawing sheet by drawing margin lines (Refer Fig. 11). Prints are trimmed along these lines. After trimming, the prints would be of the recommended sizes of the trimmed sheets.
• Border Lines. Clear working space is obtained by drawing border lines as shown in Fig. 11. More space is kept on the left-hand side for the purpose of filing or binding if necessary. When prints are to be preserved or stored in a cabinet without filing, equal space may be provided on all sides.
• Borders and Frames.
SP: 46 (1988) recommends borders of 20 mm width for the sheet sizes A0 and A1, and 10 mm for the sizes A2 to A5. The frame shows the clear space available for the drawing purpose.

• Orientation Mark. Four centering marks are drawn as shown in Fig. 12 to facilitate positioning of the drawing for reproduction purposes. The orientation mark will coincide with one of the centering marks, which can be used for the orientation of the drawing sheet on the drawing board.
• Grid Reference System (Zone system). The grid reference system is drawn on the sheet to permit easy location on the drawing of such things as details, alterations or additions. The rectangles of the grid along the length should be referred to by the numerals 1, 2, 3… and along the width by the capital letters A, B, C, D, etc., as shown in Fig. 12.
• Title Block. Space for the title block must be provided in the bottom right-hand corner of the drawing sheet, as shown in Fig 6.3 and Fig 4. The size of the title block as recommended by the BIS is 185 mm × 65 mm for all designations of the drawing sheets. Fig. 5 shows the simplest type of a title block. All title blocks should contain at least the particulars shown in Table 2.

Table 2: Title Block

| Sl No. | Information |
|---|---|
| 1 | Name of firm |
| 2 | Title of the drawing |
| 3 | Scale |
| 4 | Symbol for the method of projection |
| 5 | Drawing number |
| 6 | Initials with dates of persons who have designed, drawn, checked standards and approved |
| 7 | Sl No. of sheet and total number of sheets of the drawing of the object |

Author: Aliva Tripathy
https://study.com/academy/answer/a-particle-has-a-kinetic-energy-of-62-mev-and-a-momentum-of-335-mev-c-find-its-mass-in-mev-c-2-and-speed-as-a-fraction-of-c.html
# A particle has a kinetic energy of 62 MeV and a momentum of 335 MeV/c. Find its mass (in MeV/c^2)...

## Question:

A particle has a kinetic energy of 62 MeV and a momentum of 335 MeV/c. Find its mass (in MeV/c{eq}^2 {/eq}) and speed (as a fraction of c).

## Kinetic Energy:

The energy a body possesses due to its motion is called kinetic energy. The classical formula for its magnitude is {eq}K.E = \dfrac{1}{2}m{v^2} {/eq}; since the energies here are comparable to the rest energy, the relativistic energy–momentum relations are used below.

Given data

• The kinetic energy of the particle is {eq}K.E. = 62\;{\rm{MeV}} {/eq}
• The momentum of the particle is {eq}P = 335\;{\rm{MeV/c}} {/eq}

The relation between the kinetic energy and the momentum (with {eq}Pc {/eq} in MeV) is:

{eq}\begin{align*} Pc &= \sqrt {K.E\left( {K.E + 2{m_o}{c^2}} \right)} \\ {\left( {Pc} \right)^2} &= K.E\left( {K.E + 2{m_o}{c^2}} \right) \end{align*} {/eq}

Substitute the values in the above equation.

{eq}\begin{align*} {\left( {335} \right)^2} &= 62\left( {62 + 2{m_o}{c^2}} \right)\\ {m_o}{c^2} &= 874.040\;{\rm{MeV}} = \left( {874.040 \times 1.6 \times {{10}^{ - 13}}} \right)\;{\rm{J}}\\ {m_o} &= \dfrac{{874.040 \times 1.6 \times {{10}^{ - 13}}\;{\rm{J}}}}{{{{\left( {3 \times {{10}^8}\;{\rm{m/s}}} \right)}^2}}} = 155.3848 \times {10^{ - 29}}\;{\rm{kg}} \end{align*} {/eq}

Thus, the mass of the particle is {eq}{m_o}{c^2} = 874.04\;{\rm{MeV}} {/eq}, i.e. a mass of {eq}874.04\;{\rm{MeV/}}{{\rm{c}}^2} \approx 155.38 \times {10^{ - 29}}\;{\rm{kg}} {/eq}.

The expression for the velocity of the particle as a fraction of the speed of light (writing {eq}\beta = v/c {/eq}) is:

{eq}Pc = \dfrac{{{m_o}{c^2}\,\beta }}{{\sqrt {1 - {\beta ^2}} }} {/eq}

Substitute the values in the above equation.
{eq}\begin{align*} 335 &= \dfrac{{874.040\,\beta }}{{\sqrt {1 - {\beta ^2}} }}\quad \left( {\beta = v/c} \right)\\ {335^2}\left( {1 - {\beta ^2}} \right) &= {874.040^2}\,{\beta ^2}\\ 112225 &= \left( {112225 + 763945.92} \right){\beta ^2}\\ {\beta ^2} &= 0.1281\\ \beta &\approx 0.36 \end{align*} {/eq}

Thus, the velocity of the particle is {eq}v \approx 0.36c {/eq}, where {eq}c {/eq} is the speed of light.
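The whole computation collapses to two lines using the total energy $E = K.E + m_0c^2$ and $pc = \beta E$; here is a quick check of my own, working in MeV throughout:

```python
KE, pc = 62.0, 335.0                 # MeV (the momentum is 335 MeV/c, so pc = 335 MeV)

mc2  = (pc**2 / KE - KE) / 2         # from (pc)^2 = KE * (KE + 2 * mc^2)
E    = KE + mc2                      # total energy
beta = pc / E                        # v/c, since pc = beta * E

print(mc2, beta)                     # about 874.04 MeV and 0.358
```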
https://tex.stackexchange.com/questions/507022/how-can-i-change-the-default-math-mode-f-in-the-cochineal-package
How can I change the default math-mode 'f' in the cochineal package? I think the font provided by the cochineal package is excellent, and I want to write documents using it, so I use the following in my preamble: \usepackage[osf, p]{cochineal} \usepackage[cochineal]{newtxmath} However I dislike one particular change that was made in the math font, namely the way that italic f is changed. See the difference in the line \textit{f} versus $f$: As far as I can tell this is one of three changes to the lowercase italic alphabet in math mode: the letters v and w are also changed to have better distinction from the greek letter nu. My issue is that I strongly dislike this new letter f. I understand this may have been a stylistic choice by the designer but I much prefer the reclining, two-tailed f to this one. I can force the math f to display as the regular italic one by writing \mathit{f} rather than just f, but this is obviously not very convenient. How can I change the way that the lowercase letter f is displayed in math mode? Here is a minimal document to reproduce this: (compiled with pdfTeX) \documentclass{article} \usepackage[osf, p]{cochineal} \usepackage[cochineal]{newtxmath} \begin{document} Comparison: \textit{f} versus $f$. Default: $f(x + y) = f(2x) + f(2y) - 1$. Forced italic: $\mathit{f}(x + y) = \mathit{f}(2x) + \mathit{f}(2y) - 1$. \end{document} Declare a new math symbol font. \documentclass{article} \usepackage[osf, p]{cochineal} \usepackage[cochineal]{newtxmath} \DeclareSymbolFont{cochinealit}{\encodingdefault}{\familydefault}{m}{it} \DeclareMathSymbol{f}{\mathalpha}{cochinealit}{f} \DeclareSymbolFontAlphabet{\mathit}{cochinealit} \begin{document} Comparison: \textit{f} versus $f$. Also $f^2$. Default: $f(x + y) = f(2x) + f(2y) - 1$. Forced italic: $\mathit{f}(x + y) = \mathit{f}(2x) + \mathit{f}(2y) - 1$. 
Beware: $ff+f\/f$ Math roman: $\mathrm{f}$ \end{document} There's a small catch, shown in the last line: you need something like f\/f if two consecutive f's appear in a formula. • This is very explanatory. I wonder if there is any way to automatically supress ligatures here? – AJF Sep 4, 2019 at 23:19 • @AJFarmar Not without much labor: the font needs to be duplicated to be loaded at a slightly (but unnoticeable) different size and its internal parameter be changed. Sep 4, 2019 at 23:22 \documentclass{article} \showoutput \usepackage[osf, p]{cochineal} \usepackage[cochineal]{newtxmath} \sbox0{$\mathit{abc}$} \mathcodef=\numexpr\mathcodef+"700\relax \begin{document} Comparison: \textit{f} versus $f$. Default: $f(x + y) = f(2x) + f(2y) - 1$. Forced italic: $\mathit{f}(x + y) = \mathit{f}(2x) + \mathit{f}(2y) - 1$. Forced roman: $\mathrm{f}(x + y) = \mathrm{f}(2x) + \mathrm{f}(2y) - 1$. \end{document} • Did you forget something? Sep 4, 2019 at 22:07 • @egreg there are lots of things I didn't mention, if that is what you mean. Sep 4, 2019 at 22:17 One hacky way is to change the letter f to an active character, but only in math mode. This is described in this post on the TeX FAQ. We do this by inserting this: % Warning: there are serious issues with this; see below. \begingroup \lccode~=f \lowercase{\endgroup \def~{\text{\textit{f}}}% }% \mathcodef="8000 Put simply, this replace all occurences of the solitary letter f with \text{\textit{f}} in math mode. So, here's a full example: \documentclass{article} \usepackage[osf, p]{cochineal} \usepackage[cochineal]{newtxmath} \begingroup \lccode~=f \lowercase{\endgroup \def~{\text{\textit{f}}}% }% \mathcodef="8000 \begin{document} Comparison: \textit{f} versus $f$. Default: $f(x + y) = f(2x) + f(2y) - 1$. Forced italic: $\mathit{f}(x + y) = \mathit{f}(2x) + \mathit{f}(2y) - 1$. \end{document} This produces the desired result: There are serious issues with this. 
In particular, \mathrm{f} displays strictly in italic, so for instance \liminf has a jarring italic f at the end. For this reason, it may be preferable to define \newcommand\f{\mathit{f}} and simply write \f(x) instead of f(x).
http://math.stackexchange.com/questions/321860/show-that-z3-1iz-3-i-0-does-not-have-any-roots-in-the-unit-circle
# Show that $z^3 + (1+i)z - 3 + i = 0$ does not have any roots in the unit circle $|z|\leq 1$.

I need help with showing that $z^3 + (1+i)z - 3 + i = 0$ does not have any roots in the unit circle $|z|\leq 1$. My approach so far has been to try to develop the expression further. $$z^3 +(1+i)z-3+i = z(z^2+i+1)-3+i$$ $$z(z^2+i+1) = 3 - i \longrightarrow |z(z^2+i+1)| = |3 - i|$$ This gives me the expression: $z((z^2+1)^2+(1)^2) = \sqrt{10}$ Which can be written as: $z(z^4 +2z^2 +2) = \sqrt{10}$ But how do I move on from here? Or am I attempting the wrong solution?

- You used that $z\in\Bbb R^+$ though $z\in\Bbb C$, it would be better to write your expression as $$|z|^2\cdot|z^2+1+i|^2=10\ .$$ – Berci Mar 5 '13 at 21:50

$z^3 + (1+i)z - 3 + i = 0\iff z^3+(1+i)z=3-i$ Now, if $|z|\leq 1$, then $|z^3+(1+i)z|\leq |z|^3+|1+i||z|\leq1+\sqrt2$ As $|3-i|=\sqrt{10}\gt 1+\sqrt{2}$ Therefore, $z^3+(1+i)z\neq 3-i$ for any $z\in \Bbb C, |z|\leq 1$

- Thank you for the excellent answer Avatar! – Lukas Arvidsson Mar 6 '13 at 8:00
:):):):):):):):) – Aang Mar 6 '13 at 8:01

If you write it as $z^3+(1+i)z=3-i$ (as you did), note that for $|z| \le 1$, the first term has modulus no greater than $1$ and the second no greater than $\sqrt 2$ and the triangle inequality solves your problem.

- Thank you for your answer Ross! If you have the time, I would very much appreciate if you would please explain your modulus reasoning a bit more. Thank you! – Lukas Arvidsson Mar 5 '13 at 22:00
You can think of $z^3, (1+i)z,$ and $i-3$ as three vectors in $\mathbb R^2$ that add to $0$. So $|z^3|+|z(1+i)|\ge|i-3|$ and you know $|z|\le 1$ – Ross Millikan Mar 5 '13 at 22:16
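A brute-force numerical check (mine, not from the thread; it illustrates the bound but is not a proof): on a grid covering the closed unit disk, $|z^3+(1+i)z-3+i|$ never drops below $\sqrt{10}-(1+\sqrt2)\approx 0.748$, exactly as the triangle-inequality argument predicts:

```python
import math

# Sample z = (x + i*y)/50 over the closed unit disk and evaluate |p(z)|.
best = min(
    abs(complex(x / 50, y / 50) ** 3 + (1 + 1j) * complex(x / 50, y / 50) - 3 + 1j)
    for x in range(-50, 51)
    for y in range(-50, 51)
    if x * x + y * y <= 2500
)
bound = math.sqrt(10) - 1 - math.sqrt(2)   # reverse-triangle-inequality lower bound
print(best, bound)                          # best stays above ~0.748
```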
http://timescalewiki.org/index.php?title=Hilger_alternating_axis&oldid=1648
# Hilger alternating axis

The Hilger alternating axis is defined for $h>0$ by $$\mathbb{A}_h = \left\{z \in \mathbb{R} \colon z < -\dfrac{1}{h} \right\},$$ and for $h=0$ we let $\mathbb{A}_0=\emptyset$.
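A direct membership test for this definition (a trivial sketch of mine; `h` plays the role of the graininess parameter):

```python
def in_alternating_axis(z, h):
    """Membership in A_h = { z in R : z < -1/h } for h > 0; A_0 is empty."""
    if h == 0:
        return False
    return z < -1.0 / h

print(in_alternating_axis(-1.0, 2))    # True:  -1 < -1/2
print(in_alternating_axis(-0.25, 2))   # False: -0.25 >= -1/2
print(in_alternating_axis(-5.0, 0))    # False: A_0 is empty
```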
https://en.wikibooks.org/wiki/Arithmetic/Reading_Decimal_Numerals
The word *and* is used only for the decimal point: it separates a whole number from its fractional part. A comma is used at every third place, starting at the decimal point and moving left.[1][2] Frequently a comma or space is used at every third place moving to the right of the decimal point. No comma is used before *and*. All decimals end in *-ths*, except unitary decimals, which end in *-th*.

Alternatively, you may say the whole number, followed by "point" and the digits of the decimal part from leftmost to rightmost. This is a much more natural and informal way of saying numbers.

| Decimal numeral | Reading thereof | Alternative |
|---|---|---|
| 2,697,787.84 | Two million, six hundred ninety-seven thousand, seven hundred eighty-seven and eighty-four hundredths | Two million, six hundred ninety-seven thousand, seven hundred eighty-seven point eight four |
| 2,009 | Two thousand, nine | |
| 1,987 | One thousand, nine hundred eighty-seven | |
| 0.684 | Six hundred eighty-four thousandths | Zero point six eight four |
| 17.04 | Seventeen and four hundredths | Seventeen point zero four |
| 0.1 | One tenth | Zero point one |
| 4.3 | Four and three tenths | Four point three |
| 0.0001 | One ten-thousandth | Zero point zero zero zero one |
| 5.000008 | Five and eight millionths | Five point zero zero zero zero zero eight |
| 0.00073 | Seventy-three hundred-thousandths | Zero point zero zero zero seven three |

## References and notes

1. *Business Mathematics*, 10th Edition, by Charles D. Miller, Stanley A. Salzman, and Gary Clendenen. Pearson Education, Inc., 2006. ISBN 0-321-27782-1
2. Any thorough English arithmetic text discusses the reading of decimal numerals.
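The informal "point" reading in the Alternative column is mechanical enough to sketch in code. This sketch is illustrative (not from the source) and only handles whole parts below one hundred:

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def small_to_words(n):
    # whole parts below 100 only -- enough for the short examples above
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + ("-" + ONES[ones] if ones else "")

def point_reading(numeral):
    # "17.04" -> "seventeen point zero four"
    whole, _, frac = numeral.partition(".")
    words = small_to_words(int(whole))
    if frac:
        words += " point " + " ".join(ONES[int(d)] for d in frac)
    return words

assert point_reading("17.04") == "seventeen point zero four"
assert point_reading("4.3") == "four point three"
```

The formal "and ... hundredths" reading would additionally require naming the place value of the last fractional digit, which is left out of this sketch.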
https://stats.stackexchange.com/questions/71176/intuition-behind-logistic-regression
# Intuition behind logistic regression

Recently I began studying machine learning; however, I failed to grasp the intuition behind logistic regression. The following are the facts about logistic regression that I understand.

1. As the basis for the hypothesis we use the sigmoid function. I do understand why it's a correct choice, but not why it's the only choice. The hypothesis represents the probability that the appropriate output is $1$, so the range of our function should be $[0,1]$. This is the only property of the sigmoid function I found useful and appropriate here, but many functions satisfy it. In addition, the sigmoid function has a derivative of the form $f(x)(1-f(x))$, but I don't see the utility of this special form in logistic regression. Question: what is so special about the sigmoid function, and why can't we use any other function with range $[0,1]$?

2. The cost function has two branches:
$${\rm Cost}(h_{\theta}(x),y)=\begin{cases}-\log(h_{\theta}(x)) & \text{if } y=1,\\ -\log(1-h_{\theta}(x)) & \text{if } y=0.\end{cases}$$
In the same way as above, I do understand why it's correct, but why is it the only form? For example, why couldn't $|h_{\theta}(x)-y|$ be a good choice for the cost function? Question: what is so special about the above form of the cost function; why can't we use another form?

I would appreciate it if you could share your understanding of logistic regression.

---

The logistic regression model is maximum likelihood using the natural parameter (the log-odds ratio) to contrast the relative changes in the risk of the outcome per unit difference in the predictor. This is assuming, of course, a binomial probability model for the outcome. That means that the consistency and robustness properties of logistic regression extend directly from maximum likelihood: robustness to data missing at random, root-n consistency, and existence and uniqueness of solutions to the estimating equations.
This is assuming the solutions are not on the boundaries of parameter space (where log odds ratios are $\pm \infty$). Because logistic regression is maximum likelihood, the loss function is related to the likelihood, since they're equivalent optimization problems. With quasilikelihood or estimating equations (semiparametric inference), the existence and uniqueness properties still hold, but the assumption that the mean model holds is not relevant, and the inference and standard errors are consistent regardless of model misspecification. So in this case, it's not a matter of whether the sigmoid is the correct function, but one that gives us a trend that we can believe in and is parameterized by parameters that have an extensible interpretation. The sigmoid, however, is not the only such binary modeling function around. The most commonly contrasted probit function has similar properties. It doesn't estimate log-odds ratios, but functionally they look very similar and tend to give very similar approximations to the exact same thing. One need not use boundedness properties in the mean model function either. Simply using a log curve with a binomial variance function gives relative risk regression, and an identity link with binomial variance gives additive risk models. All this is determined by the user. The popularity of logistic regression is, sadly, why it's so commonly used. However, I have my reasons (the ones that I stated) why I think its use is well justified in most binary outcome modeling circumstances. In the inference world, for rare outcomes, the odds ratio can be roughly interpreted as a "relative risk", i.e. a "percent relative change in the risk of outcome comparing X+1 to X". This isn't always the case and, in general, an odds ratio cannot and should not be interpreted as such.
However, that parameters have an interpretation and can be easily communicated to other researchers is an important point, something sadly missing from the machine learnists' didactic materials. The logistic regression model also provides the conceptual foundations for more sophisticated approaches such as hierarchical modeling, as well as mixed modeling and conditional likelihood approaches, which are consistent and robust to exponentially growing numbers of nuisance parameters. GLMMs and conditional logistic regression are very important concepts in high dimensional statistics.

• Thank you very much for your answer! It seems like I have a huge lack of background. – user16168 Sep 29 '13 at 19:52
• I think McCullagh and Nelder's book Generalized Linear Models would be a great background resource for a more statistics perspective. – AdamO Sep 29 '13 at 22:16
• In general, what textbook do you advise in machine learning with very detailed descriptive content? – user16168 Sep 30 '13 at 14:23
• Elements of Statistical Learning by Hastie, Tibshirani, Friedman. – AdamO Sep 30 '13 at 16:47
• @user48956 Statistical Analysis with Missing Data, Little & Rubin 2nd ed. Missing data is not "represented" per se, but "handled" by omission. This is not particular to logistic regression: it is the naive approach used by all statistical models. When data are formatted in a rectangular array, rows with missing values are omitted. This is known as a complete case analysis. GLMs and GLMMs are robust to missing data in the sense that complete case analyses are usually unbiased and not very inefficient. – AdamO Jul 6 '16 at 20:39
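Since logistic regression is maximum likelihood, the two-branch cost in the question is just the negative Bernoulli log-likelihood of one observation. A short sketch (illustrative, not part of the original thread):

```python
import math

def cost(h, y):
    # negative Bernoulli log-likelihood of one observation; the combined
    # form -(y*log(h) + (1-y)*log(1-h)) reduces to the two branches
    # in the question for y = 1 and y = 0
    return -(y * math.log(h) + (1 - y) * math.log(1 - h))

assert abs(cost(0.9, 1) - (-math.log(0.9))) < 1e-12   # y = 1 branch
assert abs(cost(0.9, 0) - (-math.log(0.1))) < 1e-12   # y = 0 branch
```

Minimizing the sum of these terms over the data is exactly maximizing the binomial likelihood, which is why this loss, and not $|h_\theta(x)-y|$, inherits the maximum-likelihood properties described above.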
The dependent variable $Y$ can only take on the values 0 and 1, so you can't model the dependence of $Y$ on $X$ with a typical linear regression equation like $Y_i=X_i\beta+\epsilon_i$. But we really, really like linear equations. Or, at least, I do. To model this situation, we introduce an unobservable, latent variable $Y^*$, and we say that $Y$ goes from equaling 0 to equaling 1 when $Y^*$ crosses a threshold: \begin{align} Y^*_i &= X_i \beta + \epsilon_i\\ &\\ Y_i &= 0 \;\textrm{if}\; Y_i^*<0\\ Y_i &= 1 \; \textrm{if} \; Y_i^*>0 \end{align} As I have written it, the threshold is at 0. This is an illusion, however. Generally, the model includes an intercept (i.e. one of the columns of $X$ is a column of 1s). This allows the threshold to be anything. To motivate this model, think of killing bugs with a nerve-toxin pesticide. $Y^*$ is how many nerve cells are killed, and $X$ includes the dose of pesticide delivered to some bug. $Y$ is then 1 if the insect dies and 0 if it lives. That is, if enough nerve cells are killed (and $Y^*$ crosses the threshold), then the bug dies. This is not actually how neurotoxic pesticide work, by the way, but it's fun to pretend. So, you get a linear regression equation you can't see and a binary outcome you can see. The parameters, $\beta$ are usually estimated via maximum likelihood. If $\epsilon$ is distributed with symmetric distribution function $F$, then $P\{Y_i=1\}=F(X_i\beta)$. Just as you say, you can use any symmetric distribution function you want. Actually, you can use an asymmetric distribution function if you like, it just makes the algebra a tiny bit harder, as $P\{Y_i=1\}=1-F(-X_i\beta)$. Now, the distribution function you pick for $\epsilon$ affects your estimation results. The two most common choices for $F$ are normal (yielding the probit model) and logistic (yielding the logit model). These two distributions are so similar that there are rarely important differences in the results between them. 
Since logit has a very convenient closed form for both cdf and density functions, it's usually easier to use it rather than probit. Again, just as you say, you could pick any distribution function for $F$, and which one you pick will affect your results.

• What you described is exactly the motivation for the probit model, not logistic regression. – AdamO Sep 26 '13 at 22:39
• @AdamO, if the $\epsilon_i$ have a logistic distribution, then this describes logistic regression. – Macro Sep 26 '13 at 22:50
• That seems like a very sensitive assumption and one that would be difficult to test. I think logistic regression can be motivated when such error distributions don't hold. – AdamO Sep 27 '13 at 16:52
• @AdamO, however you motivate logistic regression, it's still mathematically equivalent to a thresholded linear regression model where the errors have a logistic distribution. I agree that this assumption may be hard to test but it's there regardless of how you motivate the problem. I recall a previous answer on CV (I can't place it right now) that showed with a simulation study that trying to tell whether a logistic or probit model "fit better" was basically a coin flip, regardless of the true data generating model. I suspect logistic is more popular because of the convenient interpretation. – Macro Sep 27 '13 at 17:22
• @AdamO This is a manifestation of the usual economist/statistician divide, but . . . I don't think logistic regression is semi-parametric. The statistical model is $P(Y_i=1)=\frac{\exp(X_i\beta)}{1+\exp(X_i\beta)}$. That's parametric. One can (and I do) interpret it as coming from a threshold model with logistic error. If I get worried about making too many assumptions on the error term, I am going to drop logistic regression, not the threshold model. Threshold models can be estimated with much weaker assumptions on the error terms using maximum score and related estimators, for example. – Bill Sep 27 '13 at 19:42
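The latent-variable view in the last answer is easy to verify numerically: with standard logistic noise $\epsilon$, $P\{Y_i=1\}=F(X_i\beta)$ is exactly the sigmoid. A simulation sketch (illustrative; the values of $\beta$ and $x$ are made up):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
beta, x, n = 1.5, 0.8, 200_000

# latent variable: y* = x*beta + eps with eps ~ standard logistic,
# and y = 1 exactly when y* crosses the threshold 0
eps = rng.logistic(size=n)
y = (x * beta + eps > 0)

# empirical P(Y=1) should match sigmoid(x*beta) = P(eps > -x*beta)
assert abs(y.mean() - sigmoid(x * beta)) < 0.01
```

Swapping `rng.logistic` for `rng.standard_normal` and the sigmoid for the normal cdf would give the probit analogue, which, per the answer, produces nearly indistinguishable fitted probabilities.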
https://proofwiki.org/wiki/Group_is_Solvable_iff_Normal_Subgroup_and_Quotient_are_Solvable
# Group is Solvable iff Normal Subgroup and Quotient are Solvable

## Theorem

Let $G$ be a finite group. Let $H$ be a normal subgroup of $G$. Then $G$ is solvable if and only if:

$(1): \quad H$ is solvable

and:

$(2): \quad G / H$ is solvable

where $G / H$ is the quotient group of $G$ by $H$.

## Proof

As $H \lhd G$ we can construct the normal series:

$(A): \quad \set e \lhd H \lhd G$

By Finite Group has Composition Series, $(A)$ can be refined to a composition series for $G$:

$(B): \quad \set e = G_0 \lhd G_1 \lhd \cdots \lhd G_n = G$

Since $(B)$ is a refinement of $(A)$, we have $G_k = H$ for some $k$. Then we can construct the composition series:

$(C): \quad \set e = G_0 \lhd G_1 \lhd \cdots \lhd G_k = H$

and:

$(D): \quad \set e = G_k / H \lhd G_{k + 1} / H \lhd \cdots \lhd G_n / H = G / H$

Furthermore, by the Third Isomorphism Theorem:

$\dfrac {G_{i + 1} / H} {G_i / H} \cong \dfrac {G_{i + 1} } {G_i}$

for all $k \le i \le n - 1$.

So the composition factors of $(B)$ are precisely the composition factors of $(C)$ together with those of $(D)$. Since $G$ is finite, $G$ is solvable if and only if the factors of its composition series are all of prime order, and likewise for $H$ and $G / H$. Hence $G$ is solvable if and only if both $H$ and $G / H$ are solvable.

$\blacksquare$
http://nm.cmm.uchile.cl/seminarios/seminarios-anteriores/page/5/
# Seminars

## Matthieu Jonckheere (UBA)

### Title: Front propagation and quasi-stationary distributions for one-dimensional Lévy processes

### Abstract:

We jointly investigate the existence of quasi-stationary distributions for one-dimensional Lévy processes and the existence of traveling waves for the Fisher-Kolmogorov-Petrovskii-Piskunov (F-KPP) equation associated with the same motion. Using probabilistic ideas developed by S. Harris for the F-KPP equation, we show that the existence of a traveling wave for the F-KPP equation associated with a centered Lévy process that branches at rate r and travels at velocity c is equivalent to the existence of a quasi-stationary distribution for a Lévy process with the same movement but drifted by -c and killed at zero, with mean absorption time 1/r. This allows us to generalize the known existence conditions in both contexts. Joint work with Pablo Groisman.

sep / 2016 05

## Fabio Lopes (U. Chile)

### Title: Extinction time for the weaker of two competing SIS epidemics

### Abstract:

We consider a simple stochastic model for the spread of a disease caused by two virus strains in a closed homogeneously mixing population of size N. In our model, the spread of each strain is described by the stochastic logistic SIS epidemic process in the absence of the other strain, and we assume that there is perfect cross-immunity between the two virus strains, that is, individuals infected by one strain are temporarily immune to re-infections and to infections by the other strain. For the case where one strain has a strictly larger basic reproductive ratio than the other, and the stronger strain on its own is supercritical (that is, its basic reproductive ratio is larger than 1), we derive precise asymptotic results for the distribution of the time when the weaker strain disappears from the population, that is, its extinction time.
We further extend our results to the case where the difference between the two reproductive ratios may tend to 0. In our proof, we set out a simple approach for establishing a fluid limit approximation for a sequence of Markov chains in the vicinity of a stable fixed point of the limit drift equations, valid for a time exponential in the system size. This is joint work with Malwina Luczak.

ago / 2016 22

## Christophe Profeta (Université d’Evry Val d’Essonne)

### Title: Limiting laws for some integrated processes

### Abstract:

The study of limiting laws, or penalizations, of a given process may be seen (in some sense) as a way to condition a probability law by an a.s. infinite random variable. The systematic study of such problems started in 2006 with a series of papers by Roynette, Vallois and Yor, who looked at Brownian motion perturbed by several examples of functionals. These works were then generalized to many families of processes: random walks, Lévy processes, linear diffusions… We shall present here some examples of penalization of a non-Markov process, i.e. the integrated Brownian motion, by its first passage time, nth passage time, and last passage time up to a finite horizon. We shall show that the penalization principle holds in all these cases, but that the conditioned process does not always behave as expected. Recent results around persistence of integrated symmetric stable processes will also be discussed.

ago / 2016 11

## Johel Beltrán (PUCP)

### Title: Martingale problem and trace processes applied to metastability

### Abstract:

The Martingale problem is a concept introduced by Stroock and Varadhan which can be understood as a sort of ordinary differential equation in which the vector field is replaced by a field of second order differential operators. A Markov process can be characterized as the unique solution of a Martingale problem.
This fact turns the martingale problem into a very useful tool to prove convergence of stochastic processes derived from Markov processes. In this talk we shall use the martingale problem to prove the convergence of processes arising in the study of metastable systems. We shall explain how this tool is used in combination with other ones, like trace processes and potential theory. Finally, we shall show some examples of systems in which this approach has been applied. This is joint work with C. Landim.

ago / 2016 01

## Jorge Littin (UCN)

### Title: Phase transitions in the long-range Ising model in the presence of a random external field

### Abstract:

We study the ferromagnetic one-dimensional Random Field Ising Model (RFIM) in the presence of an external random field. The interaction between two spins decays as $d^{\alpha-2}$, where $d$ is the distance between the two sites and $\alpha \in [0,1/2)$ is a parameter of the model. We consider an external random field on $\mathbb{Z}$ given by independent but not identically distributed random variables. Specifically, for each $i \in \mathbb{Z}$, the distribution of $h_i$ is $P[h_i=\pm \theta(1+|i|)^{-\nu/2}]$. This work, whose main goal is the study of the existence of a phase transition at a strictly positive temperature for different values of $\nu$, is inspired by the very recent article [2], where the 2D Ising model with a spatially dependent but not random external field is studied. In the random case, we combine some of the martingale difference techniques used in the previous articles of Cassandro, Picco and Orlandi [3], and the Aizenman & Wehr method [3]. Some of the classical results, the key parts of this work and some of the technical difficulties will be discussed in this talk. Joint work with Pierre Picco

References:

[1] M. Aizenman and C. M. Newman. Discontinuity of the percolation density in one-dimensional $1/|x-y|^2$ percolation models. Communications in Mathematical Physics, 107(4):611–647, 1986.

[2] Rodrigo Bissacot, Marzio Cassandro, Leandro Cioletti, and Errico Presutti.
Phase transitions in ferromagnetic Ising models with spatially dependent magnetic fields. Communications in Mathematical Physics, 337(1):41–53, 2015.

[3] Marzio Cassandro, Enza Orlandi, and Pierre Picco. Phase transition in the 1d random field Ising model with long range interaction. Communications in Mathematical Physics, 288(2):731–744, 2009.

jul / 2016 25
https://www.mersenneforum.org/showthread.php?s=75fa30ed0cd2d4bdbd645b325a7ea097&p=480485
mersenneforum.org > mtsieve

2018-02-17, 18:30 #12
pepi37:

And I will publicly say: sorry for "bug"; they are not bugs, more like my bad reading of the tutorial! It is a really fast app!

Last fiddled with by pepi37 on 2018-02-17 at 18:31

2018-02-17, 19:04 #13
rogue ("Mark"):

Quote: Originally Posted by pepi37
And I will publicly say: sorry for "bug"; they are not bugs, more like my bad reading of the tutorial! It is a really fast app!

Thanks. There is a bug with fbncsieve and the -1 form. I know the cause. I'll let you know when an updated version has been posted.

2018-02-20, 16:29 #14
BotXXX:

Mark, thank you for the continued efforts and time on your sieving programs! This is very much appreciated! On my Windows 2012 R2 system with an Intel Xeon E5-2620, cksieve crashes directly after starting, which I think still matches your comment from the start post that there were issues with cksieve.
By running (as an example):

Code:
cksieve -b 2 -p 2 -P 1000000 -n 100 -N 10000 -o ck_remain.out -O ck_factors.out
cksyeve v1.2, a program to find factors of (b^n+/-1)^2-2 numbers
Sieve started: 2 < p < 1000000 with 19802 terms

the Windows event log shows:

Code:
Faulting application name: cksieve.exe, version: 0.0.0.0, time stamp: 0x5a7f7e2c
Faulting module name: ntdll.dll, version: 6.3.9600.18821, time stamp: 0x59ba86db
Exception code: 0xc0000374
Fault offset: 0x00000000000f1c10
Faulting process id: 0x190c
Faulting application start time: 0x01d3aa65f8e8626d
Faulting application path: C:\mtsieve\cksieve.exe
Faulting module path: C:\WINDOWS\SYSTEM32\ntdll.dll
Faulting package full name:
Faulting package-relative application ID:

Also, besides the o/O cosmetic points in the help section, with cksieve.exe -h there is a small typo:

Code:
cksyeve v1.2, a program to find factors of (b^n+/-1)^2-2 numbers
-h --help prints this help)

2018-02-20, 17:45 #15
rogue ("Mark"):

Thanks for your feedback. I'll fix that cosmetic issue. I have not had the time to fix cksieve, but the gfndsieve performance issue has been resolved. I was trying to add a performance enhancement to fbncsieve, but it crashes on Windows and I haven't figured out why yet. That same change works in OS X, so it is either a compiler bug in mingw64 or something in the asm code. I'm hoping to post updated code this weekend.

Last fiddled with by rogue on 2018-02-20 at 17:46

2018-02-22, 00:01 #16
rogue ("Mark"):

Good news. I found and fixed the bug with cksieve (stupid x86 asm). Here is a complete list of changes:

Code:
Add an internal flag that guarantee that suspends all but one Worker when processing the first chunk of primes. This is used to improve performance when there is a high factor density for low primes. This will also suppress any on screen reporting or checkpointing until that chunk is processed.
Fix issue in computing CPU utilization.
Changed -c (chunksize) option to -w (worksize). Change output to use shorter notation for min and max primes. cksieve - Fixed. gfndsieve - Enable the flag mentioned above. fbncsieve - Enable the flag mentioned above. fkbnsieve - Added, but not tested. Visit my page to get the link to d/l the latest source and Windows builds. Last fiddled with by rogue on 2018-02-22 at 00:02 2018-02-22, 00:06 #17 pepi37     Dec 2011 After milion nines:) 22×337 Posts fbncsieve -p50000000000000 -P 100000000000000 -i 500.npg -fN -W4 -O fact.txt fbncsieve v1.3.1, a program to find factors of k*b^n+c numbers for fixed b, n, and c and variable k Sieve started: 5e13 < p < 1e14 with 159895 terms p=50055611474419, 28.92M p/sec, 4 factors found at 15298 sec per factor, 0.1% done. ETA 2018-02-22 16:51 Since program run only 147 seconds I assume there will be 15.298 sec per factor not 15298 per factor , but this is cosmetic bug. +/- side works perfectly THANKS! Last fiddled with by pepi37 on 2018-02-22 at 00:42 2018-02-22, 02:24   #18 wombatman I moo ablest echo power! May 2013 110110000112 Posts Quote: Originally Posted by rogue Good news. I found and fixed the bug with cksieve (stupid x86 asm). Here is a complete list of changes: Code: Add an internal flag that guarantee that suspends all but one Worker when processing the first chunk of primes. This is used to improve performance when there is a high factor density for low primes. This will also suppress any on screen reporting or checkpointing until that chunk is processed. Fix issue in computing CPU utilization. Changed -c (chunksize) option to -w (worksize). Change output to use shorter notation for min and max primes. cksieve - Fixed. gfndsieve - Enable the flag mentioned above. fbncsieve - Enable the flag mentioned above. fkbnsieve - Added, but not tested. Visit my page to get the link to d/l the latest source and Windows builds. To be clear, does this mean the multithreaded version of gfndsieve is considered fully functional? 
Side question, can gfndsieve take a sieved file as input or no? I look at the options and the readme and didn't see any such option, but I wanted to be sure. Thanks again for doing this. 2018-02-22, 09:33 #19 pepi37     Dec 2011 After milion nines:) 22·337 Posts I still have problem if header of newpgen file is like this 29491439734612:M:0:2:16386 Then got message it is invalid header. Even on base 2 this program is faster then Newpgen. I can say it has constant rate for every base ( not just for base2 like Newpgen have) Little test I made sample file for base 500 Start point was 10000000000000 Newpgen done until 10016239604068 in 337 seconds and found 7 factors In same time ( number of worker 1) FBCNsieve found 50 factors and reach 10100000000000. In any way his program will give boost for searching type of variable K. Can you improve sr1sieve in similar way ( add MT option)? 2018-02-22, 14:32   #20 rogue "Mark" Apr 2003 Between here and the 5,953 Posts Quote: Originally Posted by pepi37 fbncsieve -p50000000000000 -P 100000000000000 -i 500.npg -fN -W4 -O fact.txt fbncsieve v1.3.1, a program to find factors of k*b^n+c numbers for fixed b, n, and c and variable k Sieve started: 5e13 < p < 1e14 with 159895 terms p=50055611474419, 28.92M p/sec, 4 factors found at 15298 sec per factor, 0.1% done. ETA 2018-02-22 16:51 Since program run only 147 seconds I assume there will be 15.298 sec per factor not 15298 per factor , but this is cosmetic bug. +/- side works perfectly I will look into this. Note that if you start from an input file that it will get the starting prime from that file. Using -p will override what it reads from that file. 2018-02-22, 14:43   #21 rogue "Mark" Apr 2003 Between here and the 5,953 Posts Quote: Originally Posted by wombatman To be clear, does this mean the multithreaded version of gfndsieve is considered fully functional? Side question, can gfndsieve take a sieved file as input or no? 
I look at the options and the readme and didn't see any such option, but I wanted to be sure. Thanks again for doing this.

gfndsieve is fully functional. The only files that gfndsieve can take as input are files that were created by gfndsieve. Use the -i option to specify the input file instead of using the -k/-K/-n/-N options. With some manipulation it could read files created with the -abcd1 switch of fermfact.

2018-02-22, 14:45 #22
rogue ("Mark"):

Quote: Originally Posted by pepi37
I still have problem if header of newpgen file is like this 29491439734612:M:0:2:16386 Then got message it is invalid header. Even on base 2 this program is faster then Newpgen. I can say it has constant rate for every base ( not just for base2 like Newpgen have) Little test I made sample file for base 500 Start point was 10000000000000 Newpgen done until 10016239604068 in 337 seconds and found 7 factors In same time ( number of worker 1) FBCNsieve found 50 factors and reach 10100000000000. In any way his program will give boost for searching type of variable K. Can you improve sr1sieve in similar way ( add MT option)?

How was the newpgen file created? Was it created by newpgen or fbncsieve? Is there a reason that you chose that format over the ABC or ABCD format? Getting sr1sieve into mtsieve is one of my goals, but it is behind the GPU options.
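As a rough illustration of what a fixed-b/n/c sieve such as fbncsieve has to do internally (this is an assumption about the general approach, not mtsieve's actual code): a prime $p$ divides $k\cdot b^n+c$ exactly when $k \equiv -c\,(b^n)^{-1} \pmod p$, so each sieving prime removes one residue class of $k$.

```python
def sieve_k(b, n, c, k_min, k_max, primes):
    # keep k in [k_min, k_max] such that no p in primes divides k*b^n + c
    # (except when k*b^n + c is the prime p itself)
    survivors = set(range(k_min, k_max + 1))
    for p in primes:
        bn = pow(b, n, p)
        if bn == 0:
            continue
        # p | k*b^n + c  <=>  k = -c * (b^n)^(-1) (mod p); needs Python 3.8+
        k0 = (-c * pow(bn, -1, p)) % p
        start = k_min + ((k0 - k_min) % p)
        for k in range(start, k_max + 1, p):
            if k * b**n + c != p:
                survivors.discard(k)
    return sorted(survivors)

# k*2^5 + 1 for k = 1..6: 33, 65, 97, 129, 161, 193
assert sieve_k(2, 5, 1, 1, 6, [3, 5, 7, 11]) == [3, 6]
```

A real sieve works over huge prime ranges with Montgomery arithmetic and multiple worker threads, but the residue-class computation above is the core idea.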
https://zbmath.org/authors/?q=ai%3Abalachandar.s-raja
# zbMATH — the first resource for mathematics

## Balachandar, S. Raja

Author ID: balachandar.s-raja
Published as: Balachandar, S.; Balachandar, S. Raja; Raja Balachandar, S.
Documents Indexed: 103 Publications since 1986, including 2 Books

#### Co-Authors

0 single-authored 9 Kannan, Krithivasan 9 Venkatesh, S. G. 8 Ayyaswamy, Singaraj Kulandaiswamy 6 Adrian, Ronald J. 6 Bagchi, Prosenjit 6 Bonometti, Thomas 6 Haselbacher, Andreas 6 Najjar, Fady M. 5 Cantero, Mariano I. 5 Krishnaveni, K. 4 Akiki, G. 4 Balachandran, Selvaraj 4 Ferry, Jim 4 García, Marcelo H. 4 Ha, Man Yeong 4 Ling, Yin 4 Parmar, Max 4 Prosperetti, Andrea 4 Zeng, Lanying 4 Zgheib, N. 2 Annamalai, Subramanian 2 Aref, Hassan 2 Chakraborty, Pinaki 2 Deng, Hanyuan 2 Fischer, Paul F. 2 Hsu, Tian-Jian 2 Jackson, Thomas L. 2 Lee, Hyungoo 2 Lee, Jae Ryong 2 Lee, Sangsan 2 Madabhushi, Ravi K. 2 Mittal, Rajat 2 Ooi, Andrew S. H. 2 Ozdemir, Celalettin E. 2 Parker, S. J. 2 Rani, Sarma L. 2 Shringarpure, Mrugesh 2 Tafti, Danesh K. 2 Ungarish, Marius 2 Vanka, Surya Pratap 2 Wakaba, L. 2 Yoon, Hyun Sik 2 Zhang, Le-Wen 2 Zhou, Jigen 1 Ayyaswamy, K. 1 Balasubramanian, Koushik 1 Balasubramanian, Krishnakumar 1 Balasubramanian, Krishnaswami 1 Buckmaster, John David 1 Chao, Jie 1 Cortese, T. 1 Diggs, Angela 1 Eaton, John K. 1 Ferry, James P. 1 Fischer, Paul E. 1 Giacobello, Matteo 1 Girimaji, Sharath S. 1 Jackson, Eric 1 Kendall, T. M. 1 Kim, Jungwoo 1 Kim, Kyoungyoun 1 Kim, Son Doan 1 Lakhote, Mandar 1 Lee, DongHyuk 1 Li, Changfeng 1 Liu, Kai 1 Magnaudet, Jacques 1 Malik, Mujeeb R. 1 Marjanovic, Goran 1 Maxey, Martin R. 1 Mehta, Y. 1 Mittel, R. 1 Moore, W. C. 1 Neal, Cora L. 1 Orszag, Steven Alan 1 Parmar, Manu 1 Pham, Minh Vuong 1 Plourde, Frédéric 1 Robichaux, Jennifer L. 1 Salari, Kambiz 1 She, Zhensu 1 Short, Mark W. 1 Shotorban, Babak 1 Sirovich, Lawrence 1 Srikanth, Raghavendran 1 Sureshkumar, Radhakrishna 1 Taub, G. N.
1 Thakur, Samajh Singh 1 Venkatakrishnan, Yanamandram Balasubramanian 1 Venkatesan, Dhanagopalan 1 Yakhot, Victor 1 Yuen, David A. 1 Zwick, Daniel S.

#### Serials

31 Journal of Fluid Mechanics 14 Physics of Fluids 11 Journal of Computational Physics 5 International Journal of Multiphase Flow 5 Theoretical and Computational Fluid Dynamics 4 International Journal of Heat and Mass Transfer 3 Computers and Fluids 3 Applied Mathematical Sciences (Ruse) 3 International Journal of Mathematical Sciences & Applications 2 Shock Waves 2 Applied Mathematics and Computation 2 Journal of Computational Methods in Sciences and Engineering 2 Proceedings of the Jangjeon Mathematical Society 1 Computers & Mathematics with Applications 1 Journal of Applied Mechanics 1 Physics of Fluids 1 Physics of Fluids, A 1 Journal of Scientific Computing 1 Computational and Applied Mathematics 1 Filomat 1 Journal of Mathematical Chemistry 1 Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 European Journal of Mechanics. B. Fluids 1 Proceedings of the National Academy of Sciences, India. Section A. Physical Sciences 1 Journal of Applied Mathematics 1 Fluid Mechanics and its Applications 1 Asian-European Journal of Mathematics

#### Fields

83 Fluid mechanics (76-XX) 16 Numerical analysis (65-XX) 7 Operations research, mathematical programming (90-XX) 6 Classical thermodynamics, heat transfer (80-XX) 4 Geophysics (86-XX) 3 Combinatorics (05-XX) 3 Partial differential equations (35-XX) 3 Computer science (68-XX) 2 General and overarching topics; collections (00-XX) 2 Ordinary differential equations (34-XX) 2 Integral equations (45-XX) 2 Biology and other natural sciences (92-XX) 1 Number theory (11-XX) 1 Real functions (26-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Statistics (62-XX) 1 Mechanics of particles and systems (70-XX) 1 Mechanics of deformable solids (74-XX) 1 Quantum theory (81-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Information and communication theory, circuits (94-XX)

#### Citations contained in zbMATH

84 Publications have been cited 1,227 times in 892 Documents.

Mechanisms for generating coherent packets of hairpin vortices in channel flow. Zbl 0946.76030 Zhou, J.; Adrian, R. J.; Balachandar, S.; Kendall, T. M. 1999 Turbulent dispersed multiphase flow. Zbl 1345.76106 Balachandar, S.; Eaton, John K. 2010 On the relationships between local vortex identification schemes. Zbl 1071.76015 Chakraborty, Pinaki; Balachandar, S.; Adrian, Ronald J. 2005 Three-dimensional Floquet instability of the wake of square cylinder. Zbl 1147.76482 Robichaux, J.; Balachandar, S.; Vanka, S. P. 1999 Chaotic advection in a Stokes flow. Zbl 0608.76028 Aref, H.; Balachandar, S. 1986 Effect of three-dimensionality on the lift and drag of nominally two-dimensional cylinders. Zbl 1032.76530 Mittel, R.; Balachandar, S. 1995 Effect of free rotation on the motion of a solid sphere in linear shear flow at moderate Re. Zbl 1185.76040 Bagchi, P.; Balachandar, S.
2002 Direct numerical simulation of flow past elliptic cylinders. Zbl 0849.76064 Mittal, R.; Balachandar, S. 1996 A fast Eulerian method for disperse two-phase flow. Zbl 1137.76577 Ferry, Jim; Balachandar, S. 2001 Methods for evaluating fluid velocities in spectral simulations of turbulence. Zbl 0672.76057 Balachandar, S.; Maxey, M. R. 1989 On the front velocity of gravity currents. Zbl 1178.76135 Cantero, Mariano I.; Lee, J. R.; Balachandar, S.; Garcia, Marcelo H. 2007 Effect of turbulence on the drag and lift of a particle. Zbl 1186.76040 Bagchi, P.; Balachandar, S. 2003 Response of the wake of an isolated particle to an isotropic turbulent flow. Zbl 1131.76324 Bagchi, Prosenjit; Balachandar, S. 2004 Wall-induced forces on a rigid sphere at finite Reynolds number. Zbl 1102.76017 Zeng, Lanying; Balachandar, S.; Fischer, Paul 2005 Effects of polymer stresses on eddy structures in drag-reduced turbulent channel flow. Zbl 1175.76069 Kim, Kyoungyoun; Li, Chang-F.; Sureshkumar, R.; Balachandar, S.; Adrian, Ronald J. 2007 Properties of the mean recirculation region in the wakes of two-dimensional bluff bodies. Zbl 0899.76131 Balachandar, S.; Mittal, R.; Najjar, F. M. 1997 The Legendre wavelet method for solving initial value problems of Bratu-type. Zbl 1247.65180 Venkatesh, S. G.; Ayyaswamy, S. K.; Balachandar, S. Raja 2012 Effect of Schmidt number on the structure and propagation of density currents. Zbl 1178.76115 Bonometti, Thomas; Balachandar, S. 2008 Autogeneration of near-wall vortical structures in channel flow. Zbl 1027.76589 Zhou, Jigen; Adrian, Ronald J.; Balachandar, S. 1996 Wake structure of a transversely rotating sphere at moderate Reynolds numbers. Zbl 1171.76352 Giacobello, M.; Ooi, A.; Balachandar, S. 2009 High-resolution simulations of cylindrical density currents. Zbl 1141.76378 Cantero, Mariano I.; Balachandar, S.; Garcia, Marcelo H. 2007 Steady planar straining flow past a rigid sphere at moderate Reynolds number. 
Zbl 1062.76015 Bagchi, P.; Balachandar, S. 2002 Shear versus vortex-induced lift force on a rigid sphere at moderate Re. Zbl 1026.76016 Bagchi, P.; Balachandar, S. 2002 Interactions of a stationary finite-sized particle with wall turbulence. Zbl 1159.76337 Zeng, Lanying; Balachandar, S.; Fischer, Paul; Najjar, Fady 2008 Evaluation of the equilibrium Eulerian approach for the evolution of particle concentration in isotropic turbulence. Zbl 1136.76617 Rani, Sarma L.; Balachandar, S. 2003 Forces on a finite-sized particle located close to a wall in a linear shear flow. Zbl 1183.76599 Zeng, Lanying; Najjar, Fady; Balachandar, S.; Fischer, Paul 2009 Wall effects in non-Boussinesq density currents. Zbl 1189.76160 Bonometti, Thomas; Balachandar, S.; Magnaudet, Jacques 2008 Direct numerical simulations of planar and cylindrical density currents. Zbl 1111.74338 Cantero, Mariano I.; Balachandar, S.; García, Marcelo H.; Ferry, James P. 2006 Equation of motion for a sphere in non-uniform compressible flows. Zbl 1248.76124 Parmar, M.; Haselbacher, A.; Balachandar, S. 2012 A massively parallel multi-block hybrid compact WENO scheme for compressible flows. Zbl 1172.76033 Chao, J.; Haselbacher, A.; Balachandar, S. 2009 Pairwise interaction extended point-particle model for a random array of monodisperse spheres. Zbl 1383.76484 Akiki, G.; Jackson, T. L.; Balachandar, S. 2017 Inertial and viscous forces on a rigid sphere in straining flows at moderate Reynolds numbers. Zbl 1064.76022 Bagchi, Prosenjit; Balachandar, S. 2003 On the unsteady inviscid force on cylinders and spheres in subcritical compressible flow. Zbl 1256.76072 Parmar, M.; Haselbacher, A.; Balachandar, S. 2008 A divergence-free Chebyshev collocation procedure for incompressible flows with two non-periodic directions. Zbl 0768.76054 Madabhushi, Ravi K.; Balachandar, S.; Vanka, S. P. 1993 Drag and lift forces on a spherical particle moving on a wall in a shear flow at finite $$Re$$. 
Zbl 1197.76033 Lee, Hyungoo; Balachandar, S. 2010 Direct numerical simulations of a rapidly expanding thermal plume: Structure and entrainment interaction. Zbl 1151.76563 Plourde, Frédéric; Pham, Minh Vuong; Kim, Son Doan; Balachandar, S. 2008 The generation of axial vorticity in solid-propellant rocket-motor flows. Zbl 0984.76016 Balachandar, S.; Buckmaster, J. D.; Short, M. 2001 Computations of flow and heat transfer in parallel-plate fin heat exchangers on the CM-5: Effects of flow unsteadiness and three-dimensionality. Zbl 0921.76109 Zhang, L. W.; Tafti, D. K.; Najjar, F. M.; Balachandar, S. 1997 Vortical nature of thermal plumes in turbulent convection. Zbl 0925.76244 Cortese, T.; Balachandar, S. 1993 Phenomenological theory of probability distributions in turbulence. Zbl 0724.76035 Yakhot, Victor; Orszag, Steven A.; Balachandar, S.; Jackson, Eric; She, Zhen-Su; Sirovich, Lawrence 1990 Self-induced velocity correction for improved drag estimation in Euler-Lagrange point-particle simulations. Zbl 1416.76241 Balachandar, S.; Liu, Kai; Lakhote, Mandar 2019 Immersed boundary method with non-uniform distribution of Lagrangian markers for a non-uniform Eulerian mesh. Zbl 1351.76201 Akiki, G.; Balachandar, S. 2016 A locally implicit improvement of the equilibrium Eulerian method. Zbl 1136.76507 Ferry, Jim; Rani, Sarma L.; Balachandar, S. 2003 Viscous and inviscid instabilities of flow along a streamwise corner. Zbl 0968.76022 Parker, S. J.; Balachandar, S. 1999 Analysis and modeling of buoyancy-generated turbulence using numerical data. Zbl 0917.76033 Girimaji, S. S.; Balachandar, S. 1998 Direct numerical simulation of cylindrical particle-laden gravity currents. Zbl 1390.76902 Zgheib, N.; Bonometti, T.; Balachandar, S. 2015 History force on a sphere in a weak linear shear flow. Zbl 1135.76575 Wakaba, L.; Balachandar, S. 2005 A new approach for solving a model for HIV infection of $$\mathrm{CD}4^{+}$$ T-cells arising in mathematical chemistry using wavelets. 
Zbl 1364.92026 Venkatesh, S. G.; Raja Balachandar, S.; Ayyaswamy, S. K.; Balasubramanian, K. 2016 Direct numerical simulations of instability and boundary layer turbulence under a solitary wave. Zbl 1294.76116 Ozdemir, Celalettin E.; Hsu, Tian-Jian; Balachandar, S. 2013 Legendre wavelets based approximation method for Cauchy problems. Zbl 1268.65141 Venkatesh, S. G.; Ayyaswamy, S. K.; Balachandar, S. Raja; Kannan, K. 2012 Convergence analysis of Legendre wavelets method for solving Fredholm integral equations. Zbl 1266.65211 Venkatesh, S. G.; Ayyaswamy, S. K.; Balachandar, S. Raja 2012 Modeling of the unsteady force for shock-particle interaction. Zbl 1255.76062 Parmar, M.; Haselbacher, A.; Balachandar, S. 2009 A Eulerian model for large-eddy simulation of concentration of particles with small Stokes numbers. Zbl 1182.76699 Shotorban, Babak; Balachandar, S. 2007 On the added mass force at finite Reynolds and acceleration numbers. Zbl 1161.76462 Wakaba, L.; Balachandar, S. 2007 Natural convection in a horizontal layer of fluid with a periodic array of square cylinders in the interior. Zbl 1186.76316 Lee, Jae Ryong; Ha, Man Yeong; Balachandar, S.; Yoon, Hyun Sik; Lee, Sang San 2004 Unsteady heat transfer from a sphere in a uniform cross-flow. Zbl 1184.76040 Balachandar, S.; Ha, M. Y. 2001 Heat transfer enhancement mechanisms in inline and staggered parallel-plate fin heat exchangers. Zbl 0939.76528 Zhang, L. W.; Balachandar, S.; Tafti, D. K.; Najjar, F. M. 1997 Inviscid instability of streamwise corner flow. Zbl 0831.76014 Balachandar, S.; Malik, M. R. 1995 Mean and fluctuating components of drag and lift forces on an isolated finite-sized particle in turbulence. Zbl 1291.76190 Kim, Jungwoo; Balachandar, S. 2012 Numerical simulations of flow and heat transfer past a circular cylinder with a periodic array of fins. Zbl 1186.76315 Lee, Dong Hyuk; Ha, Man Yeong; Balachandar, S.; Lee, Sangsan 2004 A first course in computational fluid dynamics. 
Zbl 1410.76001 Aref, H.; Balachandar, S. 2018 On the evolution of the plume function and entrainment in the near-source region of lazy plumes. Zbl 1421.76209 Marjanovic, G.; Taub, G. N.; Balachandar, S. 2017 Evaluation of methods for calculating volume fraction in eulerian-Lagrangian multiphase flow simulations. Zbl 1349.76871 Diggs, Angela; Balachandar, S. 2016 Front conditions of high-Re gravity currents produced by constant and time-dependent influx: an analytical and numerical study. Zbl 1408.76164 Shringarpure, Mrugesh; Lee, Hyungoo; Ungarish, Marius; Balachandar, S. 2013 A numerical investigation of high-Reynolds-number constant-volume non-Boussinesq density currents in deep ambient. Zbl 1225.76072 Bonometti, Thomas; Ungarish, Marius; Balachandar, S. 2011 Slumping of non-Boussinesq density currents of various initial fractional depths: a comparison between direct numerical simulations and a recent shallow-water model. Zbl 1242.76030 Bonometti, Thomas; Balachandar, S. 2010 Transient phenomena in one-dimensional compressible gas-particle flows. Zbl 1255.76139 Ling, Y.; Haselbacher, A.; Balachandar, S. 2009 Optimal two-dimensional models for wake flows. Zbl 1184.76041 Balachandar, S.; Najjar, F. M. 2001 The minimum value of the harmonic index for a graph with the minimum degree two. Zbl 1441.05053 Deng, Hanyuan; Balachandran, S.; Balachandar, S. Raja 2020 Asymptotic scaling laws and semi-similarity solutions for a finite-source spherical blast wave. Zbl 1415.76437 Ling, Y.; Balachandar, S. 2018 Suspension-driven gravity surges on horizontal surfaces: effect of the initial shape. Zbl 1390.76903 Zgheib, N.; Bonometti, T.; Balachandar, S. 2017 An approximation method for solving Burgers’ equation using Legendre wavelets. Zbl 1381.35148 Venkatesh, S. G.; Ayyaswamy, S. K.; Raja Balachandar, S. 2017 Faxén form of time-domain force on a sphere in unsteady spatially varying viscous compressible flows. Zbl 1383.76415 Annamalai, Subramanian; Balachandar, S. 
2017 Front dynamics and entrainment of finite circular gravity currents on an unbounded uniform slope. Zbl 1392.76098 Zgheib, N.; Ooi, A.; Balachandar, S. 2016 Trees with smaller harmonic indices. Zbl 06749938 Deng, Hanyuan; Balachandran, S.; Venkatakrishnan, Y. B.; Balachandar, S. Raja 2016 Fractional polynomial method for solving integro-differential equations of fractional order. Zbl 1350.65146 Krishnaveni, K.; Balachandar, S. Raja; Venkatesh, S. G. 2016 Dynamics of complete turbulence suppression in turbidity currents driven by monodisperse suspensions of sediment. Zbl 1275.76153 Shringarpure, Mrugesh; Cantero, Mariano I.; Balachandar, S. 2012 A new heuristic approach for knapsack/covering problem. Zbl 1266.90120 Raja Balachandar, S.; Kannan, K. 2011 A numerical source of small-scale number-density fluctuations in Eulerian-Lagrangian simulations of multiphase flows. Zbl 1329.76353 Ling, Y.; Haselbacher, A.; Balachandar, S. 2010 A new polynomial time algorithm for 0-1 multiple knapsack problem based on dominant principles. Zbl 1147.65045 Raja Balachandar, S.; Kannan, K. 2008 Randomized gravitational emulation search algorithm for symmetric traveling salesman problem. Zbl 1193.90176 Balachandar, S. Raja; Kannan, K. 2007 Onset of vortex shedding in an inline and staggered array of rectangular cylinders. Zbl 1185.76043 Balachandar, S.; Parker, S. J. 2002 Spurious modes in spectral collocation methods with two non-periodic directions. Zbl 0808.76065 1994 Structure extraction by stochastic estimation with adaptive events. Zbl 0800.76198 1993 The minimum value of the harmonic index for a graph with the minimum degree two. Zbl 1441.05053 Deng, Hanyuan; Balachandran, S.; Balachandar, S. Raja 2020 Self-induced velocity correction for improved drag estimation in Euler-Lagrange point-particle simulations. Zbl 1416.76241 Balachandar, S.; Liu, Kai; Lakhote, Mandar 2019 A first course in computational fluid dynamics. Zbl 1410.76001 Aref, H.; Balachandar, S. 
2018 Asymptotic scaling laws and semi-similarity solutions for a finite-source spherical blast wave. Zbl 1415.76437 Ling, Y.; Balachandar, S. 2018 Pairwise interaction extended point-particle model for a random array of monodisperse spheres. Zbl 1383.76484 Akiki, G.; Jackson, T. L.; Balachandar, S. 2017 On the evolution of the plume function and entrainment in the near-source region of lazy plumes. Zbl 1421.76209 Marjanovic, G.; Taub, G. N.; Balachandar, S. 2017 Suspension-driven gravity surges on horizontal surfaces: effect of the initial shape. Zbl 1390.76903 Zgheib, N.; Bonometti, T.; Balachandar, S. 2017 An approximation method for solving Burgers’ equation using Legendre wavelets. Zbl 1381.35148 Venkatesh, S. G.; Ayyaswamy, S. K.; Raja Balachandar, S. 2017 Faxén form of time-domain force on a sphere in unsteady spatially varying viscous compressible flows. Zbl 1383.76415 Annamalai, Subramanian; Balachandar, S. 2017 Immersed boundary method with non-uniform distribution of Lagrangian markers for a non-uniform Eulerian mesh. Zbl 1351.76201 Akiki, G.; Balachandar, S. 2016 A new approach for solving a model for HIV infection of $$\mathrm{CD}4^{+}$$ T-cells arising in mathematical chemistry using wavelets. Zbl 1364.92026 Venkatesh, S. G.; Raja Balachandar, S.; Ayyaswamy, S. K.; Balasubramanian, K. 2016 Evaluation of methods for calculating volume fraction in eulerian-Lagrangian multiphase flow simulations. Zbl 1349.76871 Diggs, Angela; Balachandar, S. 2016 Front dynamics and entrainment of finite circular gravity currents on an unbounded uniform slope. Zbl 1392.76098 Zgheib, N.; Ooi, A.; Balachandar, S. 2016 Trees with smaller harmonic indices. Zbl 06749938 Deng, Hanyuan; Balachandran, S.; Venkatakrishnan, Y. B.; Balachandar, S. Raja 2016 Fractional polynomial method for solving integro-differential equations of fractional order. Zbl 1350.65146 Krishnaveni, K.; Balachandar, S. Raja; Venkatesh, S. G. 
2016 Direct numerical simulation of cylindrical particle-laden gravity currents. Zbl 1390.76902 Zgheib, N.; Bonometti, T.; Balachandar, S. 2015 Direct numerical simulations of instability and boundary layer turbulence under a solitary wave. Zbl 1294.76116 Ozdemir, Celalettin E.; Hsu, Tian-Jian; Balachandar, S. 2013 Front conditions of high-Re gravity currents produced by constant and time-dependent influx: an analytical and numerical study. Zbl 1408.76164 Shringarpure, Mrugesh; Lee, Hyungoo; Ungarish, Marius; Balachandar, S. 2013 The Legendre wavelet method for solving initial value problems of Bratu-type. Zbl 1247.65180 Venkatesh, S. G.; Ayyaswamy, S. K.; Balachandar, S. Raja 2012 Equation of motion for a sphere in non-uniform compressible flows. Zbl 1248.76124 Parmar, M.; Haselbacher, A.; Balachandar, S. 2012 Legendre wavelets based approximation method for Cauchy problems. Zbl 1268.65141 Venkatesh, S. G.; Ayyaswamy, S. K.; Balachandar, S. Raja; Kannan, K. 2012 Convergence analysis of Legendre wavelets method for solving Fredholm integral equations. Zbl 1266.65211 Venkatesh, S. G.; Ayyaswamy, S. K.; Balachandar, S. Raja 2012 Mean and fluctuating components of drag and lift forces on an isolated finite-sized particle in turbulence. Zbl 1291.76190 Kim, Jungwoo; Balachandar, S. 2012 Dynamics of complete turbulence suppression in turbidity currents driven by monodisperse suspensions of sediment. Zbl 1275.76153 Shringarpure, Mrugesh; Cantero, Mariano I.; Balachandar, S. 2012 A numerical investigation of high-Reynolds-number constant-volume non-Boussinesq density currents in deep ambient. Zbl 1225.76072 Bonometti, Thomas; Ungarish, Marius; Balachandar, S. 2011 A new heuristic approach for knapsack/covering problem. Zbl 1266.90120 Raja Balachandar, S.; Kannan, K. 2011 Turbulent dispersed multiphase flow. Zbl 1345.76106 Balachandar, S.; Eaton, John K. 2010 Drag and lift forces on a spherical particle moving on a wall in a shear flow at finite $$Re$$. 
Zbl 1197.76033 Lee, Hyungoo; Balachandar, S. 2010 Slumping of non-Boussinesq density currents of various initial fractional depths: a comparison between direct numerical simulations and a recent shallow-water model. Zbl 1242.76030 Bonometti, Thomas; Balachandar, S. 2010 A numerical source of small-scale number-density fluctuations in Eulerian-Lagrangian simulations of multiphase flows. Zbl 1329.76353 Ling, Y.; Haselbacher, A.; Balachandar, S. 2010 Wake structure of a transversely rotating sphere at moderate Reynolds numbers. Zbl 1171.76352 Giacobello, M.; Ooi, A.; Balachandar, S. 2009 Forces on a finite-sized particle located close to a wall in a linear shear flow. Zbl 1183.76599 Zeng, Lanying; Najjar, Fady; Balachandar, S.; Fischer, Paul 2009 A massively parallel multi-block hybrid compact WENO scheme for compressible flows. Zbl 1172.76033 Chao, J.; Haselbacher, A.; Balachandar, S. 2009 Modeling of the unsteady force for shock-particle interaction. Zbl 1255.76062 Parmar, M.; Haselbacher, A.; Balachandar, S. 2009 Transient phenomena in one-dimensional compressible gas-particle flows. Zbl 1255.76139 Ling, Y.; Haselbacher, A.; Balachandar, S. 2009 Effect of Schmidt number on the structure and propagation of density currents. Zbl 1178.76115 Bonometti, Thomas; Balachandar, S. 2008 Interactions of a stationary finite-sized particle with wall turbulence. Zbl 1159.76337 Zeng, Lanying; Balachandar, S.; Fischer, Paul; Najjar, Fady 2008 Wall effects in non-Boussinesq density currents. Zbl 1189.76160 Bonometti, Thomas; Balachandar, S.; Magnaudet, Jacques 2008 On the unsteady inviscid force on cylinders and spheres in subcritical compressible flow. Zbl 1256.76072 Parmar, M.; Haselbacher, A.; Balachandar, S. 2008 Direct numerical simulations of a rapidly expanding thermal plume: Structure and entrainment interaction. Zbl 1151.76563 Plourde, Frédéric; Pham, Minh Vuong; Kim, Son Doan; Balachandar, S. 
2008 A new polynomial time algorithm for 0-1 multiple knapsack problem based on dominant principles. Zbl 1147.65045 Raja Balachandar, S.; Kannan, K. 2008 On the front velocity of gravity currents. Zbl 1178.76135 Cantero, Mariano I.; Lee, J. R.; Balachandar, S.; Garcia, Marcelo H. 2007 Effects of polymer stresses on eddy structures in drag-reduced turbulent channel flow. Zbl 1175.76069 Kim, Kyoungyoun; Li, Chang-F.; Sureshkumar, R.; Balachandar, S.; Adrian, Ronald J. 2007 High-resolution simulations of cylindrical density currents. Zbl 1141.76378 Cantero, Mariano I.; Balachandar, S.; Garcia, Marcelo H. 2007 A Eulerian model for large-eddy simulation of concentration of particles with small Stokes numbers. Zbl 1182.76699 Shotorban, Babak; Balachandar, S. 2007 On the added mass force at finite Reynolds and acceleration numbers. Zbl 1161.76462 Wakaba, L.; Balachandar, S. 2007 Randomized gravitational emulation search algorithm for symmetric traveling salesman problem. Zbl 1193.90176 Balachandar, S. Raja; Kannan, K. 2007 Direct numerical simulations of planar and cylindrical density currents. Zbl 1111.74338 Cantero, Mariano I.; Balachandar, S.; García, Marcelo H.; Ferry, James P. 2006 On the relationships between local vortex identification schemes. Zbl 1071.76015 Chakraborty, Pinaki; Balachandar, S.; Adrian, Ronald J. 2005 Wall-induced forces on a rigid sphere at finite Reynolds number. Zbl 1102.76017 Zeng, Lanying; Balachandar, S.; Fischer, Paul 2005 History force on a sphere in a weak linear shear flow. Zbl 1135.76575 Wakaba, L.; Balachandar, S. 2005 Response of the wake of an isolated particle to an isotropic turbulent flow. Zbl 1131.76324 Bagchi, Prosenjit; Balachandar, S. 2004 Natural convection in a horizontal layer of fluid with a periodic array of square cylinders in the interior. 
Zbl 1186.76316 Lee, Jae Ryong; Ha, Man Yeong; Balachandar, S.; Yoon, Hyun Sik; Lee, Sang San 2004 Numerical simulations of flow and heat transfer past a circular cylinder with a periodic array of fins. Zbl 1186.76315 Lee, Dong Hyuk; Ha, Man Yeong; Balachandar, S.; Lee, Sangsan 2004 Effect of turbulence on the drag and lift of a particle. Zbl 1186.76040 Bagchi, P.; Balachandar, S. 2003 Evaluation of the equilibrium Eulerian approach for the evolution of particle concentration in isotropic turbulence. Zbl 1136.76617 Rani, Sarma L.; Balachandar, S. 2003 Inertial and viscous forces on a rigid sphere in straining flows at moderate Reynolds numbers. Zbl 1064.76022 Bagchi, Prosenjit; Balachandar, S. 2003 A locally implicit improvement of the equilibrium Eulerian method. Zbl 1136.76507 Ferry, Jim; Rani, Sarma L.; Balachandar, S. 2003 Effect of free rotation on the motion of a solid sphere in linear shear flow at moderate re. Zbl 1185.76040 Bagchi, P.; Balachandar, S. 2002 Steady planar straining flow past a rigid sphere at moderate Reynolds number. Zbl 1062.76015 Bagchi, P.; Balachandar, S. 2002 Shear versus vortex-induced lift force on a rigid sphere at moderate Re. Zbl 1026.76016 Bagchi, P.; Balachandar, S. 2002 Onset of vortex shedding in an inline and staggered array of rectangular cylinders. Zbl 1185.76043 Balachandar, S.; Parker, S. J. 2002 A fast Eulerian method for disperse two-phase flow. Zbl 1137.76577 Ferry, Jim; Balachandar, S. 2001 The generation of axial vorticity in solid-propellant rocket-motor flows. Zbl 0984.76016 Balachandar, S.; Buckmaster, J. D.; Short, M. 2001 Unsteady heat transfer from a sphere in a uniform cross-flow. Zbl 1184.76040 Balachandar, S.; Ha, M. Y. 2001 Optimal two-dimensional models for wake flows. Zbl 1184.76041 Balachandar, S.; Najjar, F. M. 2001 Mechanisms for generating coherent packets of hairpin vortices in channel flow. Zbl 0946.76030 Zhou, J.; Adrian, R. J.; Balachandar, S.; Kendall, T. M. 
1999 Three-dimensional floquet instability of the wake of square cylinder. Zbl 1147.76482 Robichaux, J.; Balachandar, S.; Vanka, S. P. 1999 Viscous and inviscid instabilities of flow along a streamwise corner. Zbl 0968.76022 Parker, S. J.; Balachandar, S. 1999 Analysis and modeling of buoyancy-generated turbulence using numerical data. Zbl 0917.76033 Girimaji, S. S.; Balachandar, S. 1998 Properties of the mean recirculation region in the wakes of two-dimensional bluff bodies. Zbl 0899.76131 Balachandar, S.; Mittal, R.; Najjar, F. M. 1997 Computations of flow and heat transfer in parallel-plate fin heat exchangers on the CM-5: Effects of flow unsteadiness and three-dimensionality. Zbl 0921.76109 Zhang, L. W.; Tafti, D. K.; Najjar, F. M.; Balachandar, S. 1997 Heat transfer enhancement mechanisms in inline and staggered parallel-plate fin heat exchangers. Zbl 0939.76528 Zhang, L. W.; Balachandar, S.; Tafti, D. K.; Najjar, F. M. 1997 Direct numerical simulation of flow past elliptic cylinders. Zbl 0849.76064 Mittal, R.; Balachandar, S. 1996 Autogeneration of near-wall vortical structures in channel flow. Zbl 1027.76589 Zhou, Jigen; Adrian, Ronald J.; Balachandar, S. 1996 Effect of three-dimensionality on the lift and drag of nominally two-dimensional cylinders. Zbl 1032.76530 Mittel, R.; Balachandar, S. 1995 Inviscid instability of streamwise corner flow. Zbl 0831.76014 Balachandar, S.; Malik, M. R. 1995 Spurious modes in spectral collocation methods with two non-periodic directions. Zbl 0808.76065 1994 A divergence-free Chebyshev collocation procedure for incompressible flows with two non-periodic directions. Zbl 0768.76054 Madabhushi, Ravi K.; Balachandar, S.; Vanka, S. P. 1993 Vortical nature of thermal plumes in turbulent convection. Zbl 0925.76244 Cortese, T.; Balachandar, S. 1993 Structure extraction by stochastic estimation with adaptive events. Zbl 0800.76198 1993 Phenomenological theory of probability distributions in turbulence. 
Zbl 0724.76035 Yakhot, Victor; Orszag, Steven A.; Balachandar, S.; Jackson, Eric; She, Zhen-Su; Sirovich, Lawrence 1990 Methods for evaluating fluid velocities in spectral simulations of turbulence. Zbl 0672.76057 Balachandar, S.; Maxey, M. R. 1989 Chaotic advection in a Stokes flow. Zbl 0608.76028 Aref, H.; Balachandar, S. 1986 all top 5 #### Cited by 1,824 Authors 50 Balachandar, S. Raja 17 Meiburg, Eckart H. 14 Brandt, Luca 12 Sung, Hyung Jin 11 Picano, Francesco 11 Ungarish, Marius 10 Adrian, Ronald J. 10 Bonometti, Thomas 10 Hourigan, Kerry 10 Thompson, Mark Christopher 10 Wang, Lianping 9 Wang, Jinjun 8 Feng, Lihao 8 Ha, Man Yeong 8 McKeon, Beverley J. 8 Mittal, Rajat 7 Andersson, Helge I. 7 Casciola, Carlo Massimo 7 Ganapathisubramani, Bharathram 7 Sardina, Gaetano 7 Shen, Lian 7 Simonin, Olivier 7 Wu, Xiaohua 7 Yoon, Hyun Sik 6 Cantero, Mariano I. 6 Coletti, Filippo 6 Constantinescu, George 6 Fox, Rodney O. 6 Majdalani, Joseph 6 Marusic, Ivan 6 Moin, Parviz 6 Najjar, Fady M. 6 Zaki, Tamer A. 5 Ayala, Orlando M. 5 Christensen, Kenneth T. 5 Desjardins, Olivier 5 Elsinga, Gerrit E. 5 Gualtieri, P. 5 Haselbacher, Andreas 5 Katz, Joseph L. 5 Koch, Donald L. 5 Kumaran, V. 5 Leweke, Thomas 5 Magnaudet, Jacques 5 Pan, Chong 5 Peng, Cheng 5 Pirozzoli, Sergio 5 Prosperetti, Andrea 5 Schultz, Michael P. 5 Zhao, Lihao 4 Ayyaswamy, Singaraj Kulandaiswamy 4 Bagchi, Prosenjit 4 Bernardini, Matteo 4 Borée, Jacques 4 Capecelatro, Jesse 4 Collins, Lance R. 4 Fornari, Walter 4 Fröhlich, Jochen 4 Ghaemi, Sina 4 Goswami, Partha S. 4 Graham, Michael D. 4 Haller, George 4 Hallez, Yannick 4 Ireland, Peter J. 4 Jiménez, Javier 4 Kuerten, J. G. M. 4 Lee, Jae Hwa 4 Ling, Yin 4 Lu, Xiyun 4 Mani, Ali 4 Marie, Jean-Louis 4 Rastello, Marie 4 Richter, David H. 4 Rist, Ulrich 4 Robinet, Jean-Christophe 4 Samtaney, Ravi 4 Smits, Alexander J. 4 Subramaniam, Shankar 4 Tafti, Danesh K. 4 Uhlmann, Markus 4 Venkatesh, S. G. 4 Volino, Ralph J. 
4 Wiggins, Stephen 4 Yang, Di 4 Yu, Zhaosheng 3 Akiki, G. 3 Alam, Muhammad Mahbub 3 Alipchenkov, Vladimir Mikhailovich 3 Aref, Hassan 3 Balachandran, Selvaraj 3 Cheng, Liang 3 Cherubini, Stefania 3 Chou, Yi-Ju 3 Ding, Hang 3 Don, Wai Sun 3 Elyyan, Mohammad A. 3 Fischer, Paul F. 3 Flack, Karen A. 3 García, Marcelo H. 3 Gonzalez-Juez, Esteban ...and 1,724 more Authors all top 5 #### Cited in 86 Serials 423 Journal of Fluid Mechanics 111 Physics of Fluids 65 Computers and Fluids 63 Journal of Computational Physics 18 European Journal of Mechanics. B. Fluids 17 International Journal of Heat and Mass Transfer 12 International Journal for Numerical Methods in Fluids 10 Flow, Turbulence and Combustion 9 Theoretical and Computational Fluid Dynamics 9 Applied Mathematical Modelling 8 Physics of Fluids, A 7 Physica D 7 Chaos 7 Journal of Turbulence 7 Acta Mechanica Sinica 6 Computers & Mathematics with Applications 6 International Journal of Numerical Methods for Heat & Fluid Flow 4 Acta Mechanica 4 Chaos, Solitons and Fractals 3 Fluid Dynamics 3 Applied Mathematics and Computation 3 Journal of Computational and Applied Mathematics 3 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 3 Mathematical Problems in Engineering 3 International Journal of Applied and Computational Mathematics 2 Computer Methods in Applied Mechanics and Engineering 2 Journal of Engineering Mathematics 2 Shock Waves 2 Applied Numerical Mathematics 2 Journal of Scientific Computing 2 Journal of Non-Newtonian Fluid Mechanics 2 Engineering Analysis with Boundary Elements 2 Journal of Mathematical Chemistry 2 International Journal of Computational Fluid Dynamics 2 Nonlinear Dynamics 2 Journal of Applied Mechanics and Technical Physics 2 Communications in Nonlinear Science and Numerical Simulation 2 Journal of Applied Mathematics 2 Izvestiya. Atmospheric and Oceanic Physics 2 International Journal of Biomathematics 2 S$$\vec{\text{e}}$$MA Journal 2 AMM. 
Applied Mathematics and Mechanics. (English Edition) 1 Computer Physics Communications 1 Discrete Mathematics 1 Geophysical and Astrophysical Fluid Dynamics 1 Journal of Mathematical Physics 1 Journal of Statistical Physics 1 Physica A 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Mathematics of Computation 1 Demonstratio Mathematica 1 International Journal for Numerical Methods in Engineering 1 Meccanica 1 Applied Mathematics and Mechanics. (English Edition) 1 Numerical Methods for Partial Differential Equations 1 Annals of Operations Research 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Computational Mathematics and Mathematical Physics 1 European Journal of Operational Research 1 SIAM Journal on Applied Mathematics 1 SIAM Journal on Scientific Computing 1 Computational and Applied Mathematics 1 Journal of the Egyptian Mathematical Society 1 Taiwanese Journal of Mathematics 1 Journal of Combinatorial Optimization 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Discrete Dynamics in Nature and Society 1 Regular and Chaotic Dynamics 1 International Journal of Nonlinear Sciences and Numerical Simulation 1 The ANZIAM Journal 1 Proceedings of the National Academy of Sciences, India. Section A. Physical Sciences 1 Sādhanā 1 Journal of Numerical Mathematics 1 Multiscale Modeling & Simulation 1 International Journal of Computational Methods 1 Boundary Value Problems 1 Proyecciones 1 International Journal for Numerical Methods in Biomedical Engineering 1 Journal of Mathematics and Computer Science. 
JMCS 1 Afrika Matematika 1 Journal of Theoretical Biology 1 Computational Methods for Differential Equations 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings and Surveys 1 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences all top 5 #### Cited in 28 Fields 847 Fluid mechanics (76-XX) 84 Numerical analysis (65-XX) 39 Classical thermodynamics, heat transfer (80-XX) 33 Dynamical systems and ergodic theory (37-XX) 24 Geophysics (86-XX) 23 Mechanics of deformable solids (74-XX) 19 Partial differential equations (35-XX) 14 Biology and other natural sciences (92-XX) 9 Ordinary differential equations (34-XX) 8 Operations research, mathematical programming (90-XX) 4 Statistical mechanics, structure of matter (82-XX) 3 History and biography (01-XX) 3 Probability theory and stochastic processes (60-XX) 3 Mechanics of particles and systems (70-XX) 2 Combinatorics (05-XX) 2 Approximations and expansions (41-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Computer science (68-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Special functions (33-XX) 1 Integral equations (45-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 General topology (54-XX) 1 Statistics (62-XX) 1 Optics, electromagnetic theory (78-XX) 1 Quantum theory (81-XX) 1 Astronomy and astrophysics (85-XX) 1 Systems theory; control (93-XX)
http://bkms.kms.or.kr/journal/view.html?doi=10.4134/BKMS.b150975
Three nontrivial nonnegative solutions for some critical $p$-Laplacian systems with lower-order negative perturbations

Bull. Korean Math. Soc. 2017 Vol. 54, No. 1, 125-144
https://doi.org/10.4134/BKMS.b150975 Published online January 31, 2017

Chang-Mu Chu, Chun-Yu Lei, Jiao-Jiao Sun, and Hong-Min Suo (Guizhou Minzu University)

Abstract : Three nontrivial nonnegative solutions for some critical quasilinear elliptic systems with lower-order negative perturbations are obtained by using Ekeland's variational principle and the mountain pass theorem.

Keywords : quasilinear elliptic systems, critical Sobolev exponent, sublinear perturbations, Ekeland's variational principle, mountain pass theorem

MSC numbers : 35J50, 35J55, 58J20
http://gatkforums.broadinstitute.org/gatk/discussion/3908/variant-recalibration
Variant Recalibration

Hi, I am running VariantRecalibrator on indels. Prior to this I completed VariantRecalibrator and ApplyRecalibration on SNPs, so the input file is the recalibrated VCF from the SNP ApplyRecalibration step. Below is the error I am getting:

ERROR MESSAGE: Bad input: Values for DP annotation not detected for ANY training variant in the input callset. VariantAnnotator may be used to add these annotations. See http://gatkforums.broadinstitute.org/discussion/49/using-variant-annotator

The command I am using for this is:

jre1.7.0_40/bin/java -Djava.io.tmpdir=./rb_2905_VCF/tmp -Xmx2g \
    -jar GenomeAnalysisTK-2.7-4-g6f46d11/GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -R dbdata/human_g1k_v37.fasta \
    -input ${input_file} \
    --maxGaussians 4 \
    -resource:mills,known=false,training=true,truth=true,prior=12.0 Mills_and_1000G_gold_standard.indels.b37.vcf \
    -resource:omni,known=false,training=true,truth=false,prior=12.0 1000G_omni2.5.b37.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp_137.b37.vcf \
    -resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G_phase1.indels.b37.vcf \
    -an DP -an QD -an FS -an MQRankSum -an ReadPosRankSum \
    -mode INDEL \
    -recalFile $destdir/${input_file%recal.snps.vcf}recal.indel.recal \
    -tranchesFile $destdir/${input_file%recal.snps.vcf}recal.indel.tranches \
    -rscriptFile $destdir/${input_file%recal.snps.vcf}recal.indel.plots.R

If I remove the options -an DP -an QD -an FS -an MQRankSum -an ReadPosRankSum, then I get this error:

Answers

Hi there, I see that you've posted 2 errors, and in each one it says: "Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself." I'd recommend searching the forum for the solution to this question. I definitely have seen this topic asked and answered in the past.

Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT
http://mathhelpforum.com/calculus/14136-differentiable.html
1. ## differentiable

Def: Let f be a real-valued function defined on an interval I containing the point c (we allow the possibility that c is an endpoint of I). We say that f is differentiable at c (or has a derivative at c) if the limit lim x->c (f(x) - f(c))/(x-c) exists and is finite.

A) Use the definition above to prove that f'(x) = (1/3)x^(-2/3) for x not equal to 0

B) Show that f is not differentiable at x = 0

2. Originally Posted by learn18
[the problem above]
What function?

3. The way I'm reading it (though it's probably wrong) is that f can be any real-valued function.

4. Originally Posted by ThePerfectHacker
What function?
Yes, but we need a specific function for part A. You didn't give it to us.

-Dan

5. I'm sorry, I must have just completely lost it. Here is the function for A and B:

f(x) = x^(1/3) for x an element of R

6. Originally Posted by learn18
[the function above]
I'll give you a hint that should help. Consider

[x^(1/3) - a^(1/3)] / (x - a)

Multiply the numerator and denominator by

x^(2/3) + a^(1/3) * x^(1/3) + a^(2/3)

This will rationalize the numerator.
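Carrying the hint through, a sketch of how the rationalization finishes both parts:

```latex
\frac{x^{1/3}-a^{1/3}}{x-a}
  \cdot \frac{x^{2/3}+a^{1/3}x^{1/3}+a^{2/3}}{x^{2/3}+a^{1/3}x^{1/3}+a^{2/3}}
= \frac{x-a}{(x-a)\bigl(x^{2/3}+a^{1/3}x^{1/3}+a^{2/3}\bigr)}
= \frac{1}{x^{2/3}+a^{1/3}x^{1/3}+a^{2/3}}
\xrightarrow[x\to a]{}\ \frac{1}{3a^{2/3}} = \tfrac{1}{3}\,a^{-2/3}
\qquad (a \neq 0).
```

At a = 0 the same expression reduces to 1/x^(2/3), which is unbounded as x -> 0, so the limit does not exist finitely and f is not differentiable there (part B).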
http://www.ck12.org/physics/Electromagnetic-Induction/lesson/Electromagnetic-Induction/
# Electromagnetic Induction

## Passing a loop of wire around a magnetic field can generate electrical current in the loop.

Students will learn how to determine the flux and how to calculate the induced voltage. In addition, students will learn Lenz's law and how to use it to determine the direction of the induced current in a loop of wire.

### Guidance

To understand induction, we need to introduce the concept of electromagnetic flux. If you have a closed, looped wire of area \begin{align*}A\end{align*} (measured in \begin{align*}\mathrm{m^2}\end{align*}) and \begin{align*}N\end{align*} loops, and you pass a magnetic field \begin{align*}B\end{align*} through, the magnetic flux \begin{align*}\Phi\end{align*} is given by the formula below. Again, the relative direction of the loops and the field matter; this relationship is preserved by creating an "area vector": a vector whose magnitude is equal to the area of the loop and whose direction is perpendicular to the plane of the loop. The directions' influence can then be conveniently captured through a dot product:

\begin{align*}\Phi = N \vec{B} \cdot \vec{A} = N B A \cos\theta\end{align*}

The units of magnetic flux are \begin{align*}\text{T} \times \mathrm{m^2}\end{align*}, also known as Webers \begin{align*}\text{(Wb)}\end{align*}. In the example above, there are four loops of wire \begin{align*}(N = 4)\end{align*} and each has area \begin{align*}\pi r^2\end{align*} (horizontally hashed). The magnetic field is pointing at an angle \begin{align*}\theta\end{align*} to the area vector. If the magnetic field has magnitude \begin{align*}B\end{align*}, the flux through the loops will equal \begin{align*}4 \cos \theta \, B \pi r^2\end{align*}. Think of the magnetic flux as the part of the "bundle" of magnetic field lines "held" by the loop that points along the area vector.
If the magnetic flux through a loop or loops changes, electrons in the wire will feel a force, and this will generate a current. The induced voltage (also called electromotive force, or emf) that they feel is equal to the change in flux \begin{align*}\triangle \Phi\end{align*} divided by the amount of time \begin{align*}\triangle t\end{align*} that change took. This relationship is called Faraday's Law of Induction:

\begin{align*}\varepsilon = -\frac{\triangle \Phi}{\triangle t}\end{align*}

The direction of the induced current is determined as follows: the current will flow so as to generate a magnetic field that opposes the change in flux. This is called Lenz's Law. Note that the electromotive force described above is not actually a force, since it is measured in Volts and acts like an induced potential difference. It was originally called that since it caused charged particles to move --- hence electromotive --- and the name stuck (it's somewhat analogous to calling an increase in a particle's gravitational potential energy difference a gravitomotive force). For practical purposes (Ohm's Law, etc.) it can be treated like the voltage from a battery. Since only a changing flux can produce an induced potential difference, one or more of the variables in the flux equation must be changing if the ammeter in the picture above is to register any current. Specifically, the following can all induce a current in the loops of wire:

• Changing the direction or magnitude of the magnetic field.
• Changing the loops' orientation or area.
• Moving the loops out of the region with the magnetic field.

#### Example 1

You are dragging a circular loop of wire of radius 0.25 m across a table at a speed of 2 m/s. There is a 2 m long region of the table where there is a constant magnetic field of magnitude 5 T pointed out of the table. As you drag the loop across the table, what will be the induced emf (a) as the loop enters the field, (b) while it is in the field, and (c) as it exits the field?
##### Solution

(a): As the loop enters the field, the flux will start at zero and begin to increase until the loop is entirely inside the field. The flux will increase from \begin{align*}0 \;\mathrm{T \cdot m^2}\end{align*} to some maximum value in the time it takes for the loop to move into the field. We can find this maximum value using the dimensions of the loop and the strength of the magnetic field. The dot product will be equal to one since the area and magnetic field vectors are parallel:

\begin{align*}\Phi_{max} = B \pi r^2 = (5\ \text{T}) \, \pi \, (0.25\ \text{m})^2 \approx 0.98\ \text{Wb}\end{align*}

Since we also know the radius of the loop and the speed at which it is being pulled, we can also find how long it takes for the loop to move fully into the magnetic field: it must travel its own diameter, so \begin{align*}\triangle t = \frac{2r}{v} = \frac{0.5\ \text{m}}{2\ \text{m/s}} = 0.25\ \text{s}\end{align*}. Now we can find the induced emf in the loop:

\begin{align*}\varepsilon = \frac{\triangle \Phi}{\triangle t} \approx \frac{0.98\ \text{Wb}}{0.25\ \text{s}} \approx 3.9\ \text{V}\end{align*}

(b): There will be no induced emf in the loop once the entire loop is inside the magnetic field, because the magnetic flux will not be changing.

(c): As the loop exits the magnetic field, the induced emf will have the same magnitude as when it entered the field, except that this time it will be negative because the flux is decreasing.

### Inductance Problem Set

1. A speaker consists of a diaphragm (a flat plate), which is attached to a magnet. A coil of wire surrounds the magnet. How can an electrical current be transformed into sound? Why is a coil better than a single loop? If you want to make music, what should you do to the current?
2. A bolt of lightning strikes the ground \begin{align*}200 \;\mathrm{m}\end{align*} away from a \begin{align*}100-\end{align*}turn coil (see above). If the current in the lightning bolt falls from \begin{align*}6.0 \times 10^6 \;\mathrm{A}\end{align*} to \begin{align*}0.0 \;\mathrm{A}\end{align*} in \begin{align*}10 \;\mathrm{ms}\end{align*}, what is the average voltage, \begin{align*}\varepsilon\end{align*}, induced in the coil? What is the direction of the induced current in the coil? (Is it clockwise or counterclockwise?) Assume that the distance to the center of the coil determines the average magnetic induction at the coil's position.
Treat the lightning bolt as a vertical wire with the current flowing toward the ground. 3. A coil of wire with \begin{align*}10\end{align*} loops and a radius of \begin{align*}0.2 \;\mathrm{m}\end{align*} is sitting on the lab bench with an electro-magnet facing into the loop. For the purposes of your sketch, assume the magnetic field from the electromagnet is pointing out of the page. In \begin{align*}0.035 \;\mathrm{s}\end{align*}, the magnetic field drops from \begin{align*}0.42 \;\mathrm{T}\end{align*} to \begin{align*}0 \;\mathrm{T}\end{align*}. 1. What is the voltage induced in the coil of wire? 2. Sketch the direction of the current flowing in the loop as the magnetic field is turned off. (Answer as if you are looking down at the loop). 4. A wire has \begin{align*}2 \;\mathrm{A}\end{align*} of current flowing in the upward direction. 1. What is the value of the magnetic field \begin{align*}2 \;\mathrm{cm}\end{align*} away from the wire? 2. Sketch the direction of the magnetic field lines in the picture to the right. 3. If we turn on a magnetic field of \begin{align*}1.4 \;\mathrm{T}\end{align*}, pointing to the right, what is the value and direction of the force per meter acting on the wire of current? 4. Instead of turning on a magnetic field, we decide to add a loop of wire (with radius \begin{align*}1 \;\mathrm{cm}\end{align*}) with its center \begin{align*}2 \;\mathrm{cm}\end{align*} from the original wire. If we then increase the current in the straight wire by \begin{align*}3 \;\mathrm{A}\end{align*} per second, what is the direction of the induced current flow in the loop of wire? 5. 
A rectangular loop of wire \begin{align*}8.0 \;\mathrm{m}\end{align*} long and \begin{align*}1.0 \;\mathrm{m}\end{align*} wide has a resistor of \begin{align*}5.0 \ \Omega\end{align*} on the \begin{align*}1.0 \;\mathrm{m}\end{align*} side and moves out of a \begin{align*}0.40 \;\mathrm{T}\end{align*} magnetic field at a speed of \begin{align*}2.0 \;\mathrm{m/s}\end{align*} in the direction of the \begin{align*}8.0 \;\mathrm{m}\end{align*} side.
   a. Determine the induced voltage in the loop.
   b. Determine the direction of current.
   c. What would be the net force needed to keep the loop at a steady velocity?
   d. What is the electric field across the \begin{align*}.50 \;\mathrm{m}\end{align*} long resistor?
   e. What is the power dissipated in the resistor?
6. A small rectangular loop of wire \begin{align*}2.00 \;\mathrm{m}\end{align*} by \begin{align*}3.00 \;\mathrm{m}\end{align*} moves with a velocity of \begin{align*}80.0 \;\mathrm{m/s}\end{align*} in a non-uniform field that diminishes in the direction of motion uniformly by \begin{align*}.0400 \;\mathrm{T/m}\end{align*}. Calculate the induced emf in the loop. What would be the direction of current?

#### Answers to Selected Problems

1. (qualitative; no numerical answer)
2. \begin{align*}1.2 \times 10^5 \;\mathrm{V}\end{align*}, counterclockwise
3. a. \begin{align*}15 \;\mathrm{V}\end{align*} b. Counter-clockwise
4. a. \begin{align*}2 \times 10^{-5} \;\mathrm{T}\end{align*} b. Into the page c. \begin{align*}2.8 \;\mathrm{N/m}\end{align*} d. CW
5. a. \begin{align*}0.8 \;\mathrm{V}\end{align*} b. CCW c. \begin{align*}.064 \;\mathrm{N}\end{align*} d. \begin{align*}.16 \;\mathrm{N/C}\end{align*} e. \begin{align*}.13 \;\mathrm{W}\end{align*}
6. \begin{align*}19.2 \;\mathrm{V}\end{align*}
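Several of these answers can be sanity-checked with a few lines of arithmetic. A quick sketch for problems 4a and 5, assuming (for problem 5) that the 1.0 m side is the one sweeping across the field boundary:

```python
import math

# Problem 5: loop leaving a 0.40 T field at 2.0 m/s along its 8.0 m side,
# so the 1.0 m side sweeps across the field boundary.
B, v, L, R = 0.40, 2.0, 1.0, 5.0

emf = B * v * L       # motional emf, answer (a): 0.8 V
I = emf / R           # current through the 5.0-ohm resistor: 0.16 A
F = B * I * L         # force to hold a steady velocity, answer (c): 0.064 N
P = emf ** 2 / R      # power dissipated in the resistor, answer (e): ~0.13 W

# Problem 4a: field 2 cm from a straight wire carrying 2 A, B = mu0*I/(2*pi*r)
mu0 = 4e-7 * math.pi
B_wire = mu0 * 2.0 / (2 * math.pi * 0.02)   # 2e-5 T
```

All the computed values agree with the answer key above.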
http://brilliant.org/practice/polar-coordinates-warmup/?subtopic=complex-numbers&chapter=polar-coordinates
# Polar Coordinates

Polar coordinates are a way to describe where a point is on a plane. Instead of using x and y, you use the angle theta and radius r to describe the angle and distance of the point from the origin.

# Polar Coordinates Warmup

The point $$P$$ has Cartesian coordinates $$(3, 4)$$ and polar coordinates $$(r, \theta).$$ What is the value of $$r$$?

The point $$P$$ has Cartesian coordinates $$(0, 1)$$ and polar coordinates $$(r, \theta).$$ What is the value of $$\theta$$ (in radians)?

What are the polar coordinates of the point whose Cartesian coordinates are $$(0, 1)?$$

The point $$P$$ has polar coordinates $$(1, 0).$$ The answer choices are the polar coordinates of various points - which one is also at $$P?$$

What are the polar coordinates of the point having Cartesian coordinates $$(5, 0)?$$
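The conversions behind these warmups are mechanical; a short sketch using Python's math.hypot and math.atan2, which compute exactly the r and theta of the standard convention:

```python
import math

def to_polar(x, y):
    """Convert Cartesian (x, y) to polar (r, theta), theta in radians."""
    r = math.hypot(x, y)       # r = sqrt(x^2 + y^2)
    theta = math.atan2(y, x)   # angle from the positive x-axis, in [-pi, pi]
    return r, theta

r1, t1 = to_polar(3, 4)   # r1 = 5.0   (first question)
r2, t2 = to_polar(0, 1)   # t2 = pi/2  (second question)
r3, t3 = to_polar(5, 0)   # (5.0, 0.0) (last question)
```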
https://dsp.stackexchange.com/questions/54788/how-to-detrend-a-carrier-phase-with-butterworth-filter
# how to detrend a carrier phase with butterworth filter?

I am new to detrending techniques. I have carrier phase measurements and I want to detrend them with a 6th-order Butterworth digital filter. After creating the digital filter I get its numerator (b) and denominator (a) coefficients. The filter was created by scipy.signal.butter(N, Wn, btype='highpass', analog=False, output='ba')

In this case, do I only need to multiply the carrier phase measurements by the output of the filter, which is b/a, or do I need to do something else?

• Check out lfilter, you need to be using convolution. – A_A Jan 13 at 11:25
• I get the first point, but I did not understand what to do about the second point. – baddy Jan 13 at 12:05
• Is there any particular reason you are restricted to detrending it with a Butterworth digital filter? – Dan Boschen Jan 13 at 12:54
• In fact, I am trying to detrend the carrier phase of a GPS signal in order to remove low-frequency contributions from satellite-receiver range variations, antenna gain patterns, background ionosphere and troposphere delays, receiver and satellite oscillator drifts, etc. I chose Butterworth because it was the same one used by the receiver to produce the processed file, while I am working with the raw data. – baddy Jan 13 at 13:24
• Interesting, I haven't gone through this to know all the details sufficient to confidently provide an answer below, but did find this paper of interest in case you haven't come across it: ion.org/publications/abstract.cfm?articleID=10030 That said, I suspect that you are simply filtering your carrier phase measurement (or other measurements; the paper may clarify that). You found the coefficients of your filter above (b, a = scipy.signal.butter(...)), now to filter use the scipy.signal.lfilter function: out = scipy.signal.lfilter(b, a, phase_in). – Dan Boschen Jan 16 at 13:26
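To the question in the post: you do not multiply by b/a. The (b, a) pair defines a recursive difference equation that scipy.signal.lfilter applies sample by sample (scipy.signal.filtfilt does the same forward and backward for zero-phase detrending). A pure-Python sketch of that difference equation, for illustration only:

```python
def iir_filter(b, a, x):
    """Apply the IIR difference equation defined by coefficients (b, a):

        a[0]*y[n] = b[0]*x[n] + ... + b[M]*x[n-M]
                  - a[1]*y[n-1] - ... - a[N]*y[n-N]

    This is what scipy.signal.lfilter(b, a, x) computes (lfilter itself
    is the thing to call in practice; this loop is just to show the idea).
    """
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# A trivial high-pass (first difference) removes a constant trend:
detrended = iir_filter([1.0, -1.0], [1.0], [3.0, 3.0, 3.0])   # [3.0, 0.0, 0.0]
```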
https://math.stackexchange.com/questions/3464383/infinite-series-that-surprisingly-converge
# Infinite series that surprisingly converge? [closed]

I couldn't find any substantial list of 'strange infinite convergent series', so I wanted to ask the MSE community for some. By strange, I mean infinite series/limits that converge when you would not expect them to and/or converge to something you would not expect.

My favorite converges to Khinchin's (sometimes Khintchine's) constant, $$K$$. For almost all $$x \in \mathbb{R}$$ (those for which this does not hold making up a measure-zero subset) with infinite c.f. representation: $$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$

We have: $$\lim_{n \to \infty} \sqrt[n]{\prod_{i=1}^{n}a_i} = \lim_{n \to \infty}\sqrt[n]{a_1a_2\dots a_n} = K$$

Which is...wow! That it converges independent of $$x$$ really gets me.

• What is "except those in a measure zero subset" supposed to mean here? Surely any $x \in \mathbb{R}$ is in a measure zero subset $\{x\}$. – Alex Provost Dec 5 '19 at 17:38
• @AlexProvost I mean that the limit converges to K for almost all $x \in \mathbb{R}$, those for which it does not belonging to a measure zero set. You can read about it here: mathworld.wolfram.com/KhinchinsConstant.html – The Wheel is Before Descartes Dec 5 '19 at 17:48
• As interesting as this is, I'm not sure this is a "good question" for this site. It's subjective and, while there are good subjective questions, this doesn't meet the criteria, in particular number 6. I'm not saying I don't find it cool, I've watched plenty of Numberphile videos that would hit the same dopamine button this question does, I'm just saying it doesn't seem a good fit – corsiKa Dec 6 '19 at 2:30
• If you are in a Zeno-like mood, all convergent infinite series converge strangely. – John Coleman Dec 6 '19 at 3:37
• The identity from this question is pretty peculiar at first. – Clement C.
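The convergence to K is slow but easy to watch numerically. A sketch that pulls partial quotients out of the double-precision value of pi, whose continued fraction famously begins [3; 7, 15, 1, 292, ...]; the assumption here is that the float's first dozen or so quotients agree with those of pi itself, which holds because the double approximates pi to about 16 digits:

```python
import math
from fractions import Fraction

def partial_quotients(x, count):
    """First `count` continued-fraction coefficients a_0, a_1, ... of x.

    x is converted to an exact Fraction, so for a float this is the
    continued fraction of the float; it matches the real number the
    float approximates only for the first dozen or so terms.
    """
    x = Fraction(x)
    out = []
    for _ in range(count):
        a = math.floor(x)
        out.append(a)
        if x == a:      # rational tail exhausted
            break
        x = 1 / (x - a)
    return out

cf = partial_quotients(math.pi, 12)    # begins [3, 7, 15, 1, 292, ...]
quotients = cf[1:]                     # Khinchin's limit uses a_1, a_2, ...
gm = math.exp(sum(math.log(a) for a in quotients) / len(quotients))
# gm creeps toward K ~ 2.685 only very slowly (the early 292 dominates).
```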
Dec 6 '19 at 6:50

A pretty commonly mentioned one is the Kempner series, which is the Harmonic series but "throwing out" (omitting) the numbers with a 9 in their decimal expansion. And 9 is not special; you can generalize to any finite sequence of digits, and the series will converge. MathWorld has approximate values for the single-digit possibilities.

• This one is surprisingly intuitive, which makes it even better! Imagine you are summing up the harmonic series up to 1,000 and decide to take out all numbers that contain 9. Then, you are removing all numbers 9xx (900-999) plus all other numbers that contain 9. That's 1/10 of all the numbers plus all other numbers that contain 9. The higher you sum up, the more numbers you are deleting, which eventually becomes "almost all the numbers". This same philosophy applies to any sequence of numbers; by throwing them out, you are essentially throwing out "almost all the numbers". – Ty Jensen Dec 7 '19 at 2:53

I still like the fact that $$\sum_{n=N}^\infty \frac{1}{n\ln n \cdot \ln \ln n \cdot \ln \ln \ln n \cdot \ln \ln \ln \ln n}$$ diverges, but $$\sum_{n=N}^\infty \frac{1}{n\ln n \cdot \ln \ln n \cdot \ln \ln \ln n \cdot (\ln \ln \ln \ln n)^{1.01}}$$ converges (where $$N$$ is a large enough constant for the denominator to be defined).

• What is this pattern called? Does this pattern of adding iterated logs $\to$ divergent sum hold generally? – Zach466920 Dec 7 '19 at 18:54
• @Zach466920 Yes, it does. This is called the "generalized Bertrand series", and can be proven using the Cauchy condensation test. – Clement C. Dec 7 '19 at 18:59

Another one I like for how simply it is written is as follows: $$\sum_{n=1}^{\infty}z^nH_n = \frac{-\log(1-z)}{1-z}$$ which holds for $$|z|<1$$, $$H_n$$ being the $$n$$-th harmonic number $$1 + \frac12+\frac13+\dots+\frac1n$$. I can't quite remember where I learned this one from.
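The generating-function identity is easy to check numerically; truncating the series at z = 1/2, say, already matches -log(1-z)/(1-z) = 2 ln 2 to machine precision. A quick sketch:

```python
import math

def gf_partial(z, terms):
    """Partial sum of sum_{n>=1} H_n z^n, with H_n built incrementally."""
    H, total = 0.0, 0.0
    for n in range(1, terms + 1):
        H += 1.0 / n
        total += H * z ** n
    return total

z = 0.5
approx = gf_partial(z, 200)
exact = -math.log(1.0 - z) / (1.0 - z)   # = 2*ln(2) ~ 1.3862943611...
```

The tail beyond 200 terms is bounded by a geometric series, so the truncation error is far below floating-point noise.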
• This follows from a more general fact: if $F(z) = \sum_{n \geq 0} a_n z^n$ then $$\frac{F(z)}{1 - z} = \sum_{n \geq 0} \left(\sum_{j = 0}^n a_j \right)z^n\,.$$ – Marcus M Dec 5 '19 at 19:56 • I wanted to add that this is called the generating function for the sequence $H_n$ of harmonic numbers. – zhantyzgz Dec 5 '19 at 21:16 • This is not only educational but an excellent example. Thanks! – Ty Jensen Dec 7 '19 at 2:56 Let $$x_n$$ be the nth positive solution of $$\csc(x)=x$$, i.e. $$x_1\approx 1.1141$$, $$x_2\approx 2.7726$$, etc. Then, $$\sum_{n=1}^{\infty}\frac{1}{x_n^2}=1$$ Edit: Even more surprisingly, if we define $$s(k)=\sum x_n^{-k}$$, then we have the generating function \begin{align*} \sum_{k=1}^{\infty}s(2k)x^{2k} &=\frac{x}{2}\left(\frac{1+x\cot(x)}{\csc(x)-x}\right) \\ &=x^2+\frac{2x^4}{3}+\frac{21x^6}{40}+\frac{59x^8}{140}+\frac{24625x^{10}}{72576}+\cdots \end{align*} Unfortunately it seems that, as with the Riemann zeta function, the values of $$s$$ at odd integers are out of reach. • Is there a reference for this series? It should be possible to prove this using a contour integral for the function $\frac{\sin(z)+z \cos(z)}{z^2(z\sin(z)-1)}$ over the imaginary axis and a half-circle in $\Re(z)>0$. – Thijs Dec 11 '19 at 20:03 • @Thijs I don't know of one, I found the sum myself. I used an integral over a full circle, since the residues at the negative roots are the same as at the positive roots. – Ben Dec 11 '19 at 22:05 • @Thijs I've updated my post to include a more general result that I also found, the generating function of $\sum x_n^{-2k}$. – Ben Dec 11 '19 at 22:20 You might find some interesting examples in the book, (Almost) Impossible Integrals, Sums, and Series. 
Here you have two examples: First example: $$\small\zeta(4)=\frac{4}{45}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\sum_{k=1}^{\infty} \frac{(i-1)!(j-1)!(k-1)!}{(i+j+k-1)!}\left((H_{i+j+k-1}-H_{k-1})^2+H_{i+j+k-1}^{(2)}-H_{k-1}^{(2)}\right),$$ where $$H_n^{(m)}=1+\frac{1}{2^m}+\cdots+\frac{1}{n^m}, \ m\ge1,$$ denotes the $$n$$th generalized harmonic number of order $$m$$. Second example: Let $$n\ge2$$ be a natural number. Prove that $$\sum_{k_1=1}^{\infty}\left(\sum_{k_2=1}^{\infty}\left(\cdots \sum_{k_n=1}^{\infty} (-1)^{\sum_{i=1}^n k_i} \left(\log(2)-\sum_{k=1}^{\sum_{i=1}^n k_i} \frac{1}{\sum_{i=1}^n k_i +k}\right)\right)\cdots\right)$$ $$=(-1)^n\biggr(\frac{1}{2}\log(2)+\frac{1}{2^{n+1}}\log(2)+\frac{H_n}{2^{n+1}}-\sum_{i=1}^n\frac{1}{i2^{i+1}} -\frac{\pi}{2^{n+2}}\sum_{j=0}^{n-1} \frac{1}{2^j} \binom{2j}{j}$$ $$+\frac{1}{2^{n+1}}\sum_{j=1}^{n-1}\frac{1}{2^j}\binom{2j}{j}\sum_{i=1}^{j}\frac{2^i}{\displaystyle i \binom{2i}{i}}\biggr),$$ where $$H_n=\sum_{k=1}^n\frac{1}{k}$$ denotes the $$n$$th harmonic number. • Yes, that is...quite a strange sum! – The Wheel is Before Descartes Dec 5 '19 at 23:59 • @heepo I added another beautiful example from the same book. – user97357329 Dec 6 '19 at 0:04 • It reminds me of the proof of QR. – The Wheel is Before Descartes Dec 6 '19 at 0:09 I would like to nominate an infinite product: $$\prod_{n=2}^{\infty}\dfrac{n^3-1}{n^3+1}=\dfrac{2}{3}$$ Proof: Factor thusly: $$n^3-1=(n-1)(n^2+n+1)=((n-2)+1)(n^2+n+1)$$ $$n^3+1=(n+1)(n^2-n+1)=(n+1)((n-1)^2+(n-1)+1)$$ and the product then telescopes. Suppose $$\sum_{n=1}^{\infty} a_n$$ and $$\sum_{n=1}^{\infty} b_n$$ are both divergent. Then, one might assume that $$\sum_{n=1}^{\infty} (a_n+b_n)$$ also diverges. This is false. Suppose $$a_n=1$$ and $$b_n=-1$$ for all $$n$$. 
Then $$\sum_{n=1}^{\infty} a_n=\sum_{n=1}^{\infty} \,1 ~~\text{diverges}$$ and $$\sum_{n=1}^{\infty} b_n=\sum_{n=1}^{\infty} \,(-1) ~~\text{diverges}$$ However $$\sum_{n=1}^{\infty} (a_n+b_n)=\sum_{n=1}^{\infty} \,(1+(-1)) =\sum_{n=1}^{\infty}\,0=0$$ is convergent.

To add another: I was surprised when I learned the two sums: $$\sum_{k=1}^{\infty}\frac1{k^2} = \frac{\pi^2}{6}$$ $$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^2} = \frac{\pi^2}{12}$$ And thought the intuition behind the second coming from the famous first sum was neat.

A series from user Reuns, which he proves in a previous question of mine: $$\sum_{k=1}^\infty\frac{\Re(i^{\sigma_0(k)})}{k^s} = \zeta(s)-\zeta(2s)-2\zeta(2s)\sum_{r\ge 1} (-1)^{r}\sum_{p \text{ prime}}p^{-s(2r+1)}$$ For $$s>1$$. (Will remove upon Reuns's request)
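Several of the items in this thread are easy to sanity-check numerically. Below is a small plain-Python sketch (partial sums and products are evidence, not proof) covering three of them: the alternating sum converging to $$\pi^2/12$$, the telescoping cube product converging to $$2/3$$, and the Kempner series, whose partial sums grow far too slowly to approach its limit of roughly 22.92 by brute force:

```python
import math
from fractions import Fraction

def alternating_zeta2(terms):
    # Partial sum of sum_{k>=1} (-1)^(k+1) / k^2, which converges to pi^2 / 12.
    return sum((-1) ** (k + 1) / k ** 2 for k in range(1, terms + 1))

def cube_product(upto):
    # Partial product of prod_{n>=2} (n^3 - 1) / (n^3 + 1); exact rational
    # arithmetic makes the telescoping visible: the partial product up to N
    # equals (2/3) * (N^2 + N + 1) / (N^2 + N), so the limit is 2/3.
    p = Fraction(1)
    for n in range(2, upto + 1):
        p *= Fraction(n ** 3 - 1, n ** 3 + 1)
    return p

def kempner_partial(upto):
    # Partial sum of the harmonic series with every term whose denominator
    # contains the digit 9 thrown out (the Kempner series).
    return sum(1 / n for n in range(1, upto + 1) if '9' not in str(n))
```

For instance, `alternating_zeta2(10**5)` returns 0.822467…, matching $$\pi^2/12 \approx 0.8224670$$, and `cube_product(1000)` is exactly $$\tfrac{2}{3}\cdot\tfrac{1001001}{1001000}$$.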
## Sunday, October 18, 2009

### Are ORMs really a thing of the past?

Stephan Schmidt has blogged about ORMs being a thing of the past. While he emphasizes ORMs' performance concerns and dismisses them as leaky abstractions that throw LazyInitializationException, he does not present any concrete alternative. In his concluding section on alternatives he mentions .. "What about less boiler plate code due to ORMs? Good DAOs with standard CRUD implementations help there. Just use Spring JDBC for databases. Or use Scala with closures instead of templates. A generic base dao will provide create, read, update and delete operations. With much less magic than the ORM does." Unfortunately, all these things work on small projects with a small number of tables. Throw in a large project with a complex domain model, requirements for relational persistence and the usual stacks of requirements that today's enterprise applications offer, and you will soon discover that your home made, less boilerplated stuff goes for a toss. In most cases you will end up either rolling out your own ORM or building a concoction of domain models invaded with indelible concerns of persistence. In the former case, obviously your ORM will not be as performant or efficient as the likes of Hibernate. And in the latter case, either you will end up building an ActiveRecord model with the domain object mirroring your relational table, or you may be more unfortunate with a bigger unmanageable bloat. It's very true that none of the ORMs in the market today are without their pains. You need to know their internals in order to make them generate efficient queries, you need to understand all the nuances to make use of their caching behaviors, and above all you need to manage all the reams of jars that they come with. Yet, in the Java stack, Hibernate and JPA are still the best of options when we talk about big persistent domain models. Here are my points in support of this claim ..
• If you are not designing an ActiveRecord based model, it's of paramount importance that you keep your domain model decoupled from the persistent model. And ORMs offer the most pragmatic way towards this approach. I know people will say that it's indeed difficult to achieve this in the real world and in typical situations compromises need to be made. Yet, I think if you need to make a compromise for performance or whatever reasons, it's only an exception. Ultimately you will find that the majority of your domain model is decoupled enough for a clean evolution.
• ORMs save you from writing tons of SQL code. This is one of the compelling advantages I have found with an ORM: my Java code is not littered with SQL that's impossible to refactor when my schema changes. Again, there will be situations when your ORM may not churn out the best of optimized SQLs and you will have to do that manually. But, as I said before, it's an exception and decisions cannot be made based on exceptions only.
• ORMs help you virtualize your data layer. And this can bring huge gains in scalability. Have a look at how grids like Terracotta can use distributed caches like EhCache to scale out your data layer seamlessly. Without the virtualization of the ORM, you may still achieve scalability using vendor specific data grids. But this comes at the price of significant cost and vendor lock-in.

Stephan also feels that the future of ORMs will be jeopardized because of the advent of polyglot persistence and nosql data stores. The fact is that the use cases that nosql datastores address are very much orthogonal to those served by the relational databases. Key/value lookups with semi-structured data, eventual consistency, efficient processing of web scale networked data backed with the power of map/reduce paradigms are not something that your online transactional enterprise application with strict requirements of ACID will comply with.
So long we have been trying to shoehorn every form of data processing with a single hammer of relational databases. It's indeed very refreshing to see the onset of nosql paradigm and it being already in use in production systems. But ORMs will still have their roles to play in the complementary set of use cases. Stephan.Schmidt said... "[...] littered with SQL that's impossible to refactor when my schema changes." Never saw a ORM (like Hibernate) help when changing the database. And on the contrary: I haven't seen schema changes in large databases, because too many systems (reporting, accounting) depend on a schema. Your domain model will change much more likely, and when the gap between your domain classes and your db is too large, your ORM will break. This often prevents refactoring of domain classes. "If you are not designing an ActiveRecord based model, it's of paramount importance that you keep your domain model decoupled from the persistent model." As said above, ORMs do not decouple your domain classes from the database, but instead nail your domain classes to your database schema. Ever tried splitting domain classes that are in one table? Everything beside renaming classes and attributes is out of the window if you use an ORM (just my experience, YMMV). Cheers Stephan http://www.codemonkeyism.com Mesirii@MG said... You can also use refactoring aware SQL dsls like squill, jequel, empiredb. Regarding those _big_ domain models. In DDD terms they are broken anyway as there are no modules or bounded contexts that address the relevant part of the domain model at once. When talking to BigDaveThomas at JAOO he also stressed that most solutions today are just simple CRUD systems that are bloated with ORM. Just mapping the tables to a screen is often a simple case of generic SQL and you're done :) Michael Michael Stephan.Schmidt said... "Have a look at how grids like Terracotta can use distributed caches like EhCache to scale out your data layer seamlessly." 
We use TC and it does scale out our data without Hibernate. Cheers Stephan

Ari said... I am starting to learn that Hibernate requires more understanding and time than most people want to give it, but with that understanding, it can really work for you. I was talking to a user yesterday who asserts that QueryCaching is bad for him. I listened, checkpointed with someone who knew query caching very well, and found out that it will indeed work for this user if used properly. I see that since Terracotta stopped fighting Hibernate and embraced it in the market, and now that we build products for Hibernate users, my understanding of the technology has grown. Our ability to serve the needs of higher performance while staying within the confines of the Hibernate world has vastly improved over the last year. Yes, Hibernate has a few problems, but I see the path fwd as contributing fixes and helping, not trying to invent yet another way to do what is inevitable: marshaling data to and from an RDBMS. --Ari

Monis Iqbal said... Lazy Initialization is not a feature/side-effect of ORMs; it can be present in your DSLs as well. ORMs seem like a hindrance at the start of the project or when there are less "objects". I think they are well suited for Object-Oriented minded teams. But now, as we are exploring different areas, paradigms, we tend to move away from ORMs and that's natural for these kinds of projects.

Anonymous said... ORM is nothing more than an alternate marshalling scheme. To add layers upon layers of marshalling has never made sense. That said, db calls are tied to the network and, as is the case with all technologies that rely on a slow underpinning, caching will be essential. The best way to make an application cache resistant is to scatter the calls throughout your application. At least ORMs normalize execution paths, which makes it easier to add caching.
That said, for the moment, applications are going to need to rely on something other than RDB technologies if they are going to scale. Technologies such as memcached look very interesting in that it is a very simple technology that is highly scalable. Kirk

Dave said... I'm not sure what Stephan means by saying that ORMs cannot help when refactoring a database; they help a hell of a lot more than strings containing SQL all over the place would; it's trivial to write an integration test to load up all your mapped beans and try to access one. If your mappings and database are inconsistent, you will know immediately what the problem is and where. Having seen many codebases NOT using an ORM, I have to say they were all a big, huge mess. ORM makes the code cleaner (or can help). And clean code can be refactored, maintained and optimized a lot better than a big mess of SQL statements everywhere.

Anonymous said... Just put everything in stored procs, then your java code is completely shielded from the database structure. I've used this approach on a number of projects, and it has worked well. Simple and easy to maintain. This is in stark contrast to the ORM based projects I've worked on where no one on the project truly understood what was going on with all of the complex mappings, cachings, cryptic errors, etc. I can show anyone who knows SQL how to do virtually anything needed in a few hours with stored procs. ORM adds much more complexity...and I often see lazy loading all over the place causing horrible performance.

Anonymous said... I can understand decoupling your domain model from the database in that the domain objects should be simple POJOs. That is where something like JdbcTemplate really shines (similar to using iBatis). In this modern era of polyglot programming, how come we don't recognize that SQL is a language of its own? When tuning queries, it seems easier to enlist our team's database engineer to help me out by showing him queries, rather than bringing him up to speed on XYZ-QL.
remcob said... Maybe the real solution is some kind of "extended ddl" where one could specify "validations/constraints/other business logic" more easily. Like Hibernate, but without the "object mapping" part (why should one try to map relational data to objects?). Like "stored procedures", but functional instead of procedural (though i like spaghetti with cheese). (just an idea)

Anonymous said... Rails doesn't use straight SQL, so there is no need to move away from ORMs. Just wait and see what the Rails guys do.

Anonymous said... I worked on a complete rewrite of a system and the Lead Developer did not use an ORM. We basically wrote our own, and it was a total mess and a waste of time. We spent most of our time debugging our data access layer and never made meaningful progress on the true functional requirements. After a year of hell, the Lead was fired and we threw the code away and started over with an ORM. What a relief! Not having to spend a lot of time dealing with the data access layer freed us up to focus on the functional requirements, which makes happy clients, which makes happy managers, which makes happy developers. Somehow I ended up on a team of developers that thought 3rd party tools are for wimps, and they could write everything themselves. What a bunch of arrogant fools, and what a waste of time. If we had all the time in the world, maybe we could write a better tool, but I doubt it. I freely admit that developers who create tools like Hibernate are smarter than me. Why would I waste any time trying to reinvent the wheel? It may not be perfect, but it's better than anything I could create myself. As for the arrogant fools who liked to create their own tools instead of just finding one already built? They were all fired at various times for consistently not finishing projects.

Debasish said... @Anonymous (of the last comment) That resonates pretty much with what I wanted to say. It's true that ORMs like Hibernate are not without their warts.
At the same time they offer a tonne of benefits too. My suggestion will be:
1. to use what good they offer (and they really offer a lot)
2. avoid the sucky features
3. use your judgement to selectively apply the ones that are debatable.
If you do not want to use the persistence context or automated session management, use the stateless session interface, where you use your ORM to marshal / unmarshal data out of your RDBMS and get stuff in the form of detached objects. Hibernate offers this .. check out http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html#batch-statelesssession ..

RobB said... Session-less Approach: For those wanting a "session-less" approach you can also check out Ebean ORM to see if it is more of your liking. http://www.avaje.org This means you don't need to worry about:
- LazyInitialisationException
- management of session objects (Hibernate session / JPA EntityManager)
- merge/persist/flush replaced with save
Sorry for the blatant plug but if you are looking for a simpler, session-less approach it would be worth a look :) Cheers, Rob.

Christian said... I don't think ORMs are a thing of the past but I also don't think they are a one size fits all option. I was wondering if you've had a chance to check out Squeryl? This is a LINQ style DSL for Scala. Eg:

def songsInPlaylistOrder =
  from(playlistElements, songs)((ple, s) =>
    where(ple.playlistId === id and ple.songId === s.id)
    select(s)
    orderBy(ple.songNumber asc)
  )

This is translated to SQL and executed for you. If you need to refactor (assuming someone will develop adequate refactoring tools for Scala) nothing is missed because there are no hbms to worry about.

Debasish said... I have looked at Squeryl. Then there is ScalaQuery as well and quite a few other frameworks inspired by LINQ. All of them do a nice job of providing type safe queries on the domain objects. This way you save a lot from writing SQLs.
But my main concern is that this process can quickly go out of bounds in a large project where you may have thousands of tables. Besides hiding SQLs, an ORM also does the job of virtualizing the data layer. This means you can scale up your data layer transparently using products like Terracotta, Coherence or Gigaspaces. I like the elegance of LINQ inspired frameworks, but am still skeptical about their usage in a typical enterprise application which needs high scalability.

Christian said... I view Squeryl as more of a small scale option although I'd still write a domain model that is separate from the persistent model with that tool. The .Net world offers the best of both worlds with the NHibernate guys supplying a LINQ provider. The LINQ provider generates criteria API calls rather than SQL.
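As a concrete point of comparison for the "generic base dao" alternative quoted at the top of the post, here is roughly what such a thing looks like. This is a hypothetical sketch only (Python over SQLite to keep it self-contained, not Spring JDBC and not code from any project mentioned above), and the table and column names are invented:

```python
import sqlite3

class BaseDao:
    """A minimal generic CRUD DAO, the 'less boilerplate without an ORM'
    approach discussed above. Note the compromise: table and column names
    are interpolated into the SQL, so they must be trusted input."""

    def __init__(self, conn, table, columns):
        self.conn, self.table, self.columns = conn, table, columns

    def create(self, row):
        # INSERT one row given as a dict keyed by column name.
        cols = ", ".join(self.columns)
        marks = ", ".join("?" for _ in self.columns)
        cur = self.conn.execute(
            f"INSERT INTO {self.table} ({cols}) VALUES ({marks})",
            [row[c] for c in self.columns])
        return cur.lastrowid

    def read(self, row_id):
        # SELECT one row by primary key; returns a dict or None.
        cur = self.conn.execute(
            f"SELECT id, {', '.join(self.columns)} FROM {self.table} WHERE id = ?",
            (row_id,))
        r = cur.fetchone()
        return None if r is None else dict(zip(["id"] + self.columns, r))

    def update(self, row_id, changes):
        # UPDATE only the columns present in `changes`.
        sets = ", ".join(f"{c} = ?" for c in changes)
        self.conn.execute(
            f"UPDATE {self.table} SET {sets} WHERE id = ?",
            list(changes.values()) + [row_id])

    def delete(self, row_id):
        self.conn.execute(f"DELETE FROM {self.table} WHERE id = ?", (row_id,))
```

The pain Debasish describes shows up as soon as relationships, lazy loading or caching enter the picture: everything beyond flat single-table CRUD has to be written and tuned by hand.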
13.03.2019 | Issue 1/2020 | Open Access

# Event Detection and Multi-source Propagation for Online Social Network Management

Journal: Journal of Network and Systems Management, Issue 1/2020
Authors: Lei-lei Shi, Lu Liu, Yan Wu, Liang Jiang, Ayodeji Ayorinde

## 1 Introduction

In recent years, online social network management has become an important part of our daily lives [15]. As a form of online social network management, microblogging network management platforms are also developing and attracting people at a rapid pace [6–10]. Microblogging network management platforms are known as the best tools for people to share and exchange opinions [11–14]. For example, many companies can promote their goods and services via microblogging network management platforms. People who are interested in football can get information about their favorite football players immediately via relevant posts on microblogging network management platforms, which serve as tools for sending posts and also allow users to discover trending events [15–19]. On microblogging network management platforms [20, 21], the spread of information is likened to fission. After the release of a user's post, the microblogging network management platform will automatically push the post to neighbours. These neighbours may forward the post, which will then be pushed to the neighbours' neighbours. And the user is not only a consumer of information, but also a producer of information. Users forward other people's posts, but also release new information. That information can also be forwarded by neighbours, thus spreading to more users.
Therefore, in a microblogging environment, information spreads faster and local discussion is more likely to cause a group effect. At the same time, users of microblogging network management platforms obviously have different participation behaviours, different interests in topics and different levels of activity, and the content of posts affects their behaviour, which results in heterogeneity of topics. In addition, most topics will quickly disappear from the list of discussed topics, while some topics will stand out amongst competing topics to become hot topics, attracting a lot of attention. At present, the dynamics of information dissemination is modelled on infectious disease dynamics, such as the SIR model [22]. These models assume that there is only one communicator in the system at the initial time, and the communicator will pass the information to the neighbours through interaction with them. At the same time, interest in the communicated information may slow down, leading to a loss of enthusiasm from users, who exit the topic discussion pool and enter a stable state. However, users in a microblogging network may spontaneously publish new posts and become communicators. At the same time, users may not be interested in the event information, and will not get involved in the communication. The existing information propagation models [22–25] do not consider multi-source event detection and propagation, competing hot events and user interaction; hence it is necessary to establish a multi-source event detection and propagation model which is suitable for describing the process of hot-event information dissemination, and for describing the key role of users and event characteristics in the process of communication. To this end, a multi-source event detection and propagation model, named the event detection and multi-source propagation (EDMP) model, is proposed.
And we study the propagation process of a single hot event, modelling individual spontaneous communication behaviours. Then, interaction between the communication sources is analysed in the model. Finally, we study the process of simultaneous communication and establish a multi-source propagation model for competing events based on user interest, which describes the relationship between the hot events. The main contributions of this paper are listed as follows:

1. We propose an intelligent event propagation model with knowledge sets [8]. Specifically, this event propagation model does not require any knowledge from the microblogging network. The microblogging network management platform only assigns a set of key users as initial users representing hot events to the event propagation model. Then, the model will use the first initial user set for learning and generating experience sets. And the event propagation model will compare the keywords of users' interest with the content of discovered hot events, and it should have proper keywords describing the topic of users' interest for comparison to form the topic keywords experience set. Therein, the model will compute a prediction score according to the information of users and events obtained from previous event propagation. What's more, the prediction score will be used in the user ordering process to generate the target users learning experience set when the next event propagation starts. Finally, for the next hot event propagation, it will use these experience sets to achieve more effective event propagation.

2. We establish a multi-source event detection and propagation model based on individual interest [7] to describe the process of multi-source event information dissemination, and to describe the key role of users and hot event characteristics in the process of microblogging network management and communication.
Specifically, we first study the propagation process of a single event, modelling individual spontaneous communication behaviours. Each time, an individual chooses a message to participate in from the disseminated information collection, while at the same time considering their own topics of interest. Then, since there is a cooperative relationship among the communication sources in the model, we finally study the process of simultaneous communication and establish a multi-source event propagation model, which describes the relationship between the hot events based on cooperation and competition.

3. We apply our model to the real Twitter dataset to demonstrate the effectiveness of our proposed multi-source event detection and propagation model compared with some existing event detection and propagation models [7, 14, 17, 26].

The remainder of this paper is structured as follows: we discuss related work on event detection and propagation in Twitter in Sect. 2. In Sect. 3, we introduce our intelligent event propagation model. In Sect. 4, we design our multi-source event propagation model. We present our experiments in Sect. 5. The last section concludes our study and outlines future work.

## 2 Related Work

Recently, event detection and propagation has drawn more and more attention from various fields of research, especially concerning influence maximization based on users' opinions, and all kinds of methods have been proposed to capture event propagation in social networks [13, 14, 22–25]. Besides, event detection and propagation have extensive applications such as viral marketing [1], product promotion [2], friend recommendation [11] and rumor control [12]. And some researchers pay attention to creating effective models that explain the general process of event information dissemination. These models are useful for the dissemination of event information in social network simulation [24, 25, 27].
However, these models cannot be directly applied to the propagation of hot events because of the complex processes and uncertainty involved. As we all know, the influence maximization problem was first introduced into the social network setting by Richardson; it has been proved to be NP-hard, and a greedy algorithm can produce an approximately optimal solution with an approximation guarantee of (1 − 1/e). Developments from this initial work have generated excellent algorithms [28–30] and effectively improved the time efficiency of mining influential nodes. However, the influence of a given node on other nodes is the same in those studies; that is to say, the chance of a node activating other nodes is a constant. Similarly, information content and node preference are not taken into consideration; i.e. the influence exerted by a node is also fixed, even for totally different event topics. Obviously, this is not accurate in real life, where, for example, an individual may have high influence among peers when it comes to discussing the subject "economy" but be completely unknown by peers in the area of "law". In simple terms, it is rare for any individual to be considered an expert in multiple fields. The influence of a person in a social network is likewise related to both the node and the topic, and the influence of a given node is different for different topics [31–33]. However, a limitation of these works [31–33] is that they solely considered the topic influence on user activation probability and did not take into account the popularity degree of the users' interest, the links between posts and the diffusion power of users. Overall, this results in a low-efficiency algorithm and an improper number of finally mined core users. Minimal research combines topic popularity degree scoring, topic community detection and event propagation together, which could improve the efficiency and enlarge the influence scope of key users of hot events.
Therefore, previous research has focused on studying event propagation in various ways [34–39]. Richardson and Domingos [32] studied the information propagation problem and proposed a probabilistic method. And Kempe et al. [27] formulated the problem of event propagation as an optimization problem and developed an algorithm for an event diffusion model. Meanwhile, some other researchers have also put forward a lot of excellent algorithms on the basis of this work [22, 24, 25] and effectively improved the time efficiency of event propagation. However, the event propagation model is the same for all users in those studies; that is, the activation probability of a user to activate other users is a constant. None of these studies considers the importance of event content and user preferences. It has been shown that event propagation in the microblogging network is related to the relationship between the users and the topic of the event, and that the event propagation ability of the same user is different under different events [27, 40, 41]. To this end, Zhang et al. [42] proposed a two-stage algorithm to propagate hot events for a specific topic and improve the event propagation scope. Zhou et al. [43] calculated the user activation probability at the topic level from the user interest distribution and then proposed a new event propagation algorithm to quickly diffuse the events under a specific topic based on this probability, which also improves the event propagation scope. These studies focus on the effect of event influence on users' activation probability and do not consider the popularity of events, the links between posts and the diffusion power of users, which wastes users' influence and results in low efficiency of event propagation. Meanwhile, unlike our model, these works only studied single-event propagation.
In addition, few studies consider the spontaneous transmission of events in microblogging networks, or the interaction and competition among event sources. To the best of our knowledge, the intelligent multi-source event propagation problem has not yet been well discussed. Although previous research has proposed many methods for event propagation, our work is very different. First, we propose a new event detection and propagation model based on key users [19] and users' interests [7]. We express the problem of event propagation as a learning task and aim to identify accurate characteristics of such events. We then investigate the relation between the extracted features of event propagation and user interest. Finally, our dataset is extracted from Twitter, and we validate the effectiveness of our model against existing models [7, 14, 17, 26].

## 3 Intelligent Events Propagation Process

### 3.1 Preliminary

Given a microblogging network G = (V, E), V = {v_1, v_2, …, v_n} is a set of users and E = {e_1, e_2, …, e_m} is a set of edges. The adjacency matrix A encodes the connections among users: A_ij = 1 if there is an edge between v_i and v_j, and A_ij = 0 otherwise. The adjacency matrix A can be used as the similarity matrix of the microblogging network to describe the similarity between users. However, besides the similarity between users that are directly connected, there are different degrees of similarity between users that are not directly connected; for example, two users that can reach one another after a finite number of steps still share a certain similarity.
Using the adjacency matrix as the similarity matrix therefore captures only the similarity between directly connected users; it loses the similarity relations between many user pairs and cannot reflect the complete local information of each user, which limits the accuracy of community discovery. To describe the local information of each user more adequately, this section proposes a method based on step number: from the adjacency matrix A of the network, a similarity score between users is calculated, yielding a new similarity matrix. The definitions of s-steps and the similarity matrix are given as follows.

Definition 1 (s-steps). Given a social network G = (V, E), for any pair of users, if user u can reach user v in at least s steps, that is, the length of the shortest path from u to v is s, we say that user u reaches user v through s steps.

The step number and an attenuation factor are used to calculate the similarity between two users that are not directly connected, which better reflects the community topology and improves the accuracy of community detection [15]. However, when the number of steps exceeds a certain threshold, two users that are not in the same community will also receive some similarity, which blurs the community boundaries. Therefore a step threshold S is set, and similarity is only computed between users that can reach each other within S steps, so that the topological information of the microblogging social network is enhanced without affecting the division of community boundaries.
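The step-based similarity just described, shortest step count s within a threshold S, attenuated by a factor σ, can be sketched as follows. This is our own minimal illustration, not the authors' code: the decay form σ^(s−1) and the name `step_similarity` are assumptions, since the text only states that similarity decays with the step number.

```python
from collections import deque

def step_similarity(adj, S=3, sigma=0.5):
    """Build a similarity matrix from adjacency matrix `adj`.

    Users reachable within S steps get similarity sigma**(s-1),
    where s is the shortest-path step count (one plausible decay;
    directly connected users keep similarity 1). Pairs farther
    apart than S steps keep similarity 0.
    """
    n = len(adj)
    sim = [[0.0] * n for _ in range(n)]
    for u in range(n):
        # BFS from u to find the step count to every reachable user.
        dist = {u: 0}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if dist[x] == S:          # do not look beyond the threshold
                continue
            for y in range(n):
                if adj[x][y] and y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        for v, s in dist.items():
            if v != u:
                sim[u][v] = sigma ** (s - 1)
    return sim

# A path graph 0-1-2-3: users 0 and 2 are two steps apart.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
sim = step_similarity(A, S=3, sigma=0.5)
```

Note how the threshold keeps distant pairs at similarity 0: with S = 2, users 0 and 3 (three steps apart) receive no similarity at all, which is exactly the boundary-preserving effect argued for above.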
In the experimental section, the step threshold S and attenuation factor σ are analyzed, and the influence of different values of S and σ on the result is studied.

### 3.2 The Improved HITS Method

In the original HITS method, a link represents a hyperlink between web pages. In our improved HITS method, a link represents an operational relationship between a user and a post, such as publishing or commenting. We extend the HITS algorithm to exploit the inseparable connection between users and their posts for the purpose of distilling influential users [7, 17, 19]. As a result, the improved method can effectively filter out random ordinary users, which improves the efficiency and accuracy of the intelligent event propagation model.

### 3.3 Intelligent Event Propagation Process

Figure 1 depicts the process of intelligent event propagation, which consists of three steps: first propagation, the learning process and consecutive propagation. The learning process is the key step, because the model's experience sets are gained there. First propagation is a step of propagating events without any prior information about how to choose the initial users; during this step, the model only has some keywords extracted from key posts describing an interesting topic from an event, and the key users are chosen as the candidate set of initial influential users for propagating the hot events. The learning process is the step in which the model learns how to better find relevant influential users. First, the initial influential user set is obtained by computing a hub score for each user and selecting the high-hub ones based on the HITS algorithm.
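The hub-score step can be illustrated with a generic HITS iteration on a bipartite user-post graph (the "operational relationship" links of Sect. 3.2). This is a sketch with made-up data; the function name and the normalization choice are ours, not the paper's exact scoring.

```python
def hits_user_post(links, iters=50):
    """HITS-style iteration on a bipartite user-to-post graph.

    `links` maps each user to the posts they published or commented on.
    Users receive hub scores, posts receive authority scores; both are
    normalized to sum to 1 each round.
    """
    users = list(links)
    posts = sorted({p for ps in links.values() for p in ps})
    hub = {u: 1.0 for u in users}
    auth = {p: 1.0 for p in posts}
    for _ in range(iters):
        # Authority of a post = sum of hub scores of users linked to it.
        auth = {p: sum(hub[u] for u in users if p in links[u]) for p in posts}
        norm = sum(auth.values()) or 1.0
        auth = {p: a / norm for p, a in auth.items()}
        # Hub of a user = sum of authority scores of their posts.
        hub = {u: sum(auth[p] for p in links[u]) for u in users}
        norm = sum(hub.values()) or 1.0
        hub = {u: h / norm for u, h in hub.items()}
    return hub, auth

# Toy data: u1 touches two posts, one of which u2 also touches.
links = {"u1": {"p1", "p2"}, "u2": {"p2"}, "u3": {"p3"}}
hub, auth = hits_user_post(links)
```

A user connected to many high-authority posts ends up with a high hub score, which is the property used above to pick the initial influential users.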
Second, the topic keyword set is created by extracting keywords from users' interests [7] and from the key posts of users that point to hot events [17]. Finally, the target user prediction set is obtained by calculating the topic similarity between the content of all detected hot events and the content of all detected users' interests [7, 9, 10] and employing those scores in the user prediction process. Together these sets compose the intelligent event propagation model's experience sets. Appropriate initial influential users allow the model to reach as many influential users of hot events as possible at the beginning of propagation; proper topic keywords help the model recognize, among the propagated users, the keywords related to a topic of users' interest; and suitable target user prediction assists the model in predicting the relevancy of the content of users extracted from hot events. Consecutive propagation is the step during which the model detects high-influence users based on these experience sets, having already learned suitable initial influential users and high-quality topic keywords.

### 3.4 Topic Popularity Based Event Propagation

In the IC model, the activation probability is generated randomly. In reality, however, the activation probability of a node depends on the social relationships among nodes and on topics, and nodes have different activation probabilities for different topics. We therefore propose a Topic Popularity-based Event Propagation (TPEP) model, which calculates the node activation probability $$P_{u,v}^{t}$$ for a specific topic so as to simulate event propagation in social networks more realistically. The activation probability $$P_{u,v}^{t}$$ is influenced by the following factors.
First, it is closely related to the social connections between nodes: more frequent connections imply a more intimate relationship between nodes and hence a higher activation probability, so user intimacy is used to represent the degree of intimacy between nodes.

Definition 2 (User intimacy). C_{u,v} denotes the frequency of connection between nodes u and v, obtained as the ratio of the connection count of u and v to the connection counts of u and v with all other nodes. The calculation is shown in formula (1):

$$C_{u,v} = \frac{R_{u,v}}{\sum\nolimits_{i = 1}^{n} R_{u,V_{i}} + \sum\nolimits_{i = 1}^{n} R_{v,V_{i}}}, \quad (u, v, V_{i} \in V)$$ (1)

where $$R_{u,V_{i}}$$ denotes the number of connections between nodes u and V_i, and R_{u,v} the number of connections between nodes u and v.

In addition, $$P_{u,v}^{t}$$ is influenced by the users' topic popularity: the more popular the two users' topic is, the more easily and quickly information propagates, so topic popularity also affects the activation probability $$P_{u,v}^{t}$$.

Definition 3 (Topic popularity). $$TP_{u,v}^{T}$$ denotes the popularity of the two users' topic, calculated as formula (2):

$$TP_{u,v}^{T} = \frac{Authority_{u,v}^{T}}{Authority_{\text{max}}^{T_{i}} + Authority_{\text{min}}^{T_{i}}}, \quad (u, v \in V,\; T_{i} \in T)$$ (2)

where $$Authority_{u,v}^{T}$$ denotes the authority of the key post in topic T, and $$Authority_{\text{max}}^{T_{i}}$$ and $$Authority_{\text{min}}^{T_{i}}$$ denote the largest and smallest authorities of key posts across topics. These authority values are computed with the improved HITS algorithm described above; the larger the share of authoritative key posts in a topic, the more popular the topic.
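Definitions 2 and 3 are direct ratios and can be computed as below. A toy sketch with our own function names and made-up connection counts, not the authors' code:

```python
def intimacy(R, u, v):
    """User intimacy C_{u,v} (formula 1): connections between u and v,
    divided by the total connection counts of u and of v with all nodes."""
    total = sum(R[u].values()) + sum(R[v].values())
    return R[u].get(v, 0) / total if total else 0.0

def topic_popularity(auth_uv, auth_max, auth_min):
    """Topic popularity TP^T_{u,v} (formula 2): authority of the key post
    in topic T relative to the largest and smallest key-post authorities."""
    return auth_uv / (auth_max + auth_min)

# Toy connection counts: R[u][v] = number of interactions between u and v.
R = {"u": {"v": 4, "w": 1}, "v": {"u": 4}, "w": {"u": 1}}
c = intimacy(R, "u", "v")             # 4 / (5 + 4)
tp = topic_popularity(0.4, 0.5, 0.1)  # 0.4 / (0.5 + 0.1)
p = c * tp  # the product gives the activation probability of formula (3)
```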
In summary, the activation probability $$P_{u,v}^{t}$$ is determined by the user intimacy C_{u,v} and the topic popularity $$TP_{u,v}^{T}$$, so the activation probability of user u on v for a specific topic t is calculated with formula (3):

$$P_{u,v}^{t} = C_{u,v} \times TP_{u,v}^{T} \quad (P_{u,v}^{t} \in [0,1])$$ (3)

The propagation process of the TPEP model is the same as in the IC model: each user has only one chance to activate its neighbouring users, and users' activation processes are independent of each other. The difference is that in TPEP the activation probability differs across topics, which is more in line with information propagation in microblogging networks. In the first stage we only choose the initial influential spreaders, without considering the propagation characteristics of the microblogging network. The second stage therefore uses the spreaders from the first stage to spread information with the proposed TPEP model and then iteratively mines the top-k spreaders with the largest topic influence increment as the remaining influential nodes. The topic influence increment of a spreader u is the influence scope of the spreader set after adding u minus the scope before adding u, and the spreader maximizing it is chosen, as in formula (4):

$$\delta(u|t) = \text{max} \{\delta(S \cup \{u\}|t) - \delta(S|t)\}$$ (4)

## 4 Experiments

In this section we detail the experiments that show the effectiveness of the proposed EDMP model. We take typical event detection and propagation models as baselines, namely IC (Independent Cascade) [14], BEE (Bursty Event dEtection) [26], EVE (Efficient eVent dEtection) [17] and HEE (Hot Event Evolution) [7].

### 4.1 Dataset

Our dataset was collected from Twitter (http://twitter.com/) via the Twitter API [20] and is composed of 1,500,000 posts and 36,845 users.
### 4.2 Baseline Approaches

The efficiency and effectiveness of the proposed EDMP model are validated by comparing it against the IC model, BEE, EVE and HEE, which are classic event detection and propagation algorithms.

### 4.3 Parameter Experiment

This section studies the effect of the step threshold S and attenuation factor σ on the experimental results. Data of 1000 users are randomly selected from the database, and the F-measure introduced above is used as the evaluation index. In each experiment the value of one parameter is fixed and the influence of the other on the F-measure is analyzed, so as to determine the final parameter values.

(1) Step threshold S. With the attenuation factor fixed at σ = 0.5, the effect of S on the F-measure is analyzed. As shown in Fig. 2, the F-measure first increases and then decreases as S grows. The results show that considering the similarity of user pairs that are not directly connected but reachable within a certain number of steps effectively captures the local structure of each user. However, if the threshold is too large, users in different communities also receive some similarity, which hinders the identification of community boundaries and reduces accuracy. For small datasets a small threshold of 3 is selected, and for big datasets a slightly larger threshold of 8 achieves the optimal result; the threshold used in this paper is 3.

(2) Attenuation factor σ. With the step threshold fixed at S = 3, the effect of σ on the F-measure is analyzed. As shown in Fig. 3, the F-measure overall first increases and then decreases as σ grows.
This is because the attenuation factor controls how quickly similarity decays as the hop count increases. For small datasets a mild attenuation factor σ = 0.5 is selected to avoid the vague community boundaries that arise when the attenuation factor is too large; for large datasets a small attenuation factor σ = 0.1 is selected to enhance users' local features and achieve the optimal result.

### 4.4 Evaluation

Precision is an important metric for measuring the efficiency of the proposed model, defined as follows:

$${\text{Precision}}\_{\text{p}} = \frac{k}{K}$$ (5)

where k is the number of posts related to a real-life event among the top K posts under a topic. As mentioned above, the HITS-based scoring method is used to select high-quality posts, high-influence users and high-popularity topics from the social media data streams. A threshold A is then defined, and posts whose authority score is greater than A are considered high-quality. Three experiments with different values of A were conducted to find a suitable threshold. Table 1 shows the number of detected hot events under different topic counts m; Tables 2 and 3 show time efficiency and precision. The experiments show that the EDMP model detects hot events most accurately and efficiently when A = 0.0001, so the subsequent contrast experiments are all conducted with A = 0.0001.
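Formula (5) is a simple ratio; as a sanity check, the precision values reported for A = 0.0001 (P@10 through P@100) come out as:

```python
def precision_p(relevant_in_top_k, K):
    """Precision_p (formula 5): fraction of the top-K posts under a topic
    that relate to a real-life event."""
    return relevant_in_top_k / K

# Reported counts for A = 0.0001: (k, K) pairs for P@10, P@20, P@50, P@100.
scores = [precision_p(k, K) for k, K in [(10, 10), (19, 20), (45, 50), (56, 100)]]
```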
Table 1 Number of detected hot events under different values of A and m

| Value | m = 10 | m = 15 | m = 20 |
|---|---|---|---|
| A = 0.0001 | 7 | 11 | 16 |
| A = 0.001 | 6 | 8 | 11 |
| A = 0.01 | 5 | 7 | 9 |

Table 2 Time under different values of A

| Value | Event detection (min) | Event propagation (min) | Total (min) |
|---|---|---|---|
| A = 0.0001 | 20.6 | 17.8 | 38.4 |
| A = 0.001 | 20.6 | 22.5 | 43.1 |
| A = 0.01 | 20.6 | 17 | 37.6 |

Table 3 Precision under different values of A

| Value | P@10 | P@20 | P@50 | P@100 |
|---|---|---|---|---|
| A = 0.0001 | 10/10 | 19/20 | 45/50 | 56/100 |
| A = 0.001 | 10/10 | 17/20 | 41/50 | 53/100 |
| A = 0.01 | 10/10 | 17/20 | 43/50 | 46/100 |

We present the propagation result on a two-dimensional graph in Fig. 7, where the x-axis is the number of propagated users and the y-axis is the precision obtained as follows:

$${\text{Precision}}\_{\text{u}} = \frac{\text{number of relevant users at that time}}{\text{number of total relevant users}}$$ (6)

In our experiments we take the top 10 popular events as the multi-source event set to show the performance of the proposed EDMP model. We focus on the top 10 users in each propagation process and calculate their influence scope.

1. Filtering hot events with the topic decision model: Fig. 4 shows that the proper number of hot events can be detected according to the number of key posts, which also plays a key role in the spread of influence within a specific user-interest community. As Tables 4 and 5 show, the proposed EDMP model can detect the top k (k = 10 in Table 4) high-quality posts according to their authority value efficiently and effectively; when posts have equal authority, they are ranked by the minimum distance of the key posts.
Table 4 Minimum distance and authority of posts

| Post ID | Authority value | Minimum distance |
|---|---|---|
| 681693469564383232 | 0.001792382 | 29.12043956 |
| 681697568456192001 | 0.001792382 | 29.12043956 |
| 681699684168015873 | 0.001588142 | 28.7923601 |
| 681697799730249728 | 0.001588142 | 28.7923601 |
| 681697033523077122 | 0.001045556 | 26.73948391 |
| 681695337304702976 | 0.000896191 | 25.29822128 |
| 681697928268910593 | 0.000545355 | 24.0208243 |
| 681696803629219840 | 0.000545355 | 24.0208243 |
| 684205783525888002 | 0.000545355 | 23.53720459 |
| 681695402928648193 | 0.000454463 | 23.53720459 |

Table 5 Key posts under popular interests

| Post ID | Popular interest |
|---|---|
| 681693469564383232 | Sport |
| 681697568456192001 | Sport |
| 681699684168015873 | Sport |
| 681697799730249728 | Sport |
| 681697033523077122 | Sport |
| 681695337304702976 | Music |
| 681697928268910593 | Music |
| 681696803629219840 | Music |
| 684205783525888002 | Economy |
| 681695402928648193 | Emotion |

2. The initial starting users for the first propagation: Table 6 shows the degree and hub value of users under each popular topic, which distinguishes the importance of users under each topic. By setting different numbers of initial influential users, the number of influential users for each popular topic can also be read from Table 6. As the number of initial influential users increases, the influence scope reaches 82 when that number is 10 and remains the same afterwards (Fig. 5). The top 10 initial influential spreaders and the popular topics they belong to are shown in Table 7; they play a key role in the spread of influence for specific users' interests.
Table 6 Degree and hub value of top 10 influential users under topics

| User ID | Hub value | Degree | Interest |
|---|---|---|---|
| 339283603 | 0.003429355 | 24535 | Sport |
| 1679619506 | 0.003233392 | 2869 | Sport |
| 3693887599 | 0.003135411 | 334 | Music |
| 933364430 | 0.002253576 | 1157 | Sport |
| 4068440360 | 0.00186165 | 377 | Emotion |
| 1000421510 | 0.001665687 | 1458 | Music |
| 2168821905 | 0.001567705 | 21973 | Emotion |
| 3254047099 | 0.001567705 | 489 | Emotion |
| 2310175028 | 0.001273761 | 1778 | Music |
| 863205451 | 0.000979816 | 44 | Conflict |

Table 7 Top 10 initial influential spreaders and the popular topics they belong to

| User ID | IF | Popular interest |
|---|---|---|
| 339283603 | 0.051440325 | Sport |
| 1679619506 | 0.03233392 | Sport |
| 3693887599 | 0.03135411 | Music |
| 933364430 | 0.020282184 | Sport |
| 1000421510 | 0.01303155 | Music |
| 4068440360 | 0.011757792 | Emotion |
| 1367531 | 0.011757792 | Economy |
| 3254047099 | 0.011659809 | Emotion |
| 2310175028 | 0.010973935 | Music |
| 2168821905 | 0.010973935 | Emotion |

3. Contrast of the final influence scope of the initial users: To verify the influence scope of the proposed EDMP model, all four algorithms are run on the same PC configuration. Each experiment is repeated 5 times and the average is computed; the influence scopes of the users discovered by the four models are then compared, with results shown in Table 8 and Fig. 6. The proposed EDMP model outperforms the other three IC-based models because it considers the impact of topic popularity and selects a sufficient number of users with high topic diffusion power as influential users, whose influence scope covers most of the topic areas. Besides, the proposed EDMP model builds three kinds of knowledge sets, i.e. starting users, topic keywords and target user prediction, which are outputs of the intelligent event propagation model's learning process. Proper initial users support the model's ability to select as many influential users as possible at the beginning of the event propagation process.
Suitable topic keywords help the model recognize, among the gathered users, the keywords related to a topic of considerable user interest, and good target user prediction assists the model in predicting the relevancy of users' key posts extracted from the hot events. By contrast, the BEE + IC and EVE + IC models do not consider the topic diffusion power of users or the popularity of topics, so the number of users they select is inadequate for a specific event, while the HEE + IC model does not account for the learning ability of consecutive propagation. Thus the influential spreaders discovered in this paper form the most adequate set compared with the BEE + IC, EVE + IC and HEE + IC models: the activation probability of the IC model is not stable and it propagates only a single event, whereas our EDMP model can improve its initial users through the three knowledge sets.

Table 8 The final influence scope of event detection and propagation

| Event propagation | BEE + IC | EVE + IC | HEE + IC | EDMP |
|---|---|---|---|---|
| The first propagation | 78 | 78 | 82 | 82 |
| The second propagation | 92 | 95 | 96 | 110 |
| The third propagation | 82 | 81 | 84 | 226 |
| Consecutive propagation | - | - | - | 230 |

4. Learnable ability and precision analysis of multi-source event propagation: We first set the initial topic set of events to 'Basketball', 'Music', 'Economy' and 'Emotion' to describe the multi-source events. We then start event propagation, selecting the top 10 users as the proper initial set of starting users. The first propagation process is used to build the three experience sets, and each consecutive propagation process uses the experience sets built and learned from the previous one. Finally, Fig. 7 shows the learnable capability of the EDMP model for the first, second and third propagation processes.
When we investigated the interests of users found in the topic-keyword experience set, we found that the EDMP model can incrementally learn new user interests from the previous propagation process: for instance, it could use 'Basketball', 'Music' and 'Economy' as the set of users' topics of interest in the second propagation process and 'Basketball' and 'Music' in the third. This is because the EDMP model builds three kinds of experience sets, i.e. starting users, topic keywords and target user prediction, which compose the model's learning experience. Proper starting users help the model identify as many relevant users as possible at the beginning of the propagation process; appropriate topic keywords help it recognize, among the gathered users, the keywords related to a topic of users' interest; and suitable target user prediction assists it in predicting the relevancy of the content of users extracted from a hot event.

## 5 Conclusion and Future Work

In this paper we present a novel approach to build an intelligent event propagation model that learns from its propagation experience and adapts itself to better propagation through relevant users and key posts during consecutive propagation processes, for microblogging network management. Specifically, to make the next event propagation efficient and accurate, we derive information from the previous propagation process to build three experience sets: starting users, topic keywords and target user prediction. These experience sets feed the intelligent event propagation model and produce better results in the next propagation.
We also study the propagation process of a single hot event, modelling individuals' spontaneous communication behaviours; we then analyse the interactive relationships among communication sources, study the process of simultaneous communication, and establish a multi-source event competition propagation model based on user interest that describes the relationships between hot events. Competition between events shortens their survival time, while cooperation broadens their influence scope. This helps explain the formation and dissemination of microblogging hot events and provides a theoretical basis for research on guidance strategies in online social network management. In future work we will study how to predict the links of target users during event propagation and how to predict users' behaviour evolution during hot-event propagation.

## Acknowledgements

This work was partially supported by the National Natural Science Foundation of China under Grants No. 61502209 and 61502207, the Natural Science Foundation of Jiangsu Province under Grant BK20170069, and the UK-Jiangsu 20-20 World Class University Initiative programme.
http://tex.stackexchange.com/questions/69096/inserting-statistics-of-a-resulting-pdf-file-back-into-the-document-on-the-next
# Inserting statistics of a resulting PDF file back into the document on the next run

This is an example of what I want to do:

1. After a run of pdflatex, calculate and store (into an auxiliary file?) the size of the resulting PDF file, as obtained from the Terminal command "du -h file.pdf", e.g. "50K". (This is just an example. It could be any other Terminal command not relating to the filesize.)
2. On the next run, typeset the stored text at a specific place on every page of the document, at given distances in inches from the bottom-left point of the page.

How can this be done?

- Duplicate of this question – Juri Robl Aug 28 '12 at 8:50
- @JuriRobl No, it isn't, because I'm not specifically asking for the size of a file (that was just an example that I, for better or worse, ended up choosing). It might as well be any other Terminal command, such as hash calculation, or a date, or whatever. – MayGodBlessKnuth Aug 28 '12 at 8:51
- Sorry then. Can't you just store it as a gdef in the aux file? Or do you want a complete LaTeX solution? – Juri Robl Aug 28 '12 at 9:04
- @MayGodBlessKnuth Perhaps consider editing the title and lead-in here to make it clear that this is not a duplicate of the linked question. – Joseph Wright Aug 29 '12 at 7:42

---

Since pdfTeX 1.30.0 the expandable command \pdffilesize is available. Because the output file of the previous run gets overwritten, the size should be asked for as early as possible:

    \edef\jobsize{\pdffilesize{\jobname.pdf}}
    \documentclass{article}
    \begin{document}
    The file size is \jobsize~(\the\numexpr(\jobsize+512)/1024\relax~KB).
    \end{document}

However, the printed file size becomes part of the page, so the new output file will probably have a different file size. The file size depends on the digits included to typeset \jobsize. If all digits are included anyway, this does not matter; but the page stream, which is usually compressed, still changes.
Therefore it is quite possible that the printed file size will never match the actual file size regardless of the number of reruns, so rounding is a good idea.

Further remarks:

- LuaTeX can also be supported:

        \RequirePackage{pdftexcmds}
        \makeatletter
        \edef\jobsize{\pdf@filesize{\jobname.pdf}}
        \makeatother

- If the file does not exist yet, then \pdffilesize or \pdf@filesize expands to the empty string. Example:

        \ifx\jobsize\empty
          \textbf{??}%
        \else
          \jobsize
        \fi

- The size can also be put in a reference to get warned by LaTeX because of changed references. But this might not be the best idea, because the size might never stabilize, see above.

**Update**

Some tricks allow the file size to stabilize:

- Include all digits (\pdfincludechars), even if some are not used. Then the font size remains the same.
- Use a "form xobject" (a PDF term for reused material, similar to save boxes in (La)TeX). Then the page streams remain constant; only the stream of the xobject varies. The randomized effect of compression can be eliminated by turning compression off for this object. The xobject stream still varies with the file size, but the file size is stabilized far enough that adding it to a reference in the .aux file can be tried to get rerun warnings.

The following example also uses siunitx for formatting the file size and puts the file size at a fixed location on the page, as requested in the question. Package atbegshi is used for that purpose.
    \RequirePackage{pdftexcmds}% support LuaTeX
    \makeatletter
    \edef\jobsize{\pdf@filesize{\jobname.pdf}}
    \makeatother
    \documentclass{article}
    \usepackage{siunitx}
    \DeclareBinaryPrefix{\kibi}{Ki}{10}
    \DeclareBinaryPrefix{\mebi}{Mi}{20}
    \DeclareBinaryPrefix{\gibi}{Gi}{30}
    \DeclareSIUnit\byte{B}
    \makeatletter
    \newcommand*{\printjobsize}{%
      \@ifundefined{xform@jobsize}{%
        \begingroup
          \sbox0{%
            \sisetup{detect-mode=false,mode=text}%
            \pdfincludechars\font{0123456789 ()}%
            \pdfincludechars\font{\si{\kibi\byte}\si{\mebi\byte}\si{\gibi\byte}}%
            \ifx\jobsize\@empty
              \textbf{??}%
            \else
              \expandafter\num\expandafter{\jobsize}~bytes
              \ifnum\numexpr(\jobsize+512)/1024\relax<10 %
              \else
                (%
                \ifnum\numexpr(\jobsize+524288)/1048576\relax<10 %
                  \expandafter\SI\expandafter{\the\numexpr(\jobsize+512)/1024\relax}{\kibi\byte}%
                \else
                  \ifnum\numexpr(\jobsize+536870912)/1073741824\relax<10 %
                    \expandafter\SI\expandafter{\the\numexpr(\jobsize+524288)/1048576\relax}{\mebi\byte}%
                  \else
                    \expandafter\SI\expandafter{\the\numexpr(\jobsize+536870912)/1073741824\relax}{\gibi\byte}%
                  \fi
                \fi
                )%
              \fi
            \fi
          }%
          \pdfcompresslevel=0\relax
          \immediate\pdfxform0\relax
          \xdef\xform@jobsize{\the\pdflastxform}%
        \endgroup
      }{}%
      \pdfrefxform\xform@jobsize\relax
    }
    % Adding the file size as reference of the new reference class "jobsize"
    % in the ".aux" file.
    \newcommand*{\newjobsize}{\@newl@bel{jobsize}{jobsize}}
    \AtBeginDocument{%
      \if@filesw
        \immediate\write\@mainaux{\string\providecommand\string\newjobsize[1]{}}%
        \immediate\write\@mainaux{\string\newjobsize{\jobsize}}%
      \fi
    }
    \makeatother
    % Put the file size 10mm from the left margin and 10mm from the bottom
    \usepackage{atbegshi}
    \usepackage{picture}
    \AtBeginShipout{%
      \AtBeginShipoutUpperLeft{%
        \put(10mm,\dimexpr-\paperheight+10mm\relax){%
          \makebox(0,0)[lb]{File size: \printjobsize}%
        }%
      }%
    }
    \usepackage{lipsum}
    \begin{document}
    \tableofcontents
    \section{Hello World}
    \lipsum[1-10]
    \end{document}

(The \SI lines inside \sbox0 were truncated in the source; the closing `}{\kibi\byte}`-style unit arguments and the constants 1048576 and 1073741824 are reconstructed from the surrounding KiB/MiB/GiB pattern.)

- Do you know how the text could be horizontally centered at the bottom of the page instead?
(With the vertical distance from the bottom remaining the same.) – MayGodBlessKnuth Aug 29 '12 at 12:21

- \put(.5\paperwidth,...){\makebox(0,0)[b]{...}} should do it. – Heiko Oberdiek Aug 29 '12 at 15:16

---

You can take the approach of the vc bundle to do this sort of thing. The basic idea is to use \write18 to call a shell script which writes the relevant macro definitions to a file which can then be used. Here's an example for getting the word count of your document. First, your TeX document should look like this:

    \documentclass{article}
    \immediate\write18{./wc foo.tex}
    \input{wc}
    \begin{document}
    Foo and things

    Words in text: \texcount
    \end{document}

And your wc file should look like this:

    #!/bin/sh
    # This is the 'wc' file inspired by 'vc' available on CTAN
    texcount $1 | awk '/Words in text/ {print "\\gdef\\texcount{" $4 "}"}' > wc.tex

For this to work you'll need to add ./wc to your shell_escape_commands list in your texmf.cnf and make the file executable. Now, every time you run latex on the file, it will call ./wc on foo.tex, which word-counts the file, extracts the relevant information, and makes it accessible through the \texcount macro defined in the inputted wc.tex. You can then use fancyhdr or some other such package to put the info where you like. I'm pretty sure this isn't the simplest or most robust way to get the right info out of texcount, but this is the method the original vc bundle uses for getting stuff out of git, and I was slavishly copying that…
2015-07-28 21:51:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7998238801956177, "perplexity": 1858.228169153715}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042982745.46/warc/CC-MAIN-20150728002302-00213-ip-10-236-191-2.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/14162/how-do-i-get-a-list-of-all-available-fonts-for-luaotfload
# How do I get a list of all available fonts for luaotfload?

luaotfload uses an internal database that gets updated with mkluatexfontdb. So it knows about a lot of fonts installed on my computer. How can I query this database? Something like luatexfontdb --list-fonts-on-my-computer-that-are-in-your-database ?

- No, there aren't, it is on the, not written, todo list, but don't hold your breath. – Khaled Hosny Mar 31 '11 at 16:25

You could open the database in your editor. It is called otfl-names.lua and should be in one of your texmf-trees in \luatex-cache\generic\names. It is also not very difficult to make lists based on otfl-names.lua. E.g.

## Old version (Texlive 2013?)

\documentclass{article}
\begin{document}
\begin{luacode}
myfonts=dofile(fonts.names.path.localdir..'/otfl-names.lua')
for i,v in ipairs(myfonts.mappings) do
  tex.print(-2, v.familyname)
  tex.print(', ')
  tex.print(-2, v.fontname)
  tex.print('\\par')
end
\end{luacode}
\end{document}

Edit in May 2013: With a newer luaotfload (as the one in TL2013 (pretest)) one should exchange the myfonts line by this one, as the name of the database has changed:

myfonts=dofile(fonts.names.path.path)

# Edit for Texlive 2014

I tried again in TL 2014 (June 2014). Now the names file is in a .luc and the access name has changed again. I also added some "if exists" code to avoid errors if a table entry doesn't exist for a font:

\documentclass{article}
\usepackage{luacode}
\begin{document}
\begin{luacode}
myfonts=dofile(fonts.names.path.index.luc)
tex.sprint(fonts.names.path.index.luc)
---[[
for i,v in ipairs(myfonts.mappings) do
  if v.familyname then
    tex.print('\\par')
    tex.print(-2, v.familyname)
  end
  if v.fontname then
    tex.print(', ')
    tex.print(-2, v.fontname)
  end
  tex.print('\\par')
end
--]]
\end{luacode}
\end{document}

# Edit for TeXlive 2015 / MiKTeX in July 2015

The code to get the names file has to be adapted again. Now this here seems to work.
\documentclass{article}
\usepackage{luacode}
\begin{document}
\begin{luacode}
---[[
for i,v in ipairs(myfonts.mappings) do
  if v.familyname then
    tex.print('\\par')
    tex.print(-2, v.familyname)
  end
  if v.fontname then
    tex.print(', ')
    tex.print(-2, v.fontname)
  end
  tex.print('\\par')
end
--]]
\end{luacode}
\end{document}

- I think this is the best way to generate a font list at the moment, as there seems to be no tool like fc-list doing this job yet - see this newsgroup entry. – diabonas Mar 24 '11 at 16:31

On my Windows box with MiKTeX 2.9, this code triggered warnings about missing math mode delimiters, causing lualatex to insert four $ characters. Wrapping the string printed by tex.print within a verbatim block silenced that warning, although I don't spot which fonts triggered it. – RBerteig Mar 24 '11 at 22:02

@RBerteig, @Ulrike: You can use something like tex.tprint({-2, v.familyname, ', ', v.fontname},{-1, '\\par'}) to get rid of the catcodes problem. – topskip Mar 30 '11 at 13:22

@LarsH The name has changed. It is now called luaotfload-names.lua. And it is in texmf-var. (I didn't test if the code above still works. It is quite possible that the structure of the tables has changed too.) – Ulrike Fischer Nov 4 '13 at 16:40

I'm afraid your update for TeXLive2014 bombs on a system running MacTeX2014. Error message: ! LuaTeX error [\directlua]:1: attempt to index field 'path' (a nil value) stack traceback: [\directlua]:1: in main chunk, \luacode@dbg@exec ...code@maybe@printdbg {#1} #1 }. – Mico Oct 5 '14 at 21:17

Based on Ulrike's answer: Because I don't want to create a TeX document every time I need the font list, here is a simple script for that:

#!/usr/bin/env texlua
kpse.set_program_name("listluatexfonts")
cachefile = kpse.expand_var("$TEXMFVAR") ..
  "/luatex-cache/generic/names/otfl-names.lua"
fontlist = dofile(cachefile)
assert(fontlist,"Could not load font name database")
local tmp = {}
for _,font in ipairs(fontlist.mappings) do
  tmp[#tmp + 1] = font.fontname
end
table.sort(tmp)
for _,fontname in ipairs(tmp) do
  print(fontname)
end

call it with ./listluatexfonts

## Update:

Replace the cachefile name for TeX Live 2014:

cachefile = kpse.expand_var("$TEXMFVAR") ..
  "/luatex-cache/generic/names/luaotfload-names.luc"

This one worked for me.

- Can anyone tell how to do this with TeXLive 2013? There doesn't appear to be a generic directory under luatex-cache anymore... at least not in my installation. – LarsH Sep 20 '13 at 18:58

OK, those folders may be specific to LuaLaTeX. See comment discussion with Ulrike. – LarsH Nov 4 '13 at 18:59
2016-07-29 12:15:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5920687317848206, "perplexity": 8349.84644623272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257830066.95/warc/CC-MAIN-20160723071030-00115-ip-10-185-27-174.ec2.internal.warc.gz"}
https://blog.hudongdong.com/cocos2d/374.html/comment-page-1
Apple explicitly requires that all new versions submitted by developers after June 1 support IPv6-only networks; if the IPv6 test fails, Apple rejects the submission outright. Because this project is very old, the upgrade-and-review cycle took two months. I probably won't run into this again, since this codebase would already have been retired if the company didn't require keeping it alive.

## 1. Changes on the app side

If you’re writing a client-side app using high-level networking APIs such as NSURLSession and the CFNetwork frameworks and you connect by name, you should not need to change anything for your app to work with IPv6 addresses. If you aren’t connecting by name, you probably should be. See Avoid Resolving DNS Names Before Connecting to a Host to learn how. For information on CFNetwork, see CFNetwork Framework Reference.

If you’re writing a server-side app or other low-level networking app, you need to make sure your socket code works correctly with both IPv4 and IPv6 addresses. Refer to RFC4038: Application Aspects of IPv6 Transition.

## 2. Changes for cocos2d-x 2.x

Cocos2d-x Developer

Hi all, The status of this task is:

• it is finished for v3
• v2 is fixed except for windows and wp8; it has compiling errors on these two platforms because of upgrading libwebsockets. Will support them ASAP.

How to do it

You can just update libwebsockets and CURL like this:

• modify Cocos2d-x root/external/config.json to update the dependency version. For v3.x the dependency version is v3-deps-94, and for v2.x it is v2-deps-6

Edit: i also modify Console and ScriptingCore to support IPv6-only network in this PR, but i think it is not needed for you to do like this because they are just for testing, not used in game logic. If you are using v2.x, you also need to apply this commit to fix the compiling error.

Edit: the windows compiling issue of v2 is fixed; you need to download v2-deps-v7 instead if you work on windows. And you may need to link ws2_32.lib in the libexternal project. Refer to this commit for detailed information.

## 3. Check whether your server supports IPv6

dig dnspod.cn aaaa

## 4. Set up a local IPv6 network to test the app

The IPv6 test network can be set up by following the article 《【指南】本地如何搭建IPv6环境测试你的APP》 (a guide to building a local IPv6 environment to test your app). Once it is configured, first open Baidu in Safari. If the page loads, the setup is correct and you can start testing the app; if the page does not load, the network was not created correctly, so recreate it or restart the machine and create it again.

## 5. What to do if the submission is rejected

1. Check your local network and device models, to see whether the problem is tied to a particular device model, and whether your game servers are reachable from abroad (the odds of this are low; in our case the app was still rejected even after completely replacing the network library).
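The `dig dnspod.cn aaaa` query above simply asks DNS for an AAAA (IPv6) record. For completeness, the same check can be scripted; the sketch below is my own illustration (not from the original post), using Python's `socket.getaddrinfo` with `AF_INET6`:

```python
import socket

def ipv6_addresses(host):
    """Return the sorted list of IPv6 addresses for host (empty if none)."""
    try:
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
    except socket.gaierror:
        return []
    # each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the address string
    return sorted({info[4][0] for info in infos})

# numeric literals are parsed without touching the network
print(ipv6_addresses("::1"))
```

On a machine with working DNS, `ipv6_addresses("dnspod.cn")` plays the same role as the `dig` query: a non-empty result means the server publishes AAAA records.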
2021-12-06 17:50:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1701832115650177, "perplexity": 8562.47282540097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363309.86/warc/CC-MAIN-20211206163944-20211206193944-00314.warc.gz"}
https://www.mail-archive.com/lyx-users@lists.lyx.org/msg80527.html
# Re: LaTeX Linebreaking Question

On 04/08/2010 09:48 AM, Jürgen Spitzmüller wrote:

rgheck wrote:

I need to refer to a book entitled "The Semantics/Pragmatics Distinction". I'd like to inform LaTeX that it is OK to break after the slash, but without a hyphen. How?

\slash, or in LyX: Insert > Special Character > Breakable Slash

I usually use the following redefinition of the slash macro in the preamble:

\def\slash{/\penalty\exhyphenpenalty\hskip\z@skip}

Contrary to the original (which is defined in the LaTeX kernel), this one allows also hyphenations after the slash, as in

Semantics/Pragma-tics

Thanks to you and to the others who replied. rh
2021-05-08 06:39:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823180437088013, "perplexity": 6748.650975971102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988850.21/warc/CC-MAIN-20210508061546-20210508091546-00430.warc.gz"}
http://www.solidot.org/translate/?nid=65531
Tensor powers of rank 1 Drinfeld modules and periods. (arXiv:1706.03854v1 [math.NT])

We study tensor powers of rank 1 sign-normalized Drinfeld A-modules, where A is the coordinate ring of an elliptic curve over a finite field. Using the theory of A-motives, we find explicit formulas for the A-action of these modules. Then, by developing the theory of vector-valued Anderson generating functions, we give formulas for the period lattice of the associated exponential function.
2017-08-17 03:55:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9544082880020142, "perplexity": 586.2492754656491}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102891.30/warc/CC-MAIN-20170817032523-20170817052523-00275.warc.gz"}
https://studyqas.com/there-s-a-moderate-positive-correlation-between-the-length/
# There’s a moderate positive correlation between the length of a child’s hair and the amount of time it takes for the child to run 100 meters. Which statement is true?

A. The correlation is most likely a causation
B. The correlation is most likely a coincidence
C. The correlation is most likely due to a lurking variable

## This Post Has 3 Comments

2. Expert says:

   I found two solutions: n = ±2√3 ≈ ±3.4641

   step-by-step explanation:

3. Expert says:

   16

   step-by-step explanation: 2 x 40 = 80, and 80/5 = 16
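For intuition on option C, here is a small simulation of my own (not part of the original page): a hidden variable z drives both x and y, so x and y end up correlated even though neither causes the other.

```python
import random

random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]   # lurking variable (e.g. age)
x = [zi + random.gauss(0, 1) for zi in z]    # driven by z, plus independent noise
y = [zi + random.gauss(0, 1) for zi in z]    # also driven by z, plus independent noise

def pearson(a, b):
    """Sample Pearson correlation coefficient, computed from scratch."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / m
    va = sum((ai - ma) ** 2 for ai in a) / m
    vb = sum((bi - mb) ** 2 for bi in b) / m
    return cov / (va * vb) ** 0.5

r = pearson(x, y)
print(0.4 < r < 0.6)   # True: for this construction the theoretical value is 0.5
```

Here cov(x, y) = var(z) = 1 while var(x) = var(y) = 2, so the population correlation is 0.5 even with zero causal link between x and y.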
2023-02-08 07:15:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35352978110313416, "perplexity": 1431.6631858823635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00450.warc.gz"}
http://leonardorocchi.info/topics-pages/level-set-method.html
## Level-set method – Math

9 February 2018

The Level-Set (LS) method is a very versatile and extensible method for tracking propagating interfaces in a wide variety of settings, such as burning flames, ocean waves, and material boundaries. The method was first introduced by Osher and Sethian in 1988 for tracking curvature-dependent evolving interfaces in the setting of Hamilton-Jacobi equations. It has become very popular and is currently used in many engineering applications, like image segmentation, fluid dynamics, and computational physics.

### The idea

The idea behind the method is quite simple but powerful at the same time. Suppose we are given an evolving $(d-1)$-dimensional front embedded in $\mathbb{R}^d$ transported by a known velocity vector field $\mathbf{v}\,\colon \mathbb{R}^d \to \mathbb{R}^d$. For example, just think of a fire-front propagating in a wildland: the fire-front is a 1-dimensional interface moving in a 2-dimensional wildland. In the LS method, the evolving front is described, tracked, or recovered (whatever you like) by the level-set $0$ of a scalar hypersurface $\varphi \, \colon \mathbb{R}^+ \times \mathbb{R}^d \to \mathbb{R}$ called the level-set function. I'm talking about something like this:

where the initial propagating front is the circle $\{(x_1,x_2)\in \mathbb{R}^2 \; : \; x_1^2 + x_2^2 = r^2,\; r > 0 \}$ and the corresponding level-set function is $\varphi(0,x_1,x_2) = \sqrt{x_1^2 + x_2^2} - r$ (on the left).

### More details

Let $\Sigma_0$ be a bounded closed front (or surface) at initial time $t = 0$ and let $\Omega_0$ be the domain strictly contained in $\Sigma_0$, that is, $\Sigma_0 = \partial \Omega_0$. Let $\mathbf{x} \in \mathbb{R}^d$.
Define the following level-set function $\varphi \, \colon \mathbb{R}^+ \times \mathbb{R}^d \to \mathbb{R}$ such that $\text{for all } t\geq 0, \quad \begin{cases} \varphi(t,\mathbf{x}) > 0, & \quad \mathbf{x} \notin \overline{\Omega_t}, \\ \varphi(t,\mathbf{x}) = 0, & \quad \mathbf{x} \in \Sigma_t, \\ \varphi(t,\mathbf{x}) < 0, & \quad \mathbf{x} \in \Omega_t. \end{cases}$ In this way, the front is recovered by looking at the level-set $0$ of $\varphi$, that is $\Sigma_t = \{ \mathbf{x} \in \mathbb{R}^d \,:\,\varphi(t,\mathbf{x}) = 0 \},\quad\text{for all } t\geq 0.$

### The level-set equation

Our task is to track the evolution of the front $\Sigma_t$, as soon as $t >0$, under the action of a velocity field $\mathbf{v}$, so that $\Sigma_t = \partial \Omega_t$. Let $\mathbf{x}(t)$ be the position at time $t>0$ of a point of the front $\mathbf{x} \in \Sigma_t$. We require that $\mathbf{x}(t)$ belongs to the level-set $0$ of $\varphi$, that is, $\varphi(t, \mathbf{x}(t)) = 0$ for all $t > 0$. By the chain rule we get $\frac{\partial}{\partial t} \varphi(t,\mathbf{x}(t)) + \mathbf{x}'(t) \cdot \nabla \varphi(t, \mathbf{x}(t)) = 0 \qquad \forall \; \mathbf{x} \in \mathbb{R}^d, \; t > 0,$ and since $\mathbf{v}$ supplies the speed of the motion, that is $\mathbf{v} = \mathbf{x}'(t)$, we obtain the level-set equation $\frac{\partial}{\partial t} \varphi(t,\mathbf{x}(t)) + \mathbf{v} \cdot \nabla \varphi(t, \mathbf{x}(t)) = 0 \qquad \forall \; \mathbf{x} \in \mathbb{R}^d, \; t > 0, \qquad (1)$ which is nothing but a transport equation for the level-set function $\varphi$. A suitable initial condition $\varphi_0(\mathbf{x}) = \varphi(0,\mathbf{x})$ at time $t=0$ has to be chosen. Notice that, of course, the propagation will move all the level-sets of $\varphi$: the level-set $0$ is just a comfortable choice.
The level-set equation $(1)$ makes it possible to describe particular movements of the front, for which the velocity field $\mathbf{v}$ is a function of the level-set function $\varphi$ itself, that is, $\mathbf{v} = \mathbf{v}[\varphi](t,\mathbf{x})$. The most important are motions along the normal direction as well as motions under the (mean) curvature.

### Motion along the normal direction

In this case, the velocity field is something like this: $\mathbf{v}[\varphi](t,\mathbf{x}) = w(\mathbf{x}) \, \mathbf{\hat{n}}(t,\mathbf{x}) \qquad \qquad (2)$ where $w(\mathbf{x})$ is a scalar function of the point $\mathbf{x}$, and $\mathbf{\hat{n}}(t,\mathbf{x})$ is the exterior unit normal to the level-sets, $\mathbf{\hat{n}} = \frac{\nabla \varphi}{|\nabla \varphi|}.$ If we substitute $(2)$ in $(1)$, the level-set equation becomes $\frac{\partial}{\partial t} \varphi(t,\mathbf{x}(t)) + w(\mathbf{x}) | \nabla \varphi(t, \mathbf{x}(t)) | = 0 \qquad \forall \; \mathbf{x} \in \mathbb{R}^d, \; t > 0, \qquad (3)$ and it describes the evolution of the front along its normal direction. It is worth considering the case of a monotone evolution:

• if $w(\mathbf{x}) > 0$ for every $\mathbf{x}$, we have an outward expansion of the front as shown in the figure below on the left ($\Omega_t \subset \Omega_{t+1}$);
• if $w(\mathbf{x}) < 0$ for every $\mathbf{x}$, we have a contraction of the front as shown in the figure below on the right ($\Omega_{t+1} \subset \Omega_{t}$).
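Equation (3) is straightforward to discretize. The sketch below is my own illustration (not from the original post): it evolves a circular front outward at constant speed $w = 1$ with a first-order upwind scheme on a uniform grid, so the zero level-set should end up near radius $0.5 + wt$.

```python
import numpy as np

def evolve_normal(phi, w, dx, dt, steps):
    """Advance phi_t + w*|grad(phi)| = 0 with a first-order upwind scheme (constant w)."""
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference in x
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference in x
        dym = (phi - np.roll(phi, 1, axis=1)) / dx    # backward difference in y
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward difference in y
        if w > 0:  # expansion: keep only the upwind one-sided differences
            grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2
                           + np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
        else:      # contraction
            grad = np.sqrt(np.minimum(dxm, 0.0)**2 + np.maximum(dxp, 0.0)**2
                           + np.minimum(dym, 0.0)**2 + np.maximum(dyp, 0.0)**2)
        phi = phi - dt * w * grad
    return phi

n = 101
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5          # signed distance to a circle of radius 0.5

dx = x[1] - x[0]
dt = 0.5 * dx                              # CFL-stable for |w| = 1
steps = 25
phi = evolve_normal(phi, w=1.0, dx=dx, dt=dt, steps=steps)

# locate the front along the positive x-axis and compare with 0.5 + w*t
line = phi[:, n // 2]                      # phi restricted to the x-axis (y = 0)
r_est = x[x > 0][np.argmin(np.abs(line[x > 0]))]
print(abs(r_est - (0.5 + steps * dt)) < 0.1)   # True
```

The upwind choice of one-sided differences (Godunov-style) is what keeps the scheme stable: information is taken from the side the front is coming from.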
### Motion under the mean curvature

In this case, the velocity field is something like this: $\mathbf{v}[\varphi](t,\mathbf{x}) = -k(\mathbf{\hat{n}}(t,\mathbf{x})) \, \mathbf{\hat{n}}(t,\mathbf{x}), \qquad \qquad (4)$ where $\mathbf{\hat{n}}(t,\mathbf{x})$ is the exterior unit normal defined above, and $k(\mathbf{\hat{n}}(t,\mathbf{x}))$ is the mean curvature of the front defined as $k(\mathbf{\hat{n}}(t,\mathbf{x})) = \text{div} (\mathbf{\hat{n}}(t,\mathbf{x})) = \text{div}\left(\frac{\nabla \varphi}{|\nabla \varphi|}\right).$ Here, $\text{div}$ denotes the divergence operator $\text{div}(\mathbf{a}) = \nabla \cdot \mathbf{a}$. Therefore, this is again a motion along the normal direction, but the "intensity" of the movement is given by the curvature of the front. As before, if we substitute $(4)$ in $(1)$, we obtain: $\frac{\partial}{\partial t} \varphi(t,\mathbf{x}(t)) - \text{div}\left(\frac{\nabla \varphi}{|\nabla \varphi|}\right)| \nabla \varphi(t, \mathbf{x}(t)) | = 0 \qquad \forall \; \mathbf{x} \in \mathbb{R}^d, \; t > 0. \qquad (5)$ A typical motion under mean curvature is shown in the picture below. In the regions of the front where the curvature $k(\mathbf{\hat{n}}(t,\mathbf{x})) > 0$, the front will move inward, while on the other hand, where the curvature $k(\mathbf{\hat{n}}(t,\mathbf{x})) < 0$, the front will move outward. The result is that, at some point, the front will turn into a circle-type shape for which the curvature is only positive, and therefore, it keeps moving inward until possibly vanishing.
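Equation (5) can also be tested numerically; the sketch below is again my own illustration rather than anything from the post. In 2D, central differences give $\kappa|\nabla\varphi| = (\varphi_{xx}\varphi_y^2 - 2\varphi_x\varphi_y\varphi_{xy} + \varphi_{yy}\varphi_x^2)/(\varphi_x^2+\varphi_y^2)$, and a circle of radius $r_0$ should shrink as $r(t) = \sqrt{r_0^2 - 2t}$.

```python
import numpy as np

def curvature_step(phi, dx, dt, eps=1e-12):
    """One explicit step of phi_t = div(grad(phi)/|grad(phi)|) * |grad(phi)|."""
    px = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
    py = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
    pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / dx**2
    pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / dx**2
    pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
           - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * dx**2)
    # kappa * |grad phi|, written so that |grad phi| never appears alone
    k_grad = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / (px**2 + py**2 + eps)
    return phi + dt * k_grad

n = 101
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 1.0          # circle of radius 1, negative inside

dx = x[1] - x[0]
dt = 0.2 * dx**2                           # explicit step for a parabolic-type equation
steps = 560                                # total time t = steps * dt = 0.1792
for _ in range(steps):
    phi = curvature_step(phi, dx, dt)

line = phi[:, n // 2]                      # phi restricted to the x-axis (y = 0)
r_est = x[x > 0][np.argmin(np.abs(line[x > 0]))]
r_exact = np.sqrt(1.0 - 2 * steps * dt)    # about 0.80
print(abs(r_est - r_exact) < 0.1)          # True
```

Note the much smaller time step than in the normal-motion sketch: curvature flow is parabolic, so an explicit scheme needs $\Delta t = O(\Delta x^2)$.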
2019-01-23 15:42:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7639179825782776, "perplexity": 254.9751016940845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584334618.80/warc/CC-MAIN-20190123151455-20190123173455-00038.warc.gz"}
http://semparis.lpthe.jussieu.fr/list?type=seminars&key=12603
Status: Confirmed
Series: IPHT-PHM
Domains: math-ph
Date: Monday, 12 November 2018
Time: 11:00
Institute: IPHT
Room: Salle Claude Itzykson, Bât. 774
Speaker: Paola Ruggiero

Title: Conformal field theory on top of a breathing Tonks-Girardeau gas

Abstract: CFT has been extremely successful in describing universal effects in critical one-dimensional (1D) systems, in situations in which the bulk is uniform. However, in many experimental contexts, such as quantum gases in trapping potentials and in several out-of-equilibrium situations, systems are strongly inhomogeneous.

Recently it was shown that the CFT methods can be extended to deal with such 1D situations: the system's inhomogeneity gets reabsorbed in the parameters of the theory, such as the metric, resulting in a CFT in curved space.

Here in particular we make use of CFT in curved spacetime to deal with the out-of-equilibrium situation generated by a frequency quench in a Tonks-Girardeau gas in a harmonic trap.

We show compatibility with known exact results and use this new method to compute new quantities, not explicitly known by means of other methods, such as the dynamical fermionic propagator and the one-particle density matrix at different times.

REFERENCES:
(1) J. Dubail, J.-M. Stéphan, J. Viti, P. Calabrese, SciPost Phys. 2, 002 (2017).
(2) J. Dubail, J.-M. Stéphan, P. Calabrese, SciPost Phys. 3, 019 (2017).
(3) P. Ruggiero, Y. Brun, J. Dubail, To appear.
(4) S. Murciano, P. Ruggiero, P. Calabrese, To appear.
2018-11-18 04:11:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9907552003860474, "perplexity": 6953.362285306016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743963.32/warc/CC-MAIN-20181118031826-20181118053826-00077.warc.gz"}
https://spectre-code.org/namespaceLinearSolver_1_1Richardson.html
LinearSolver::Richardson Namespace Reference

Items related to the Richardson linear solver. More...

## Classes

struct Richardson

A simple Richardson scheme for solving a system of linear equations $$Ax=b$$. More...

## Detailed Description

Items related to the Richardson linear solver.
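As a reminder of what the scheme does (an illustrative sketch in Python, not SpECTRE's C++ implementation): Richardson iteration repeatedly adds a scaled residual, $x_{k+1} = x_k + \omega(b - Ax_k)$, and converges when the spectral radius of $I - \omega A$ is below one. For a symmetric positive-definite matrix, the optimal relaxation parameter is $\omega = 2/(\lambda_{\min} + \lambda_{\max})$.

```python
import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Solve A x = b by Richardson iteration with relaxation parameter omega."""
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x            # residual
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r        # relaxation step
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD test matrix
b = np.array([1.0, 2.0])
lam = np.linalg.eigvalsh(A)              # ascending eigenvalues
x = richardson(A, b, omega=2.0 / (lam[0] + lam[-1]))
print(np.allclose(A @ x, b))             # True
```

With the optimal $\omega$, the error contracts by the factor $(\lambda_{\max}-\lambda_{\min})/(\lambda_{\max}+\lambda_{\min})$ per iteration, so this tiny system converges in a few dozen steps.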
2020-09-28 22:22:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8310271501541138, "perplexity": 14040.042780017453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401614309.85/warc/CC-MAIN-20200928202758-20200928232758-00162.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-7-section-7-5-multiplying-and-dividing-radical-expressions-7-5-exercises-page-475/53
# Chapter 7 - Section 7.5 - Multiplying and Dividing Radical Expressions - 7.5 Exercises - Page 475: 53

$\dfrac{\sqrt{14}}{2}$

#### Work Step by Step

$\bf{\text{Solution Outline:}}$ To rationalize the given radical expression, $\sqrt{\dfrac{7}{2}},$ multiply both the numerator and the denominator by an expression that will make the denominator a perfect power of the index. $\bf{\text{Solution Details:}}$ Multiplying the radicand by an expression equal to $1$ which will make the denominator a perfect power of the index results in \begin{array}{l}\require{cancel} \sqrt{\dfrac{7}{2}\cdot\dfrac{2}{2}} \\\\= \sqrt{\dfrac{14}{(2)^2}} .\end{array} Using the Quotient Rule of radicals, which is given by $\sqrt[n]{\dfrac{x}{y}}=\dfrac{\sqrt[n]{x}}{\sqrt[n]{y}},$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{\sqrt{14}}{\sqrt{(2)^2}} \\\\= \dfrac{\sqrt{14}}{2} .\end{array}
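A quick numerical sanity check of the result (mine, not part of the textbook solution): rationalizing must not change the value, so $\sqrt{7/2}$ and $\sqrt{14}/2$ should agree.

```python
import math

lhs = math.sqrt(7 / 2)          # the original radical
rhs = math.sqrt(14) / 2         # the rationalized form
print(abs(lhs - rhs) < 1e-12)   # True: both equal 1.8708...
```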
2018-12-16 06:26:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650288820266724, "perplexity": 702.9586649531559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00003.warc.gz"}
https://zbmath.org/?q=an:533.14011
# Real algebraic curves. (English) Zbl 0533.14011

This is an exhaustive study of the action of complex conjugation on complex algebraic curves that are defined by real polynomials. Complex conjugation defines an involution of these real curves. This involution induces an action on the symmetric powers of the curve and on the Picard scheme of the curve. The authors study that action and apply it to real theta-characteristics. They show as well how the topological invariants of a real curve $X$ are determined by the action of complex conjugation on the group $H_1(X(\mathbb{C}), \mathbb{Z}/2)$. This question was also considered by H. Jaffee [Topology 19, 81–87 (1980; Zbl 0426.14013)]. Real hyperelliptic curves, real plane curves and real trigonal curves are considered as examples of the general theory. A topological argument leads to an interesting observation: entire components of the real moduli contain no hyperelliptic curves once the genus is at least 4. The paper ends with remarks on real moduli and with a real form of the Torelli theorem, which was also proved independently by R. Silhol [see e.g. Math. Z. 181, 345–364 (1982; Zbl 0492.14015)].

Reviewer: M. Seppälä

##### MSC:

14H10 Families, moduli of curves (algebraic)
14H25 Arithmetic ground fields for curves
14Pxx Real algebraic and real-analytic geometry
14H40 Jacobians, Prym varieties
14K15 Arithmetic ground fields for abelian varieties
https://iccl.inf.tu-dresden.de/web/LATPub239/en
# Approximating ALCN-Concept Descriptions

##### S. Brandt, R. Küsters, Anni-Yasmin Turhan

Proceedings of the 2002 International Workshop on Description Logics, 2002

• Abstract: Approximating a concept, defined in one DL, means to translate this concept to another concept, defined in a second, typically less expressive DL, such that both concepts are as closely related as possible with respect to subsumption. In a previous work, we provided an algorithm for approximating ALC-concept descriptions by ALE-concept descriptions. In the present paper, motivated by an application in chemical process engineering, we extend this result by taking number restrictions into account.

• Research Group: Automata Theory

@inproceedings{ BrandtKuesters+DL02,
  author = {S. {Brandt} and R. {K\"usters} and A.-Y. {Turhan}},
  booktitle = {Proceedings of the 2002 International Workshop on Description Logics},
  title = {Approximating $\cal{ALCN}$-Concept Descriptions},
  year = {2002},
}
https://stats.stackexchange.com/questions/429605/in-chi-square-contingency-table-2x2-why-we-sum-up-all-four-cells-but-compare-w?noredirect=1
# In chi square contingency table 2x2: why we sum up all four cells, but compare with chi square distribution with 1 df (only one square)?

I have read this and this, and I understand where the squared standard normal distribution comes from. I also understand why df = (r-1)(c-1). But I don't understand why I sum all four cells (four squared standard normals) and compare this value with the distribution of only one squared standard normal.

• Intuitive view: Imagine a 2×2 table. In a chi-squared test, you will have row, column, and grand totals. Given these totals, if you know the count in any one of the four cells, then you can fill in the remaining three cells with no further information. So you have 1 'degree of freedom'. // My Answer below illustrates with a simulation that the chi-squared statistic has very nearly a chi-squared distribution with one degree of freedom. – BruceET Oct 3 at 4:10

• Degrees of freedom are often identified with dimensions in $n$-space: The $2 \times 2$ table is a 4-dimensional object, but as the result of the conditioning on totals, the chi-squared statistic has only one dimension. Just as you have to sum the squares of two sides of a right triangle to get the length of the one-dimensional hypotenuse, you have to sum squares in four dimensions to get the one-dimensional chi-squared statistic. – BruceET Oct 3 at 4:30

Here is one kind of chi-squared test based on a $2 \times 2$ table. We have 350 women and 320 men selected at random from the population of a city. We want to know whether the probability of having a college degree is the same in the two groups. Let $p_w$ and $p_m$ be the respective probabilities. Under the null hypothesis $p_w = p_m.$ Let's suppose both probabilities are $1/5.$ We can use binomial distributions to simulate data. Here is how to simulate data for a single chi-squared test (using the parameter cor=F to avoid the Yates continuity correction, which does not exactly use a chi-squared statistic).
    set.seed(310)
    x = rbinom(1, 350, 1/5)
    y = rbinom(1, 320, 1/5)
    DTA = rbind(c(x, 350-x), c(y, 320-y))
    DTA
         [,1] [,2]    # 2 x 2 table
    [1,]   54  296
    [2,]   71  249

    chisq.test(DTA, cor=F)

            Pearson's Chi-squared test
    data:  DTA
    X-squared = 1.5776, df = 1, p-value = 0.2091

Here is how to get chi-squared statistics from 100,000 such tests:

    set.seed(2019)
    m = 10^5; q = numeric(m)
    for(i in 1:m) {
      x = rbinom(1, 350, 1/5); y = rbinom(1, 320, 1/5)
      DTA = rbind(c(x, 350-x), c(y, 320-y))
      q[i] = chisq.test(DTA, cor=F)$stat
    }
    mean(q); var(q)
    [1] 0.9990056    # aprx E(Q) = 1
    [1] 2.002622     # aprx Var(Q) = 2

    lbl = "Simulated Chi-sq Statistics with CHISQ(1) Density"
    hist(q, prob=T, br=40, col="skyblue2", main=lbl)

Under the null hypothesis that the two probabilities are equal, the chi-squared statistic $Q$ (X-squared in the output) has nearly the distribution $\mathsf{Chisq}(1),$ for which the mean is $1$ and the variance is $2.$ The figure below shows a histogram of the 100,000 simulated values of $Q$ along with the closely-matching density function of $\mathsf{Chisq}(1).$
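For readers without R, the same experiment can be sketched in Python with NumPy. This is only a sketch — the seed and variable names are my own, and I use 20,000 replications instead of 100,000 to keep it quick — with the Pearson statistic computed directly from observed and expected counts:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20_000                       # number of simulated tests
q = np.empty(m)
for i in range(m):
    x = rng.binomial(350, 0.2)   # women with a degree
    y = rng.binomial(320, 0.2)   # men with a degree
    obs = np.array([[x, 350 - x], [y, 320 - y]], dtype=float)
    # expected counts: (row total * column total) / grand total
    exp = obs.sum(axis=1, keepdims=True) * obs.sum(axis=0, keepdims=True) / obs.sum()
    q[i] = ((obs - exp) ** 2 / exp).sum()   # Pearson chi-squared statistic

print(q.mean(), q.var())   # should be near 1 and 2, the moments of Chisq(1)
```

With these settings the sample mean and variance of the simulated statistics come out close to 1 and 2, matching the mean and variance of $\mathsf{Chisq}(1)$ just as in the R run above.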
http://nd.ics.org.ru/authors_nd/detail/317864-andrey_safronov
# Andrey Safronov

ul. Onezhskaya 8, Moscow, 125438, Russia
Keldysh Research Center

## Publications:

Safronov A. A., Investigation of the Structure of Waves Generated by a $\delta$-perturbation of the Surface of a Capillary Jet, 2022, Vol. 18, no. 3, pp. 367-378

Abstract: The wave capillary flow on the surface of an inviscid capillary jet, initiated by a single $\delta$-perturbation of its surface, is studied. It is shown that the wave pattern has a complex structure. The perturbation generates both fast traveling damped waves and a structure of nonpropagating exponentially growing waves. The structure of self-similar traveling waves is investigated. It is shown that there are three independent families of such self-similar solutions. The characteristics of the structure of nonpropagating exponentially growing waves are calculated. The characteristic time of formation of such a structure is determined.

Keywords: instability, capillary flow, nonviscous jet

Citation: Safronov A. A., Investigation of the Structure of Waves Generated by a $\delta$-perturbation of the Surface of a Capillary Jet, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 3, pp. 367-378. DOI: 10.20537/nd220303

Safronov A. A., Koroteev A. A., Filatov N. I., Safronova N. A., Capillary Hydraulic Jump in a Viscous Jet, 2019, Vol. 15, no. 3, pp. 221-231

Abstract: Stationary waves in a cylindrical jet of a viscous fluid are considered. It is shown that when the term with the third derivative of the jet radius with respect to the axial coordinate is taken into account in the expression for the capillary pressure gradient, the previously described self-similar solutions of the hydrodynamic equations arise. Solutions of the equation of stationary wave propagation are studied analytically. The form of stationary soliton-like solutions is calculated numerically. The results obtained are used to analyze the process of thinning and rupture of jets of viscous liquids.
Keywords: instability, capillary flows, viscous jet, stationary waves

Citation: Safronov A. A., Koroteev A. A., Filatov N. I., Safronova N. A., Capillary Hydraulic Jump in a Viscous Jet, Rus. J. Nonlin. Dyn., 2019, Vol. 15, no. 3, pp. 221-231. DOI: 10.20537/nd190302

Safronov A. A., Koroteev A. A., Filatov N. I., Grigoriev A. L., The Effect of Long-Range Interactions on Development of Thermal Waves in the Radiation-Cooling Dispersed Flow, 2018, Vol. 14, no. 3, pp. 343-354

Abstract: The influence of long-range interactions on the progress of heat waves in the radiation-cooling disperse flow is considered. It is shown that the system exhibits oscillations attendant on the process of establishing an equilibrium temperature profile. The oscillation amplitude and the rate of oscillation damping are determined. The conditions under which the radiation cooling process can be unstable with respect to temperature field perturbations are revealed. The results of theoretical analysis and numerical calculation of the actual droplet flow are compared.

Keywords: disperse flows, radiative heat transfer, long-range interactions, instability

Citation: Safronov A. A., Koroteev A. A., Filatov N. I., Grigoriev A. L., The Effect of Long-Range Interactions on Development of Thermal Waves in the Radiation-Cooling Dispersed Flow, Rus. J. Nonlin. Dyn., 2018, Vol. 14, no. 3, pp. 343-354. DOI: 10.20537/nd180305
http://mathoverflow.net/revisions/28610/list
# Probability of a Point on a Unit Sphere lying within a Cube

Suppose we have an $(n-1)$-dimensional unit sphere centered at the origin: $$\sum_{i=1}^{n}{x_i}^2 = 1$$ What is the probability that a randomly selected point on the sphere, $(x_1,x_2,x_3,\ldots,x_n)$, has coordinates such that $$\forall i,\ |x_i| \leq d$$ for some $d \in [0,1]$?

This is equivalent to finding the intersection of the $(n-1)$-hypersphere with the $n$-hypercube of side $2d$ centered at the origin, and then taking the ratio of that $(n-1)$-volume to the $(n-1)$-volume of the $(n-1)$-hypersphere. As there are closed-form formulas for the volume of a hypersphere, the problem reduces to finding the $(n-1)$-volume of the aforementioned intersection.

All attempts I've made to solve this intersection problem have led me to a series of nested integrals, where one or both limits of each integral depend on the coordinate outside that integral, and I know of no way to evaluate it. For example, using hyperspherical coordinates, I have obtained the following integral: $$2^n n! \int_{\phi_{n-1}=\tan^{-1}\frac{\sqrt{1-(n-1)d^2}}{d}}^{\tan^{-1}1} \int_{\phi_{n-2}=\tan^{-1}\frac{\sqrt{1-(n-2)d^2}}{d}}^{\tan^{-1}\frac{1}{\cos\phi_{n-1}}}\cdots\int_{\phi_1=\tan^{-1}\frac{\sqrt{1-d^2}}{d}}^{\tan^{-1}\frac{1}{\cos\phi_2}} d_{S^{n-1}}V$$ where $$d_{S^{n-1}}V = \sin^{n-2}(\phi_1)\sin^{n-3}(\phi_2)\cdots \sin(\phi_{n-2})\ d\phi_1\ d\phi_2\ldots d\phi_{n-1}$$ is the volume element of the $(n-1)$-sphere. But this is pretty useless, as I can see no way of integrating this accurately for high dimensions (in the thousands, say).

Using cartesian coordinates, the problem can be restated as evaluating: $$\int_{\sum_{i=1}^{n-1}{x_i}^2\leq 1,\ |x_i| \leq d} \frac{1}{\sqrt{1-\sum_{i=1}^{n-1}{x_i}^2}}\,dx_1\, dx_2 \ldots dx_{n-1}$$ which, as far as I know, is un-integrable.

I would greatly appreciate any attempt at estimating this probability (giving an upper bound, say) and how it depends on $n$ and $d$. Or, given a particular probability and fixed $d$, to find $n$ which satisfies that probability.

Edit: This question leads to two questions that are slightly more general:

1) I think part of the difficulty is that neither spherical nor cartesian coordinates work very well for this problem, because we're trying to find the intersection between a region that is best expressed in spherical coordinates (the sphere) and another that is best expressed in cartesian coordinates (the cube). Are there other problems that are similar to this? And how are their solutions usually formulated?

2) Also, the problem with the integral is that the limits of each of the nested integrals are functions of the "outer" variable. Is there any general method of solving these kinds of integrals?
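Absent a closed form, a Monte Carlo estimate is easy to set up: a uniformly random point on the unit $(n-1)$-sphere can be sampled by normalizing an $n$-dimensional standard Gaussian vector. A minimal Python/NumPy sketch (the function name and sample sizes are my own; for very large $n$, when the probability is extremely close to 0 or 1, plain Monte Carlo loses accuracy):

```python
import numpy as np

def prob_in_cube(n, d, samples=10_000, seed=1):
    """Monte Carlo estimate of P(|x_i| <= d for all i) for a point
    chosen uniformly on the unit (n-1)-sphere in R^n."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((samples, n))              # Gaussian vectors
    x = g / np.linalg.norm(g, axis=1, keepdims=True)   # project onto the sphere
    return np.mean(np.abs(x).max(axis=1) <= d)

print(prob_in_cube(10, 1.0))    # 1.0: the cube with d = 1 contains the sphere
print(prob_in_cube(10, 0.25))   # 0.0: 0.25 < 1/sqrt(10), the cube misses the sphere
print(prob_in_cube(1000, 0.1))  # strictly between 0 and 1
```

The first two printed values are forced by geometry: with $d = 1$ every point of the sphere satisfies $|x_i| \leq 1$, while with $d < 1/\sqrt{n}$ the whole cube lies strictly inside the unit ball ($\sum x_i^2 \leq nd^2 < 1$), so it cannot meet the sphere at all.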
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvt&paperid=1302&option_lang=eng
TVT, 2006, Volume 44, Issue 2, Pages 174–179 (Mi tvt1302)

Plasma Investigations

The effect of axisymmetric two-dimensional magnetic field on the configuration of vacuum-arc plasma

E. F. Prozorov, K. N. Ulyanov, V. A. Fedorov

Russian Electrotechnical Institute Named after V. I. Lenin

Abstract: An experimental investigation is made of the effect of axisymmetric two-dimensional magnetic field on the forming of plasma and on the configuration of cathode spots in a vacuum-arc discharge. It is demonstrated that a magnetic field with a transverse (relative to the discharge axis) component has a significant effect on the shape of plasma column and on the rate of expansion of the cathode spot region. In a magnetic field, arc plasma has the form of truncated cone expanding toward the anode. The cathode spots take up a part of the cathode area which decreases with increasing magnetic field. Arguments are given in support of the assumption that the arrangement of cathode spots and the form of arc plasma are defined by the minimum principle similar to the Steinbeck principle. In so doing, the displacement of spots is caused by their emergence in a new region corresponding to a lower arc voltage. Also discussed is the mechanism associated with retrograde motion of cathode spot in view of the effect of azimuthal magnetic field on the axial component of current and of the effect of axial magnetic field on the azimuthal component of current.

Full text: PDF file (1513 kB)

English version: High Temperature, 2006, 44:2, 166–171

UDC: 537.52
PACS: 52.25.Hz, 52.80.Vp

Citation: E. F. Prozorov, K. N. Ulyanov, V. A.
Fedorov, “The effect of axisymmetric two-dimensional magnetic field on the configuration of vacuum-arc plasma”, TVT, 44:2 (2006), 174–179; High Temperature, 44:2 (2006), 166–171 Citation in format AMSBIB \Bibitem{ProUlyFed06} \by E.~F.~Prozorov, K.~N.~Ulyanov, V.~A.~Fedorov \paper The effect of axisymmetric two-dimensional magnetic field on the configuration of vacuum-arc plasma \jour TVT \yr 2006 \vol 44 \issue 2 \pages 174--179 \mathnet{http://mi.mathnet.ru/tvt1302} \elib{http://elibrary.ru/item.asp?id=9187068} \transl \jour High Temperature \yr 2006 \vol 44 \issue 2 \pages 166--171 \crossref{https://doi.org/10.1007/s10740-006-0020-4} \elib{http://elibrary.ru/item.asp?id=13517013} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-33646734866}
https://www.physicsforums.com/threads/solutions-of-permuted-linear-equations.611011/
# Solutions of "permuted" linear equations

1. Jun 3, 2012

### anthony2005

Hi everyone, one little mathematical puzzle. Say I have $m$ vectors $\overrightarrow{\mu}_{i}$ and one vector $\overrightarrow{\rho}$, all in an $n\leq m$ dimensional vector space, which are known. My question is: if $\sigma$ permutes the $m$ indices, how many of the $m!$ equations $\alpha_{\sigma(1)}\overrightarrow{\mu}_{1}+\alpha_{\sigma(2)}\overrightarrow{\mu}_{2}+\ldots+\alpha_{\sigma(m)}\overrightarrow{\mu}_{m}=\overrightarrow{\rho}$ will be satisfied if each $\alpha_{i}$ is a non-negative integer? Thank you
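For small $m$ the count can be checked by brute force. One subtlety: if the $\alpha_i$ are completely unconstrained unknowns, relabeling them by $\sigma$ changes nothing, so a concrete reading fixes a coefficient tuple $\alpha$ and asks how many of the $m!$ relabelings hit $\overrightarrow{\rho}$. A toy Python sketch along those lines (the vectors, the tuple, and the helper name are all made up for illustration):

```python
import itertools
import numpy as np

def count_satisfied(mu, rho, alpha):
    """Count permutations sigma with sum_i alpha[sigma(i)] * mu_i == rho.
    Brute force over all m! permutations -- toy sizes only."""
    mu = [np.asarray(v, dtype=float) for v in mu]
    rho = np.asarray(rho, dtype=float)
    m = len(mu)
    count = 0
    for sigma in itertools.permutations(range(m)):
        lhs = sum(alpha[sigma[i]] * mu[i] for i in range(m))
        if np.allclose(lhs, rho):
            count += 1
    return count

# toy data: m = 3 vectors in a 2-dimensional space
mu = [(1, 0), (0, 1), (1, 1)]
rho = (2, 3)
print(count_satisfied(mu, rho, alpha=(0, 1, 2)))  # exactly one relabeling works
```

Here the left side is $(\alpha_A + \alpha_C,\ \alpha_B + \alpha_C)$ for the three assigned coefficients, and only the assignment $(0, 1, 2)$ gives $(2, 3)$, so the count is 1 out of $3! = 6$.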
http://albanyareamathcircle.blogspot.com/2013/05/recommended-summer-reading-for-young.html
## Wednesday, May 29, 2013

### Recommended summer reading for young students (and their parents!) aspiring to climb mathematical mountains together this summer

Does math class make your child feel like a hamster in a cage, stuck in the wheel of an endlessly repetitive "spiral curriculum," with little to challenge or inspire her? If you answered yes, then this book could provide a much-needed breath of fresh air.

Imagine if one of your daughter's classmates had an MIT professor dad who loved the fun of mathematical problem solving in his spare time. Dream on, and imagine that he volunteered to share his enthusiasm and talents as a mentor with a small group of students including your child, busting them out of the conventional curriculum hamster wheel to take them on challenging mathematical rock-climbing adventures with inspiring views of beautiful mathematical mountain vistas.

Glenn Ellison's daughters are fortunate to have just such a dad, and this engaging book is the result of his very successful mathematical excursions with his daughters and their schoolmates. Some of the students with whom he has worked for a number of years have now grown into world-class problem solvers.

Written in a good-natured conversational style, Hard Math for Elementary School lays the foundation for elementary school students to develop the tools and habits of confident, capable, and curious problem solvers. The text provides well-organized explanations, and the accompanying workbook poses thoughtfully composed practice problems designed to inspire children to tackle tough problems that exceed the expectations of conventional textbooks.

This book and its earlier counterpart for somewhat older students, Hard Math for Middle School, are great solutions to questions frequently posed by parents of young students looking for summer reading for their mathematically voracious students.

Sumer is icumen in, Lhude sing cuccu! Groweth sed and bloweth med And springth the wude nu, Sing cuccu!

Enjoy your summer!
Parents may find they too enjoy learning some new mathematical insights if they talk about these problems with their children.  It is great for students to discover that sometimes they can figure out answers to problems that stump grownups!  As I have discovered myself, time and time again, when working with my own children as well as other people's children in my math outreach activities, while it may be humbling for me, it is empowering and exciting for children when a flash of insight enables them to climb a mathematical mountain before I do. (Disclosure:  thanks to Professor Ellison for sharing a prepublication review copy of the manuscript with me.)
https://plainmath.net/45007/when-should-i-use-brackets-or-parenthesis-in-finding-domain
# When should I use brackets or parenthesis in finding domain

When should I use brackets or parenthesis in finding domain or range?

Paineow

You should use a bracket (square bracket) to indicate that the endpoint is included in the interval, and a parenthesis (round bracket) to indicate that it is not. Brackets are like inequalities that say "or equal" and parentheses are like strict inequalities.

For example, (3,7) includes 3.1 and 3.007 and 3.00000000002, but it does not include 3. It also includes numbers greater than 3 and less than 7, but it does not include 7. We can say this is 3 to 7 "exclusive" (excluding the endpoints).

[4,9] includes 4 and every number from 4 up to 9, and it also includes 9. We can say this is 4 to 9 "inclusive" (including the endpoints).

$\left(a,b\right)=\left\{x:a<x\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}x<b\right\}$

$\left[a,b\right]=\left\{x:a\le x\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}x\le b\right\}$

Of course, there can also be mixed intervals (a,b] or [a,b).

The symbols $\infty$ and $-\infty$ are used to indicate that there is no left or right endpoint for the interval. They always take parentheses.

For example: the domain of $\sqrt{x}$ is $\left[0,\infty \right)$, with a bracket at 0 because $\sqrt{0}=0$ is a number. The domain of $\frac{1}{\sqrt{x}}$ is $\left(0,\infty \right)$, with a parenthesis at 0 because $\frac{1}{\sqrt{0}}$ is not a number.

Ana Robertson

I'm really grateful, thanks
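The bracket-versus-parenthesis rule above amounts to choosing between "or equal" and strict inequalities; a small Python membership check makes that explicit (the function and flag names are mine, not from the answer):

```python
def in_interval(x, a, b, left_closed, right_closed):
    """Check whether x lies in the interval from a to b.

    left_closed / right_closed say whether that endpoint uses a
    bracket (included, like <=) or a parenthesis (excluded, like <).
    """
    left_ok = (a <= x) if left_closed else (a < x)
    right_ok = (x <= b) if right_closed else (x < b)
    return left_ok and right_ok

# (3, 7): parentheses on both sides, so neither endpoint belongs
# [4, 9]: brackets on both sides, so both endpoints belong
```

For an unbounded side like [0, ∞) there is no right endpoint to test, which is exactly why infinity always takes a parenthesis.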
http://www.mathworks.com/help/comm/ref/continuoustimevco.html?nocookie=true
Continuous-Time VCO Implement voltage-controlled oscillator Library Components sublibrary of Synchronization Description The Continuous-Time VCO (voltage-controlled oscillator) block generates a signal with a frequency shift from the Quiescent frequency parameter that is proportional to the input signal. The input signal is interpreted as a voltage. If the input signal is u(t), then the output signal is $y\left(t\right)={A}_{c}\mathrm{cos}\left(2\pi {f}_{c}t+2\pi {k}_{c}{\int }_{0}^{t}u\left(\tau \right)d\tau +\phi \right)$ where Ac is the Output amplitude parameter, fc is the Quiescent frequency parameter, kc is the Input sensitivity parameter, and φ is the Initial phase parameter. This block uses a continuous-time integrator to interpret the equation above. The input and output are both sample-based scalar signals. Dialog Box Output amplitude The amplitude of the output. Quiescent frequency The frequency of the oscillator output when the input signal is zero. Input sensitivity This value scales the input voltage and, consequently, the shift from the Quiescent frequency value. The units of Input sensitivity are Hertz per volt. Initial phase The initial phase of the oscillator in radians.
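The block's equation can be approximated in discrete time by replacing the continuous integral of the input voltage with a running sum. The sketch below is an illustration of the formula only, not the Simulink block itself (a rectangular-rule integral stands in for the continuous-time integrator):

```python
import math

def vco_output(u, fs, amplitude, quiescent_freq, sensitivity, initial_phase=0.0):
    """Approximate y(t) = A*cos(2*pi*fc*t + 2*pi*kc*integral(u) + phi)
    for a list of input-voltage samples u taken at sample rate fs (Hz)."""
    dt = 1.0 / fs
    integral = 0.0  # running approximation of the integral of u
    out = []
    for n, voltage in enumerate(u):
        t = n * dt
        out.append(amplitude * math.cos(
            2 * math.pi * quiescent_freq * t
            + 2 * math.pi * sensitivity * integral
            + initial_phase))
        integral += voltage * dt
    return out

# With zero input, the output is a pure cosine at the quiescent frequency.
samples = vco_output([0.0] * 4, fs=8.0, amplitude=2.0,
                     quiescent_freq=2.0, sensitivity=1.0)
```

A nonzero constant input would shift the instantaneous frequency by sensitivity times that voltage, which is what "Hertz per volt" means for the Input sensitivity parameter.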
https://planetmath.org/TukeysLemma
# Tukey’s lemma

Each nonempty family of finite character has a maximal element. (A family of sets has finite character if a set $A$ belongs to the family exactly when every finite subset of $A$ belongs to it.)

Here, by a maximal element we mean a maximal element with respect to the inclusion ordering: $A\leq B$ iff $A\subseteq B$.

This lemma is equivalent to the axiom of choice.
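As a toy illustration of "finite character" and of maximal elements under inclusion, consider the family of subsets of {1,...,5} containing no two consecutive integers. Its defining condition only involves pairs, so membership is determined by the finite subsets, giving finite character. The sketch below checks this by brute force over a finite universe (an illustration only; the lemma itself concerns arbitrary, possibly infinite families):

```python
from itertools import chain, combinations

U = range(1, 6)

def ok(s):
    # membership condition: no two elements of s are consecutive integers
    return all(abs(a - b) != 1 for a, b in combinations(s, 2))

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# finite character: A is in the family iff every finite subset of A is
finite_character = all(
    ok(A) == all(ok(B) for B in subsets(A))
    for A in subsets(U)
)

family = [set(A) for A in subsets(U) if ok(A)]
# maximal elements with respect to inclusion
maximal = [A for A in family if not any(A < B for B in family)]
```

Here the maximal elements are {1,3,5}, {1,4}, {2,4} and {2,5}; Tukey's lemma guarantees at least one such maximal element exists for any nonempty family of finite character.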
http://openstudy.com/updates/508889cbe4b05241909140b6
## sara1234: Help please?

1. soty2013: sure

2. satellite73: you are looking for the number between 0 and $$\pi$$ whose cosine is $$\frac{1}{2}$$

3. satellite73: if you still have that cheat sheet, look at the place on the upper half of the unit circle where the first coordinate is $$\frac{1}{2}$$

4. satellite73: [attachment]

5. Aperogalics: π/3 simple

6. Aperogalics: :)

7. satellite73: ok here is the picture [drawing: right triangle with adjacent side 1 and opposite side 2]

8. satellite73: in this picture, $$\theta=\arctan(2)$$, an angle whose tangent is 2. since tangent is "opposite over adjacent" i labelled the opposite side 2 and the adjacent side 1

9. satellite73: the hypotenuse we find by pythagoras, it is $$\sqrt{1^2+2^2}=\sqrt{1+4}=\sqrt{5}$$

10. satellite73: and using sine as "opposite over hypotenuse" we see that the sine of the angle is $\frac{2}{\sqrt{5}}$
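The thread's two computations, arccos(1/2) = π/3 and sin(arctan 2) = 2/√5, can be verified numerically; a small Python check (not part of the original discussion):

```python
import math

# the angle between 0 and pi whose cosine is 1/2
theta = math.acos(0.5)

# the right-triangle picture: opposite side 2, adjacent side 1, so tan = 2
phi = math.atan(2)
hyp = math.sqrt(1**2 + 2**2)  # Pythagoras: sqrt(5)
sin_phi = 2 / hyp             # "opposite over hypotenuse"
```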
https://www.gamedev.net/forums/topic/635693-compute-tangent-of-partial-derivates/
# Compute tangent of partial derivates

## Recommended Posts

UPS!!: Title wrong - should be: Compute Normal of partial derivates

Hey, I've created a bezier surface and want to compute the normal of each vertex. To solve the problem I've read that I have to compute the cross product of the partial derivatives, but the partial derivatives in this case are a float4 vector and the normal is a float3 vector. If I calculate just with the .xyz components I get a somewhat broken result, right?

This is my approach:

[domain("quad")]
DomainOut DS(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 16> bezPatch)
{
    DomainOut dout;
    float4 basisU = BernsteinBasis(uv.x);
    float4 basisV = BernsteinBasis(uv.y);
    float3 p = CubicBezierSum(bezPatch, basisU, basisV);
    dout.PosH = mul(float4(p, 1.0f), gWorldViewProj);
    dout.PosW = mul(float4(p, 1.0f), gWorld).xyz;
    float3 NormalL = cross(basisU.xyz, basisV.xyz);
    dout.NormalW = mul(float4(NormalL, 1.0f), gWorldInvTranspose);
    return dout;
}

Can u give me a hint how it's done correctly? Thanks

Edited by ~Helgon
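For reference, the usual fix is not to cross the basis-function vectors themselves, but to evaluate the surface's partial derivatives ∂p/∂u and ∂p/∂v (using the derivative Bernstein basis together with the control points) and cross those; when transforming the normal, the w component should also be 0, not 1. Since HLSL can't run here, below is a Python sketch of the underlying math (function names are mine, a stand-in for the BernsteinBasis/CubicBezierSum helpers in the post):

```python
import math

def bernstein(t):
    inv = 1.0 - t
    return [inv**3, 3*t*inv**2, 3*t**2*inv, t**3]

def d_bernstein(t):
    # derivatives of the cubic Bernstein polynomials
    inv = 1.0 - t
    return [-3*inv**2, 3*inv**2 - 6*t*inv, 6*t*inv - 3*t**2, 3*t**2]

def bezier_sum(patch, bu, bv):
    # patch: 4x4 grid of (x, y, z) control points; bv weights rows, bu columns
    p = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            w = bv[i] * bu[j]
            for k in range(3):
                p[k] += w * patch[i][j][k]
    return p

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def surface_normal(patch, u, v):
    du = bezier_sum(patch, d_bernstein(u), bernstein(v))  # dp/du
    dv = bezier_sum(patch, bernstein(u), d_bernstein(v))  # dp/dv
    n = cross(du, dv)
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# sanity check: a flat patch in the z = 0 plane should get normal (0, 0, 1)
patch = [[(float(j), float(i), 0.0) for j in range(4)] for i in range(4)]
n = surface_normal(patch, 0.3, 0.7)
```

In HLSL terms the same idea would mean a second pair of basis vectors from the derivative Bernstein polynomials, two CubicBezierSum-style evaluations for dp/du and dp/dv, then cross(du, dv) for NormalL.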
https://www.r-bloggers.com/linear-regression-and-anova-shaken-and-stirred-part-1-2/
Linear Regression and ANOVA shaken and stirred (Part 1)

March 19, 2017

(This article was first published on Pachá (Batteries Included), and kindly contributed to R-bloggers)

Linear Regression and ANOVA concepts are understood as separate concepts most of the time. The truth is they are extremely related to each other, with ANOVA being a particular case of Linear Regression. Even worse, it's quite common that students memorize equations and tests instead of trying to understand the Linear Algebra and Statistics concepts that can keep you away from misleading results, but that is material for another entry.

Most textbooks present econometric concepts and algebraic steps and do emphasise the relationship between Ordinary Least Squares, Maximum Likelihood and other methods to obtain estimates in Linear Regression. Here I present a combination of a little algebra and R commands to try to clarify some concepts.

Linear Regression

$$\vec{y} = X\vec{\beta} + \vec{e}$$

Being:

$$\underset{n\times 1}{\vec{y}} = \begin{pmatrix}y_0 \cr y_1 \cr \vdots \cr y_n\end{pmatrix} \text{ and } \underset{n\times p}{X} = \begin{pmatrix}1 & x_{11} & \ldots & x_{1p} \cr 1 & x_{21} & \ldots & x_{2p} \cr \vdots & & \ddots & \vdots \cr 1 & x_{n1} & \ldots & x_{np}\end{pmatrix} = (\vec{1} \: \vec{x}_1 \: \ldots \: \vec{x}_p)$$

In linear models the aim is to minimize the error term by choosing $$\hat{\vec{\beta}}$$.
One possibility is to minimize the squared error by solving this optimization problem:

$$\label{min} \displaystyle \min_{\vec{\beta}} S = \|\vec{y} - X\vec{\beta}\|^2$$

Books such as Baltagi discuss how to solve $$\eqref{min}$$ and other equivalent approaches that result in this optimal estimator:

$$\label{beta} \hat{\vec{\beta}} = (X^tX)^{-1} X^t\vec{y}$$

With one independent variable and intercept, this is $$y_i = \beta_0 + \beta_1 x_{i1} + e_i$$, and equation $$\eqref{beta}$$ means:

$$\label{beta2} \hat{\beta}_1 = cor(\vec{y},\vec{x}) \cdot \frac{sd(\vec{y})}{sd(\vec{x})} \text{ and } \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{\vec{x}}$$

Coding example with mtcars dataset

Consider the model (fitted with an intercept): $$mpg_i = \beta_0 + \beta_1 wt_i + \beta_2 cyl_i + e_i$$

This is how to write that model in R notation:

lm(mpg ~ wt + cyl, data = mtcars)

Call: lm(formula = mpg ~ wt + cyl, data = mtcars)

Coefficients: (Intercept) wt cyl 39.686 -3.191 -1.508

Or written in matrix form:

y = mtcars$mpg
x0 = rep(1, length(y))
x1 = mtcars$wt
x2 = mtcars$cyl
X = cbind(x0,x1,x2)

It's the same to use lm or to perform a matrix multiplication because of equation $$\eqref{beta}$$:

fit = lm(y ~ x1 + x2)
coefficients(fit)

(Intercept) x1 x2 39.686261 -3.190972 -1.507795

beta = solve(t(X)%*%X) %*% (t(X)%*%y)
beta

[,1] x0 39.686261 x1 -3.190972 x2 -1.507795

Coding example with Galton dataset

Equation $$\eqref{beta2}$$ can be verified with R commands:

#install.packages("HistData")
require(HistData)
# read the documentation
??Galton
y = Galton$child
x = Galton$parent
beta1 = cor(y, x) * sd(y) / sd(x)
beta0 = mean(y) - beta1 * mean(x)
c(beta0, beta1)

[1] 23.9415302 0.6462906

#comparing with lm results
lm(y ~ x)

Call: lm(formula = y ~ x)

Coefficients: (Intercept) x 23.9415 0.6463

Coding example with mtcars dataset and mean centered regression

Another possibility in linear models is to rewrite the observations in the outcome and the design matrix with respect to the mean of each variable.
That will only alter the intercept but not the slope coefficients. So, for a model like $$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + e_i$$ I can write the equivalent model:

$$y_i - \bar{y} = \beta_0 + \beta_1 (x_{i1} - \bar{x}_1) + \beta_2 (x_{i2} - \bar{x}_2) + e_i$$

Another possibility is to consider that $$\bar{y} = \beta_0 + \beta_1 \bar{x}_1 + \beta_2 \bar{x}_2 + 0$$ under the classical assumption $$\bar{e} = 0$$, and subtracting I obtain:

$$y_i - \bar{y} = \beta_1 (x_{i1} - \bar{x}_1) + \beta_2 (x_{i2} - \bar{x}_2) + e_i$$

I'll analyze the first case, without dropping $$\beta_0$$ unless there's statistical evidence to show it's not significant. In R notation the model $$y_i - \bar{y} = \beta_0 + \beta_1 (x_{i1} - \bar{x}_1) + \beta_2 (x_{i2} - \bar{x}_2) + e_i$$ can be fitted in this way:

# read the documentation
??mtcars
new_y = mtcars$mpg - mean(mtcars$mpg)
new_x1 = mtcars$wt - mean(mtcars$wt)
new_x2 = mtcars$cyl - mean(mtcars$cyl)
fit2 = lm(new_y ~ new_x1 + new_x2)
coefficients(fit2)

(Intercept) new_x1 new_x2 4.527895e-16 -3.190972e+00 -1.507795e+00

new_X = cbind(x0,new_x1,new_x2)
new_beta = solve(t(new_X)%*%new_X) %*% (t(new_X)%*%new_y)
new_beta

[,1] x0 6.879624e-16 new_x1 -3.190972e+00 new_x2 -1.507795e+00

Here the intercept is close to zero, so I can obtain more information to check significance:

summary(fit2)

Call: lm(formula = new_y ~ new_x1 + new_x2)

Residuals: Min 1Q Median 3Q Max -4.2893 -1.5512 -0.4684 1.5743 6.1004

Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 4.528e-16 4.539e-01 0.000 1.000000 new_x1 -3.191e+00 7.569e-01 -4.216 0.000222 *** new_x2 -1.508e+00 4.147e-01 -3.636 0.001064 ** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1

Residual standard error: 2.568 on 29 degrees of freedom Multiple R-squared: 0.8302, Adjusted R-squared: 0.8185 F-statistic: 70.91 on 2 and 29 DF, p-value: 6.809e-12

In this particular case I should drop the intercept because it's not significant, so I write:

fit3 = lm(new_y ~ new_x1 + new_x2 - 1)
coefficients(fit3)

new_x1 new_x2 -3.190972 -1.507795

new_X = cbind(new_x1,new_x2)
new_beta = solve(t(new_X)%*%new_X) %*% (t(new_X)%*%new_y)
new_beta

[,1] new_x1 -3.190972 new_x2 -1.507795

Residuals

The total sum of squares is defined as the sum of explained and residual (or unexplained) sum of squares or, in other words, the sum of explained and unexplained variance in the model:

$$TSS = ESS + RSS = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i (y_i - \hat{y}_i)^2 = \sum_i (y_i - \bar{y})^2$$

Being $$\hat{\vec{y}} = X\hat{\vec{\beta}}$$. Here $$TSS$$ has $$n-1$$ degrees of freedom, split so that $$ESS$$ has $$p$$ degrees of freedom and $$RSS$$ has $$n-p-1$$ degrees of freedom, and the F-statistic is:

$$F = \frac{ESS/p}{RSS/(n-p-1)}$$

which under the null hypothesis follows an $$F(p, n-p-1)$$ distribution. This statistic tests the null hypothesis $$\vec{\beta} = \vec{0}$$. This is, the F-statistic provides information about the joint effect of all the variables in the model together, and therefore p-values are required to determine single coefficients' significance.

ANOVA

The term analysis of variance refers to categorical predictors, so ANOVA is a particular case of the linear model that works around the statistical test just described and the difference in group means. ANOVA is a particular case of the linear model where predictors (or independent variables) are dummy variables that reflect if an observation belongs to a certain group. An example of this would be $$x_{i1} = 1$$ if observation $$i$$ belongs to a group of interest (e.g. the interviewed person is in the group of people who have a Twitter account) and $$x_{i1} = 0$$ otherwise.
The null hypothesis in ANOVA is "group means are all equal", as I'll explain with examples. This comes from the fact that regression coefficients in ANOVA measure the effect of belonging to a group, and as explained for the F test you can examine the p-value associated with a regression coefficient to check if the group effect is statistically different from zero (e.g. if you have a group of people who use social networks and a subgroup of people who use Twitter, then if the dummy variable that expresses Twitter use has a non-significant regression coefficient, you have evidence to state that the group means are not statistically different).

An example with mtcars dataset

In the mtcars dataset, am can be useful to explain ANOVA as its observations are defined as:

$$am_i = \begin{cases}1 &\text{ if car } i \text{ is manual} \cr 0 &\text{ if car } i \text{ is automatic}\end{cases}$$

Case 1

Consider a model where the outcome is mpg and the design matrix is $$X = (\vec{x}_1 \: \vec{x}_2)$$ so that the terms are defined in this way:

y = mtcars$mpg
x1 = mtcars$am
x2 = ifelse(x1 == 1, 0, 1)

This is:

$$x_1 = \begin{cases}1 &\text{ if car } i \text{ is manual} \cr 0 &\text{ if car } i \text{ is automatic}\end{cases} \quad \quad x_2 = \begin{cases}1 &\text{ if car } i \text{ is automatic} \cr 0 &\text{ if car } i \text{ is manual}\end{cases}$$

The estimates without intercept would be:

fit = lm(y ~ x1 + x2 - 1)
fit$coefficients

x1 x2 24.39231 17.14737

Taking $$\eqref{beta}$$ and replacing in this particular case would result in this estimate:

$$\hat{\vec{\beta}} = \begin{bmatrix}\bar{y}_1 \cr \bar{y}_2 \end{bmatrix}$$

being $$\bar{y}_1$$ and $$\bar{y}_2$$ the group means. This can be verified with R commands:

y1 = y*x1; y1 = ifelse(y1 == 0, NA, y1)
y2 = y*x2; y2 = ifelse(y2 == 0, NA, y2)
mean(y1, na.rm = TRUE)

[1] 24.39231

mean(y2, na.rm = TRUE)

[1] 17.14737

If you are not convinced of this result you can write down the algebra or use R commands.
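The same verification can also be done outside R. A pure-Python sketch of Case 1 with toy numbers (invented for illustration, not the actual mtcars columns): because x1 and x2 are never 1 at the same time, X'X is diagonal, so the estimator (X'X)^{-1}X'y reduces to per-group sums divided by group sizes, i.e. the group means.

```python
from statistics import mean

# toy outcome with group labels (1 = "manual", 0 = "automatic")
y = [21.0, 22.8, 24.4, 17.8, 16.4, 15.2]
g = [1, 1, 1, 0, 0, 0]

x1 = g                      # dummy for group 1
x2 = [1 - v for v in g]     # dummy for group 0

# X'X is diagonal (the dummies never overlap), so the OLS estimates are
# just each group's sum over its size
b1 = sum(yi * d for yi, d in zip(y, x1)) / sum(x1)
b2 = sum(yi * d for yi, d in zip(y, x2)) / sum(x2)

# the group means, computed directly
m1 = mean([yi for yi, d in zip(y, x1) if d == 1])
m2 = mean([yi for yi, d in zip(y, x2) if d == 1])
```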
I'll do the last with the notation $$U = (X^tX)^{-1}$$ and $$V = X^t\vec{y}$$:

X = cbind(x1,x2)
U = solve(t(X)%*%X)
V = t(X)%*%y
U;V;U%*%V

x1 x2 x1 0.07692308 0.00000000 x2 0.00000000 0.05263158

[,1] x1 317.1 x2 325.8

[,1] x1 24.39231 x2 17.14737

The entries of $$U$$ are just one over the number of observations of each group and the entries of $$V$$ are the sum of the mpg observations of each group, so that the entries of $$UV$$ are the means of each group:

u11 = 1/sum(x1)
u22 = 1/sum(x2)
v11 = sum(y1, na.rm = TRUE)
v21 = sum(y2, na.rm = TRUE)
u11;u22

[1] 0.07692308

[1] 0.05263158

v11;v21

[1] 317.1

[1] 325.8

u11*v11;u22*v21

[1] 24.39231

[1] 17.14737

Aside from algebra, now I'll show the equivalency between lm and aov, the command used to perform an analysis of variance:

y = mtcars$mpg
x1 = mtcars$am
x2 = ifelse(x1 == 1, 0, 1)
fit2 = aov(y ~ x1 + x2 - 1)
fit2$coefficients

x1 x2 24.39231 17.14737

Case 2

Changing the design matrix to $$X = (\vec{1} \: \vec{x}_1)$$ will lead to the estimate:

$$\hat{\vec{\beta}} = \begin{bmatrix}\bar{y}_2 \cr \bar{y}_1 - \bar{y}_2 \end{bmatrix}$$

Fitting the model results in:

y = mtcars$mpg
x1 = mtcars$am
fit = lm(y ~ x1)
fit$coefficients

(Intercept) x1 17.147368 7.244939

So to see the relationship between the estimates and the group means I need additional steps:

x0 = rep(1,length(y))
X = cbind(x0,x1)
beta = solve(t(X)%*%X) %*% (t(X)%*%y)
beta

[,1] x0 17.147368 x1 7.244939

I did obtain the same estimates with the lm command, so now I calculate the group means:

x2 = ifelse(x1 == 1, 0, 1)
x1 = ifelse(x1 == 0, NA, x1)
x2 = ifelse(x2 == 0, NA, x2)
m1 = mean(y*x1, na.rm = TRUE)
m2 = mean(y*x2, na.rm = TRUE)
beta0 = m2
beta1 = m1-m2
beta0;beta1

[1] 17.14737

[1] 7.244939

In this case this means that the slope for the two groups is the same but the intercept is different, and therefore there exists a positive effect of manual transmission on miles per gallon in average terms.
Again I'll verify the equivalency between lm and aov in this particular case:

y = mtcars$mpg
x1 = mtcars$am
x2 = ifelse(x1 == 1, 0, 1)
fit2 = aov(y ~ x1)
fit2$coefficients

(Intercept) x1 17.147368 7.244939

A simpler way to write the model is:

fit3 = lm(mpg ~ am, data = mtcars)
summary(fit3)

Call: lm(formula = mpg ~ am, data = mtcars)

Residuals: Min 1Q Median 3Q Max -9.3923 -3.0923 -0.2974 3.2439 9.5077

Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 17.147 1.125 15.247 1.13e-15 *** am 7.245 1.764 4.106 0.000285 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.902 on 30 degrees of freedom Multiple R-squared: 0.3598, Adjusted R-squared: 0.3385 F-statistic: 16.86 on 1 and 30 DF, p-value: 0.000285

I can calculate the residuals by hand:

mean_mpg = mean(mtcars$mpg)
fitted_mpg = fit3$coefficients[1] + fit3$coefficients[2]*mtcars$am
observed_mpg = mtcars$mpg
TSS = sum((observed_mpg - mean_mpg)^2)
ESS = sum((fitted_mpg - mean_mpg)^2)
RSS = sum((observed_mpg - fitted_mpg)^2)
TSS;ESS;RSS

[1] 1126.047

[1] 405.1506

[1] 720.8966

Here it's verified that $$TSS = ESS + RSS$$, but aside from that I can extract information from aov:

summary(fit2)

Df Sum Sq Mean Sq F value Pr(>F) x1 1 405.2 405.2 16.86 0.000285 *** Residuals 30 720.9 24.0 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

And check that, as expected, $$ESS$$ is the variance explained by x1. I can also run ANOVA over lm with:

anova(fit3)

Analysis of Variance Table

Response: mpg Df Sum Sq Mean Sq F value Pr(>F) am 1 405.15 405.15 16.86 0.000285 *** Residuals 30 720.90 24.03 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The table provides information on the effect of am over mpg. In this case the null hypothesis is rejected because of the large F-value and the associated p-value.
Considering a 0.05 significance threshold I can say, with 95% confidence, that the regression slope is statistically different from zero, or that there is a difference in group means between automatic and manual transmission.
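As a final cross-check of equation (beta2): the correlation-based slope cor(y,x) * sd(y)/sd(x) and the slope from the normal equations (X'X)^{-1}X'y are the same number. A pure-Python sketch of that equivalence (Python is used here only so the check runs without loading any dataset; the post's own code is R, and the toy data below is invented):

```python
from statistics import mean, stdev

def simple_ols(x, y):
    """Slope via beta1 = cor(x, y) * sd(y) / sd(x); intercept via the means."""
    n = len(x)
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    cor = cov / (stdev(x) * stdev(y))
    beta1 = cor * stdev(y) / stdev(x)
    beta0 = my - beta1 * mx
    return beta0, beta1

def simple_ols_normal_eq(x, y):
    """Same estimates via (X'X)^{-1} X'y written out for X = [1, x]."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    det = n * sxx - sx * sx
    beta0 = (sxx * sy - sx * sxy) / det
    beta1 = (n * sxy - sx * sy) / det
    return beta0, beta1

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1 = simple_ols(x, y)
```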
http://openstudy.com/updates/51269edae4b0111cc68f0e31
## anonymous 3 years ago Need Helpp MEdalls!!!!!

1. anonymous $\frac{ -4+20x }{ 4 }$ Simplify ?

2. anonymous Divide both -4 and 20x by 4

3. anonymous but 4 doesnt goin to -4

4. anonymous It does: suppose you want to check a division, e.g. 18/3=6. Then you know 18 = 6*3. Now -4/4 = ???, then 4 * ???= -4. What number is ???

5. anonymous 1

6. anonymous No, because 4*1=4 and not -4

7. anonymous -1

8. anonymous OK, so -4/4 = -1 Now you have to do: 20x/4

9. anonymous 5

10. anonymous You lost x

11. anonymous 5x

12. anonymous Done!

13. anonymous so thats my final answer are my final equation

14. anonymous You have -1 and 5x, so the final answer is... -1+5x

15. anonymous how bout this one $-\frac{ 1 }{ 2 }(4-6a)$

16. anonymous Use the distributive property: a(b-c)=ab-ac

17. anonymous I dont knoe the distributie Prop

18. anonymous It tells you how to get rid of the brackets! So you have to multiply -1/2 with 4 and also multiply -1/2 with -6a

19. anonymous okay I see

20. anonymous What do you get?

21. anonymous how do I multiply a fraction in to a number

22. anonymous Just forget the "-" for a while. 1/2 * 4 means you have four halves. How many is that?

23. anonymous 2 wholes

24. anonymous OK. Now remember that if you multiply a negative number with a positive number, the outcome is negative. So: -1/2 * 4 = ...

25. anonymous -2

26. anonymous Right!, now the other one...

27. anonymous so for the other half it be-3 the i would be left with(-2- -3) and my answer be -5

28. anonymous You have to multiply, so -1/2 * -6a = ...

29. anonymous 6 isn't negative that was the subtraction sign

30. anonymous OK, so save it for later. So: -1/2 * 6a = ...

31. anonymous -3

32. anonymous You lost the a

33. anonymous -3a ugh srry:p

34. anonymous so then my equation would be(-2 - -3a)

35. anonymous the my answer should be -5a right

36. anonymous No, -2 hasn't got an a with it, so you can't add it to 3a

37. anonymous So it would be -2+3a

38. anonymous -- = +

39. anonymous so it be 1a right

40.
anonymous No, it would be -2+3a. This cannot be written shorter! If it was -2a+3a this would be 1a or a. 41. anonymous oh okay got it
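Both exercises from the thread can be checked with exact fraction arithmetic: (-4+20x)/4 simplifies to -1+5x, and distributing -1/2 over (4-6a) gives -2+3a. A small Python check (not part of the original tutoring session):

```python
from fractions import Fraction

def lhs(a):
    # -1/2 * (4 - 6a), before distributing
    return Fraction(-1, 2) * (4 - 6 * a)

def rhs(a):
    # the distributed form: -2 + 3a
    return -2 + 3 * a

# equal for every integer a we try
distributive_ok = all(lhs(a) == rhs(a) for a in range(-5, 6))

def simplified(x):
    # the first exercise: (-4 + 20x) / 4
    return Fraction(-4 + 20 * x, 4)
```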
https://gmatclub.com/forum/the-numbers-p-and-q-are-both-positive-if-p-percent-of-160-equals-q-pe-273944.html
# The numbers p and q are both positive. If p percent of 160 equals q percent of 40, then p/q =

Math Expert, 21 Aug 2018, 06:21

The numbers p and q are both positive. If p percent of 160 equals q percent of 40, then p/q=

A. Cannot be determined
B. 1/4
C. 2/5
D. 5/2
E. 4

NUS School Moderator, 21 Aug 2018, 06:48

P percent of 160 = $$\frac{P}{100}$$*160

Q percent of 40 = $$\frac{Q}{100}$$*40

$$\frac{P}{100}$$*160 = $$\frac{Q}{100}$$*40. Solving gives 4P = Q. Therefore $$\frac{P}{Q}$$ = $$\frac{1}{4}$$

Board of Directors, 21 Aug 2018, 07:23

Bunuel wrote: The numbers p and q are both positive.
If p percent of 160 equals q percent of 40, then p/q= A. Cannot be determined B. 1/4 C. 2/5 D. 5/2 E. 4

$$\frac{p}{100}*160 = \frac{q}{100}*40$$

Or, $$4p = q$$

So, $$\frac{p}{q} = \frac{1}{4}$$, Answer must be (B)

Senior Manager, 21 Aug 2018, 09:07

1.6p=0.4q

$$\frac{p}{q}$$= $$\frac{0.4}{1.6}$$= $$\frac{1}{4}$$

Intern, 21 Aug 2018, 09:11

160*p/100=40*q/100

4p=q

p/q=1/4

Option B
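All the solutions in the thread reduce to the same ratio, p/q = 40/160 = 1/4; a quick exact-arithmetic check (a standalone illustration, not from the thread):

```python
from fractions import Fraction

def percent_of(p, amount):
    """p percent of amount, kept exact with Fraction."""
    return Fraction(p, 100) * amount

# p% of 160 == q% of 40  =>  160p = 40q  =>  p/q = 40/160
ratio = Fraction(40, 160)
```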
https://www.vedantu.com/question-answer/a-cement-company-earns-a-profit-of-rs-8-per-bag-class-8-maths-cbse-5ee4933fbe1b52452d364b9f
# Question: A cement company earns a profit of Rs. 8 per bag of white cement sold and takes a loss of Rs. 5 per bag of grey cement sold. If the company sells 3000 bags of white cement and 5000 bags of grey cement in a month, what is its profit or loss?

Hint: Using the given information, calculate the total profit earned by selling white cement and the total loss caused by selling grey cement, then compare the two.

As per the given information:

The profit earned per bag of white cement = Rs. 8, and the number of bags of white cement sold is 3000. The total profit earned is $3000\times 8=24{,}000$ rupees.

The loss per bag of grey cement = Rs. 5, and the number of bags of grey cement sold is 5000. The total loss is $5000\times 5=25{,}000$ rupees.

Since 25,000 > 24,000, the total loss is greater than the total profit, so a net loss occurs.

Total loss of the cement company = total loss on grey cement − total profit on white cement = 25,000 − 24,000 = 1000.

Thus, the total loss of the cement company is Rs. 1000.

Note: A common mistake is to compare the per-bag figures without first multiplying by the number of bags sold. Remember that a loss is a negative value; when the total loss exceeds the total profit, the company is in loss overall.
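The arithmetic above can be condensed into a short check. This is a sketch; all numbers come from the problem statement, and the variable names are mine.

```python
# Net result = total profit on white cement minus total loss on grey cement.
white_profit = 3000 * 8     # Rs. 24,000 profit on white cement
grey_loss    = 5000 * 5     # Rs. 25,000 loss on grey cement
net = white_profit - grey_loss
print(net)                  # negative, i.e. a net loss of Rs. 1000
```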
https://math.stackexchange.com/questions/2151699/translation-of-english-sentences-to-first-order-logic-in-conjunctive-normal-form
# Translation of English sentences to first-order logic in conjunctive normal form walkthrough?
After reviewing this webpage describing Universal Instantiation I feel like I could apply its example problem directly to mine. I could have initially applied Universal Instantiation and then Modus ponens to have simply ended with Serpent(Alice) which is in CNF. But this feels like I am "cheating" the problem, or is this most likely the solution the professor seeks for the question? Am I on the right track above or should I try applying the example from the website to my problem? If I am on the right track what rule might I apply next (specifically to the top statement) to continue? One immediate problem I see is that hou have three sentences (making up 1 argument), but you treat this as 1 sentence. Instead, you should put each of the three sentences in CNF by themselves. Second, you say to put the sentences in CNF, but the first sentence involves a quantifier. Does this mean that you just have to put the body in CNF and leave the quantifier? Or are you doing the preprocessing to use resolution? I get the feeling you are ... Anyway, let's take that first sentence, and do some algebraic manipulation: $\forall x ((L(x) \land E(x)) \rightarrow S(x)) \Leftrightarrow$ (Implication) $\forall x (\neg (L(x) \land E(x)) \lor S(x)) \Leftrightarrow$ (DeMorgan) $\forall x (\neg L(x) \lor \neg E(x) \lor S(x))$ And now the body is in CNF. If you have to get rid of the unatifier in preparation of resolution, you just drop it and get: $\neg L(x) \lor \neg E(x) \lor S(x)$ The second statement is already in CNF: $L(a) \land E(a)$ And again , assuming you are setting this up for resolution (which is a consistency checking method), you should negate the conclusion ... Which is also in CNF: $\neg S(a)$ This gives you 4 clauses: 1. $\{ \neg L(x) , \neg E(x) , S(x) \}$ 2. $\{ L(a) \}$ 3. $\{ E(a) \}$ 4. $\{ \neg S(a)\}$ And now we ca resolve: 1. $\{ \neg E(a), S(a) \}$ 1,2 2. $\{ S(a) \}$ 3,5 3. $\{ \}$ 4,6 Contradition, so the original argument is valid! 
• I have posted the entire problem statement, there is not need for resolution only to "put [the sentences] into CNF". // Though I feel like its "cheating"/not in the spirit of the problem to not link the three English sentences into one logical sentence? – KDecker Feb 20 '17 at 0:13 • Also, could you add a short explanation why the body is in CNF even through it is the disjunction of 3 functions? – KDecker Feb 20 '17 at 0:24 • @KDecker I really don't think it is cheating to not link the 3 sentences together, because the passage contains 3 sentences, not 1. There is 1 argument, but an argument is not a sentence. Indeed, an 'if... Then' sentence is really quite different from a '.. Therefore ...' argument. – Bram28 Feb 20 '17 at 0:31 • @KDecker Any disjunction of literals is in CNF. That feels weird, but it is true. A claim is in CNF when it is a general conjunction of disjunctions of literals. A claim like $A \lor B$ Is in CNF because it is a single conjunct, which is a disjunction of literals! – Bram28 Feb 20 '17 at 0:34 • When reading the Wikipedia for CNF I saw A∨B was in CNF so I had assumed A∨B∨C would also be. But yes that is quite odd. Do you mean anything special by "general conjunction" or just conjunction? // As for using 3 or 1 statements, could I come to 1 statement at the end by and'ing all three statements together (which would be in CNF?) (¬L(x) ∨ ¬E(x) ∨ S(x)) ∧ (L(Alice) ∧ E(Alice)) ∧ S(Alice)? – KDecker Feb 20 '17 at 0:53
https://solvedlib.com/n/consider-the-following-functlons-polnt-kx-3-fl-2-3-x2-0,6775478
# Question: Consider the following function, its inverse function, and the given point.

f(x) = √(x − 3);  f⁻¹(x) = x² + 3, x ≥ 0;  point (2, 7)

(a) Find the domains of f and f⁻¹. (Enter your answer using interval notation.)

Domain of f
Domain of f⁻¹

(b) Find the ranges of f and f⁻¹. (Enter your answer using interval notation.)

Range of f
Range of f⁻¹

(c) Graph f and f⁻¹.

(d) Show that the slopes of the graphs of f and f⁻¹ are reciprocals at the given points: f′(7) and (f⁻¹)′(2).
http://clay6.com/qa/10003/the-mean-of-a-binomial-distribution-is-5-and-its-standard-deviation-is-2-th
# The mean of a binomial distribution is $5$ and its standard deviation is $2.$ Then the values of $n$ and $p$ are

$\begin{array}{1 1}(1)\left(\frac{4}{5},25\right)&(2)\left(25,\frac{4}{5}\right)\\(3)\left(\frac{1}{5},25\right)&(4)\left(25,\frac{1}{5}\right)\end{array}$

Given mean $np= 5$ and standard deviation $\sqrt {npq} =2$, so $npq= 4$.

Dividing: $\large\frac{npq}{np}= \frac{4}{5}$, hence $q= \large\frac{4}{5}$

$p=1-q=1-\large\frac{4}{5}=\large\frac{1}{5}$

From $np=5$: $n \times \large\frac{1}{5}$$=5$, so $n=25$

$(n,p)$ is $(25,\large\frac{1}{5})$. Hence (4) is the correct answer.
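The solution above can be verified numerically. This sketch uses Python's fractions module to keep the arithmetic exact; the variable names are illustrative.

```python
from fractions import Fraction

mean, sd = Fraction(5), Fraction(2)   # np = 5, sqrt(npq) = 2
q = sd**2 / mean                      # npq / np = 4/5
p = 1 - q                             # 1/5
n = mean / p                          # n = 25
print(p, n)
```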
https://vcvina.com/2020/05/05/memoryManagement/
# Memory

## What is an "Input Queue"?

The input queue is the collection of processes on disk that are waiting to be brought into main memory so the program can run. When we want to load a process into main memory, we pick one from the input queue.

## The multistep processing of a user program

We use addresses to locate these processes, but how? Is the address generated by the CPU the same as the address in memory? Absolutely not. There are two kinds of address:

1. Logical address (virtual address): the address at which an item (memory cell, storage element, network host) appears to reside from the perspective of an executing application program.
2. Physical address: a memory address represented as a binary number on the address-bus circuitry, enabling the data bus to access a particular storage cell of main memory or a register of a memory-mapped I/O device.

## Overlay ("オーバーレイ") and Paging ("ページング") in virtual memory (仮想記憶)

Target: enable a process to be larger than the amount of memory allocated to it.

- Keep in memory only those instructions and data that are needed at any given time.
- Overlays are implemented by the user; no special support is needed from the operating system, but the programming design of an overlay structure is complex.

What is a Symbol Table (シンボルテーブル)?

- A symbol table is a data structure used by a language translator such as a compiler or interpreter, where each identifier (a.k.a. symbol) in a program's source code is associated with information relating to its declaration or appearance in the source.

## What is "Contiguous Allocation" ("連続メモリ割り当て")?

- Each process is contained in a single contiguous section of memory; this mechanism is called contiguous allocation (連続メモリ割り当て).
- There are two partitions in main memory:
  1. The resident operating system, usually held in low memory together with the interrupt vector (割り込みベクタ).
  2. User processes, held in high memory.
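The logical-vs-physical distinction above can be sketched as a simple base-register mapping, the core of contiguous allocation: the hardware adds a per-process base to every CPU-generated logical address. The base value 14000 and logical address 346 are hypothetical illustrative numbers.

```python
# Minimal sketch of relocation: logical address + per-process base = physical address.
def to_physical(logical_addr: int, base: int) -> int:
    """Map a logical address to a physical address under simple relocation."""
    return base + logical_addr

# A process loaded at physical address 14000 sees its own addresses starting
# from 0, so its logical address 346 lands at physical address 14346.
print(to_physical(346, base=14000))
```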
Memory is usually divided into two areas, one for the operating system and the other for user processes. The operating system can be located in low memory or high memory; the main factor affecting this decision is the location of the interrupt vector. Because the interrupt vector is usually located in low memory, programmers usually put the operating system in low memory as well.

However, because this allocation is contiguous, how can we protect one process from another?

## Memory Protection

- Relocation register: contains the value of the smallest physical address of the process.
- Limit register: contains the range of legal logical addresses.
- Under the joint action of these two mechanisms, the MMU maps each logical address dynamically by adding the value in the relocation register, and every logical address must be less than the value in the limit register (the limit register bounds the memory range of each process).

## Multiple-partition Allocation

- Hole: a block of available memory. Holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it.

The OS maintains information about:

1. allocated partitions
2. free partitions (holes)

- How do we satisfy a request of size n from a list of free holes?
  1. First-fit: allocate the first hole that is big enough.
  2. Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size. Produces the smallest leftover hole.
  3. Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

- Fragmentation ("フラグメンテーション"):
  1. External fragmentation (外部断片化): enough total memory exists to satisfy a request, but it is not contiguous.
  2. Internal fragmentation (内部断片化): allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
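The three hole-selection strategies above can be sketched as follows. The hole sizes and the request size are made-up illustrative values, and `pick_hole` is my own helper name, not an OS API.

```python
def pick_hole(holes, size, strategy):
    """Return the index of the chosen free hole, or None if the request can't fit."""
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                        # first big-enough hole
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest big-enough hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]          # free-hole sizes in KB
print(pick_hole(holes, 212, "first"))      # the 500 KB hole (index 1)
print(pick_hole(holes, 212, "best"))       # the 300 KB hole (index 3), smallest leftover
print(pick_hole(holes, 212, "worst"))      # the 600 KB hole (index 4), largest leftover
```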
Reduce external fragmentation by memory compaction.

The difference between compaction and garbage collection (コンパクションとガベージコレクションの違い): the mechanism that eliminates memory fragmentation is called memory compaction. Depending on the implementation, a garbage collector may also perform compaction together with collection, so compaction is sometimes lumped in under the term garbage collection, but strictly speaking the two are distinguished. (See https://ja.wikipedia.org/wiki/%E3%82%AC%E3%83%99%E3%83%BC%E3%82%B8%E3%82%B3%E3%83%AC%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3)

Compaction shuffles memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic (動的再配置) and is done at execution time.

Compaction is one way to solve this problem, but another is paging, a classical method in modern operating systems.

# Paging

## Background of "Paging"

- Divide physical memory (物理メモリ) into fixed-sized blocks called frames (フレーム); the frame size is a power of 2, between 512 bytes and 8192 bytes.
- Divide logical memory (仮想/論理メモリ) into blocks of the same size, called pages (ページ).

An address generated by the CPU is divided into:

- Page number (p) (ページ番号): used as an index into a page table, which contains the base address of each page in physical memory.
- Page offset (d) (オフセット): combined with the base address to define the physical memory address that is sent to the memory unit.

For example, with a page size of 4 bytes and a page table mapping page 0 to frame 5 and page 1 to frame 6:

- Page 0 => page table (0, 5) => frame 5
- Page 1 => page table (1, 6) => frame 6
- "a" => (page number, offset) = (0, 0) => page table (0, 5) => [frame 5, +0] = 5×4 + 0 = 20
- "f" => (page number, offset) = (1, 1) => page table (1, 6) => [frame 6, +1] = 6×4 + 1 = 25

Paging is a form of dynamic relocation: every logical address is bound by the paging hardware to some physical address. Using paging is similar to using a table of base (or relocation) registers, one for each frame of memory.

## Using "Paging" to solve the fragmentation problem (external fragmentation)

Under a paging scheme there is no external fragmentation, but there may be some internal fragmentation: with a typical page size of 4 KB, the last page of a process usually does not fill its frame completely, leaving some unused space.

## Implementation of the Page Table

The page table is kept in main memory.
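The tiny worked example above (4-byte pages, page 0 mapped to frame 5, page 1 to frame 6) can be expressed directly in code. This is a sketch of the translation step only, not a real MMU; the table values are the toy numbers from the text.

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6}   # page number -> frame number

def translate(logical_addr: int) -> int:
    """Split a logical address into (page, offset) and map it through the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(0))  # 'a' at logical 0 -> (page 0, offset 0) -> frame 5 -> physical 20
print(translate(5))  # 'f' at logical 5 -> (page 1, offset 1) -> frame 6 -> physical 25
```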
• Page-table base register (PTBR): points to the page table.
• Page-table length register (PTLR): indicates the size of the page table.

What are Hierarchical Page Tables? Most modern computer systems support a large logical address space, for which a single page table becomes excessively large. Since we would not want to allocate such a page table contiguously in main memory, we break up the logical address space into multiple levels of page tables.

## Feature of Paging

An important aspect of paging is the clear separation between the user’s view of memory and the actual physical memory: the difference between the two is reconciled by the address-translation hardware.

# Segmentation

## What is Segmentation

Memory segmentation is a (primary) memory management technique that divides a computer’s primary memory into segments or sections. In a system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset within that segment.

The difference between Paging and Segmentation:

• A page is a fixed-size block, while a segment is of variable size.
• Paging may lead to internal fragmentation; segmentation may lead to external fragmentation.

## Segmentation table (different from page table)

Each process has a segment table, described by a Segment-table base register (STBR) and a Segment-table length register (STLR).

# Virtual Memory (仮想記憶)

## What is virtual memory?

Virtual memory is a memory-management technique. It makes the application think that it has contiguous available memory (a continuous and complete address space), while in fact the address space is usually split across multiple physical memory fragments, with some parts temporarily stored on external disk storage and swapped in when necessary.
Compared with a system without virtual memory, a system using this technology makes the writing of large programs easier and the use of real physical memory (such as RAM) more efficient.

## Demand Paging (デマンドページング):

Bring a page into memory only when it is needed.

When there is no free frame: Page Replacement (ページ置換):

• What does that mean? Find some page in memory that is not really in use and swap it out.
• Handling Page Faults (“ページフォールト”): prevent over-allocation of memory by modifying the page-fault service routine to include page replacement. Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on top of a smaller physical memory.
• Of course, we want the lowest possible page-fault rate.
• Page replacement algorithms: FIFO, Least Recently Used (LRU), Counting-based (including LFU, MFU).

Benefits of virtual memory:

• Copy-on-write:
1. Allows both parent and child process to initially share the same pages in memory.
2. If either process modifies a shared page, the page is first copied, and the process then modifies its own copy.
• Memory-mapped files:
1. Allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory.
2. A file is initially read using demand paging. A page-sized portion of the file is read from the file system into a physical page. Subsequent reads/writes to/from the file are treated as ordinary memory accesses.

## Allocating frames

Schemes:

• Fixed allocation: with 100 frames and 5 processes, give each process 20 frames.
• Priority allocation: allocate according to the priority (or size) of the process; on a page fault, select for replacement a frame from a process with a lower priority number.
• Global allocation: one process can take a frame from another. Under global replacement a process may replace another process’s old frame, whereas local replacement can only consider the status of the process’s own frames.

## Thrashing:

A process is busy swapping pages in and out. It will lead to:

• Low CPU utilization.
• The OS thinks that it needs to increase the degree of multiprogramming, so another process is added to the system, which makes the thrashing worse.
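The replacement policies named earlier (FIFO and Least Recently Used) can be sketched as page-fault counters. This is an illustrative toy, not kernel code; the reference string in the note below is made up.

```python
from collections import OrderedDict

def count_faults_fifo(refs, n_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.remove(queue.pop(0))    # evict the oldest-loaded page
            frames.add(p)
            queue.append(p)
    return faults

def count_faults_lru(refs, n_frames):
    """Count page faults under Least Recently Used replacement."""
    frames, faults = OrderedDict(), 0          # front of the dict = least recent
    for p in refs:
        if p in frames:
            frames.move_to_end(p)              # a hit refreshes recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)     # evict the least recently used
            frames[p] = True
    return faults
```

For refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2] with 3 frames, FIFO incurs 10 faults and LRU 9, illustrating why LRU usually tracks actual usage better.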
# Workshop Notebook: Advanced Topics in Word Embeddings

Posted 2019-02-20

This notebook originally accompanied a workshop I gave at NYCDH Week, in February of 2019, called “Advanced Topics in Word Embeddings.” (In truth, it’s only somewhat advanced. With a little background in NLP, this could even serve as an introduction to the subject.) You can run the code in a Binder, here.

Word embeddings are among the most discussed subjects in natural language processing, at the moment. If you’re not already familiar with them, there are a lot of great introductions out there. In particular, check out these:

## An Example of Document Vectors: Project Gutenberg

This figure shows off some of the things you can do with document vectors. Using just the averaged word vectors of each document, and projecting them onto PCA space, you can see a nice divide between fiction and nonfiction books. In fact, I like to think of the line connecting the upper-left and the lower-right as a vector of “fictionality,” with the upper-left corner as “highly fictional,” and the lower right as “highly non-fictional.” Curiously, religious texts are right in between. There’s more on this experiment in this 2015 post.

## Getting Started

First, import the libraries below. (Make sure you have the packages beforehand, of course.)

import pandas as pd
import spacy
from glob import glob
# import word2vec
# import gensim
# from gensim.test.utils import common_texts
# from gensim.models import Word2Vec
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from matplotlib import pyplot as plt
import json
from mpl_toolkits.mplot3d import Axes3D, proj3d
from numpy import dot
from numpy.linalg import norm

%matplotlib notebook
plt.rcParams["figure.figsize"] = (12,8)

Now load the Spacy data that you downloaded (hopefully) prior to the workshop.
If you don’t have it, or get an error below, you might want to check out the documentation that Spacy maintains here for how to download language models. Download the en_core_web_lg model.

nlp = spacy.load('en_core_web_lg')

# Word Vector Similarity

First, let’s make SpaCy “document” objects from a few expressions. These are fully parsed objects that contain lots of inferred information about the words present in the document, and their relations. For our purposes, we’ll be looking at the .vector property, and comparing documents using the .similarity() method. The .vector is just an average of the word vectors in the document, where each word vector comes from a pre-trained model: the Stanford GloVe vectors.

Just for fun, I’ve taken the examples below from Monty Python and the Holy Grail, the inspiration for the name of the Python programming language. (If you haven’t seen it, this is the scene I’m referencing.)

africanSwallow = nlp('African swallow')
europeanSwallow = nlp('European swallow')
coconut = nlp('coconut')

africanSwallow.similarity(europeanSwallow)
0.8596378859289445

africanSwallow.similarity(coconut)
0.2901231866716321

The .similarity() method is nothing special. We can implement our own, using dot products and norms:

def similarity(vecA, vecB):
    return dot(vecA, vecB) / (norm(vecA, ord=2) * norm(vecB, ord=2))

similarity(africanSwallow.vector, europeanSwallow.vector)
0.8596379

# Analogies (Linear Algebra)

In fact, using our custom similarity function above is probably the easiest way to do word2vec-style vector arithmetic (linear algebra). What will we get if we subtract “European swallow” from “African swallow”?

swallowArithmetic = (africanSwallow.vector - europeanSwallow.vector)

To find out, we can make a function that will find all words with vectors that are most similar to our vector. If there’s a better way of doing this, let me know! I’m just going through all the possible words (all the words in nlp.vocab) and comparing them.
This should take a long time.

def mostSimilar(vec):
    highestSimilarities = [0]
    highestWords = [""]
    for w in nlp.vocab:
        sim = similarity(vec, w.vector)
        if sim > highestSimilarities[-1]:
            highestSimilarities.append(sim)
            highestWords.append(w.text.lower())
    return list(zip(highestWords, highestSimilarities))[-10:]

mostSimilar(swallowArithmetic)

[('croup', 0.06349668), ('deceased', 0.11223719), ('jambalaya', 0.14376064), ('cobra', 0.17929554), ('tanzania', 0.25093195), ('rhinos', 0.3014531), ('lioness', 0.34080425), ('giraffe', 0.37119308), ('african', 0.5032688)]

Our most similar word here is “african”! So “African swallow” - “European swallow” = “African”!

Just out of curiosity, what will it say is the semantic neighborhood of “coconut”?

mostSimilar(coconut.vector)

[('jambalaya', 0.24809697), ('tawny', 0.2579049), ('concentrate', 0.35225457), ('lasagna', 0.36302277), ('puddings', 0.4095627), ('peel', 0.47492552), ('eucalyptus', 0.4899935), ('carob', 0.57747585), ('peanut', 0.6609557), ('coconut', 1.0000001)]

Looks like a recipe space. Let’s try the classic word2vec-style analogy, king - man + woman = queen:

king, queen, woman, man = [nlp(w).vector for w in ['king', 'queen', 'woman', 'man']]
answer = king - man + woman
mostSimilar(answer)

[('gorey', 0.03473952), ('deceased', 0.2673984), ('peasant', 0.32680285), ('guardian', 0.3285926), ('comforter', 0.346274), ('virgins', 0.3561441), ('kissing', 0.3649173), ('woman', 0.5150813), ('kingdom', 0.55209804), ('king', 0.802426)]

It doesn’t work quite as well as expected. What about for countries and their capitals? Paris - France + Germany = Berlin?

paris, france, germany = [nlp(w).vector for w in ['Paris', 'France', 'Germany']]
answer = paris - france + germany
mostSimilar(answer)

[('orlando', 0.48517892), ('dresden', 0.51174784), ('warsaw', 0.5628617), ('stuttgart', 0.5869507), ('vienna', 0.6086052), ('prague', 0.6289497), ('munich', 0.6677783), ('paris', 0.6961337), ('berlin', 0.75474036), ('germany', 0.8027713)]

It works!
If you ignore the word itself (“Germany”), then the next most similar one is “Berlin”! # Pride and Prejudice Now let’s look at the first bunch of nouns from Pride and Prejudice. It starts: It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered the rightful property of some one or other of their daughters. First, load and process it. We’ll grab just the first fifth of it, so we won’t run out of memory. (And if you still run out of memory, maybe increase that number.) pride = open('pride.txt').read() pride = pride[:int(len(pride)/5)] prideDoc = nlp(pride) Now grab the first, say, 40 nouns. prideNouns = [w for w in prideDoc if w.pos_.startswith('N')][:40] prideNounLabels = [w.lemma_ for w in prideNouns] prideNounLabels[:10] ['truth', 'man', 'possession', 'fortune', 'want', 'wife', 'feeling', 'view', 'man', 'neighbourhood', 'truth', ... Get the vectors of those nouns. prideNounVecs = [w.vector for w in prideNouns] Verify that they are, in fact, our 300-dimensional vectors. prideNounVecs[0].shape (300,) Use PCA to reduce them to three dimensions, just so we can plot them. reduced = PCA(n_components=3).fit_transform(prideNounVecs) reduced[0].shape (3,) prideDF = pd.DataFrame(reduced) Plot them interactively, in 3D, just for fun. 
%matplotlib notebook plt.rcParams["figure.figsize"] = (10,8) def plotResults3D(df, labels): fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(df[0], df[1], df[2], marker='o') for i, label in enumerate(labels): ax.text(df.loc[i][0], df.loc[i][1], df.loc[i][2], label) plotResults3D(prideDF, prideNounLabels) Now we can rewrite the above function so that instead of cycling through all the words ever, it just looks through all the Pride and Prejudice nouns: # Redo this function with only nouns from Pride and Prejudice def mostSimilar(vec): highestSimilarities = [0] highestWords = [""] for w in prideNouns: sim = similarity(vec, w.vector) if sim > highestSimilarities[-1]: highestSimilarities.append(sim) highestWords.append(w.text.lower()) return list(zip(highestWords, highestSimilarities))[-10:] Now we can investigate, more rigorously than just eyeballing the visualization above, the vector neighborhoods of some of these words: mostSimilar(nlp('fortune').vector) [('', 0), ('truth', 0.3837785), ('man', 0.40059176), ('fortune', 1.0000001)] # Senses If we treat words as documents, and put them in the same vector space as other documents, we can infer how much like that word the document is, vector-wise. Let’s use four words representing the senses: senseDocs = [nlp(w) for w in ['sound', 'sight', 'touch', 'smell']] def whichSense(word): doc = nlp(word) return {sense: doc.similarity(sense) for sense in senseDocs} whichSense('symphony') {sound: 0.37716483832358116, sight: 0.20594014841156277, touch: 0.19551651130481998, smell: 0.19852637065751555} %matplotlib inline plt.rcParams["figure.figsize"] = (14,8) testWords = 'symphony itchy flower crash'.split() pd.DataFrame([whichSense(w) for w in testWords], index=testWords).plot(kind='bar') It looks like it correctly guesses that symphony correlates with sound, and also does so with crash, but its guesses for itchy (smell) and for flower (touch) are less intuitive. 
# The Inaugural Address Corpus

In this repo, I’ve prepared a custom version of the Inaugural Address Corpus included with the NLTK. It just represents the inaugural addresses of most of the US presidents from the 20th and 21st centuries. Let’s compare them using document vectors!

First let’s generate parallel lists of documents, labels, and other metadata:

inauguralFilenames = sorted(glob('inaugural/*'))
inauguralLabels = [fn[10:-4] for fn in inauguralFilenames]
inauguralDates = [int(label[:4]) for label in inauguralLabels]
parties = 'rrrbbrrrbbbbbrrbbrrbrrrbbrrbr' # I did this manually. There are probably errors.
inauguralRaw = [open(f, errors="ignore").read() for f in inauguralFilenames]

# Sanity check: peek
for i in range(4):
    print(inauguralLabels[i][:30], inauguralDates[i], inauguralRaw[i][:30])

1901-McKinley 1901 My fellow-citizens, when we as
1905-Roosevelt 1905 My fellow citizens, no people
1909-Taft 1909 My fellow citizens: Anyone who
1913-Wilson 1913 There has been a change of gov

Process them and compute the vectors:

inauguralDocs = [nlp(text) for text in inauguralRaw]
inauguralVecs = [doc.vector for doc in inauguralDocs]

Now compute a similarity matrix for them. Check the similarity of everything against everything else. There’s probably a more efficient way of doing this, using sparse matrices. If you can improve on this, please send me a pull request!

similarities = []
for doc in inauguralDocs:
    thisSimilarities = [doc.similarity(other) for other in inauguralDocs]
    similarities.append(thisSimilarities)

df = pd.DataFrame(similarities, columns=inauguralLabels, index=inauguralLabels)

Now we can use .idxmax() to compute the most semantically similar addresses.
df[df < 1].idxmax()

1901-McKinley 1925-Coolidge
1905-Roosevelt 1913-Wilson
1909-Taft 1901-McKinley
1913-Wilson 1905-Roosevelt
1917-Wilson 1905-Roosevelt
1921-Harding 1953-Eisenhower
1925-Coolidge 1933-Roosevelt
1929-Hoover 1901-McKinley
1933-Roosevelt 1925-Coolidge
1937-Roosevelt 1933-Roosevelt
1941-Roosevelt 1937-Roosevelt
1945-Roosevelt 1965-Johnson
1949-Truman 1921-Harding
1953-Eisenhower 1957-Eisenhower
1957-Eisenhower 1953-Eisenhower
1961-Kennedy 2009-Obama
1965-Johnson 1969-Nixon
1969-Nixon 1965-Johnson
1973-Nixon 1981-Reagan
1977-Carter 2009-Obama
1981-Reagan 1985-Reagan
1985-Reagan 1981-Reagan
1989-Bush 1965-Johnson
1993-Clinton 2017-Trump
1997-Clinton 1985-Reagan
2001-Bush 1981-Reagan
2005-Bush 1953-Eisenhower
2009-Obama 1981-Reagan
2017-Trump 1993-Clinton
dtype: object

If we reduce the dimensions here using PCA, we can visualize the similarity in 2D:

embedded = PCA(n_components=2).fit_transform(inauguralVecs)
xs, ys = embedded[:,0], embedded[:,1]
for i in range(len(xs)):
    plt.scatter(xs[i], ys[i], c=parties[i], s=inauguralDates[i]-1900)
    plt.annotate(inauguralLabels[i], (xs[i], ys[i]))

# Detective Novels

I’ve prepared a corpus of detective novels, using another notebook in this repository. It contains metadata and full texts of about 10 detective novels. Let’s compute their similarities to certain weapons! It seems the murder took place in the drawing room, with a candlestick, and the murderer was Colonel Mustard!

detectiveJSON = open('detectives.json')
detectivesData = json.load(detectiveJSON)
detectivesData = detectivesData[1:] # Chop off #1, which is actually a duplicate
detectiveTexts = [book['text'] for book in detectivesData]

We might want to truncate these texts, so that we’re comparing the same amount of text throughout.
detectiveLengths = [len(text) for text in detectiveTexts]
detectiveLengths

[351240, 415961, 440629, 611531, 399572, 242949, 648486, 350142, 288955]

detectiveTextsTruncated = [t[:min(detectiveLengths)] for t in detectiveTexts]
detectiveDocs = [nlp(book) for book in detectiveTextsTruncated] # This should take a while

extraWords = "gun knife snake diamond".split()
extraDocs = [nlp(word) for word in extraWords]
extraVecs = [doc.vector for doc in extraDocs]
detectiveVecs = [doc.vector for doc in detectiveDocs]
detectiveLabels = [doc['author'].split(',')[0] + '-' + doc['title'][:20] for doc in detectivesData]
detectiveLabels

['Collins-The Haunted Hotel: A', 'Rohmer-The Insidious Dr. Fu', 'Chesterton-The Innocence of Fat', 'Doyle-The Return of Sherlo', 'Chesterton-The Wisdom of Father', 'Doyle-A Study in Scarlet', "Gaboriau-The Count's Millions", "Rinehart-Where There's a Will", "Michelson-In the Bishop's Carr"]

pcaOut = PCA(n_components=10).fit_transform(detectiveVecs + extraVecs)
tsneOut = TSNE(n_components=2).fit_transform(pcaOut)
xs, ys = tsneOut[:,0], tsneOut[:,1]
for i in range(len(xs)):
    plt.scatter(xs[i], ys[i])
    plt.annotate((detectiveLabels + extraWords)[i], (xs[i], ys[i]))

If you read the summaries of some of these novels on Wikipedia, this isn’t terrible. To check, let’s just see how often these words occur in the novels.

# Sanity check
counts = {label: {w: 0 for w in extraWords} for label in detectiveLabels}
for i, doc in enumerate(detectiveDocs):
    for w in doc:
        if w.lemma_ in extraWords:
            counts[detectiveLabels[i]][w.lemma_] += 1

pd.DataFrame(counts).T.plot(kind='bar')
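A postscript on the everything-against-everything loop used for the inaugural addresses above: short of sparse matrices, one easy speed-up is to vectorise the whole comparison with plain NumPy. This is a sketch; `vecs` stands in for any list of document vectors (assumed non-zero):

```python
import numpy as np

def similarity_matrix(vecs):
    """Cosine similarity of every vector against every other, in one shot."""
    M = np.asarray(vecs, dtype=float)
    unit = M / np.linalg.norm(M, axis=1, keepdims=True)  # L2-normalise each row
    return unit @ unit.T                                 # all pairwise dot products
```

Something like pd.DataFrame(similarity_matrix(inauguralVecs), columns=inauguralLabels, index=inauguralLabels) should then rebuild df without the Python-level loop.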
# equivalence classes

• January 20th 2010, 09:25 AM
1234567

equivalence classes

Hi, i can show that the relation in the problem below is an equivalence relation, no problem, but i am finding it difficult to describe its equivalence classes.

Attachment 14917

• January 20th 2010, 09:30 AM
Plato

The post says "whenever $|A|-|B|$." What does that mean?

• January 20th 2010, 09:33 AM
Jhevon

Quote: Originally Posted by Plato
The post says "whenever $|A|-|B|$." What does that mean?

It says $A \sim B$ whenever $|A| = |B|$

• January 20th 2010, 09:41 AM
Plato

Quote: Originally Posted by Jhevon
It says $A \sim B$ whenever $|A| = |B|$

Not in the image that I see.

• January 20th 2010, 09:44 AM
Jhevon

Quote: Originally Posted by Plato
Not in the image that I see.

Well, I don't know what's going on. That's what I see.

• January 20th 2010, 09:47 AM
Plato

Quote: Originally Posted by Jhevon
Well, I don't know what's going on. That's what I see.

I think that is why we ought to insist on the use of LaTeX.

• January 20th 2010, 10:08 AM
Jhevon

Quote: Originally Posted by Plato
I think that is why we ought to insist on the use of LaTeX.

Maybe. Or at least insist that questions are not posted in image files, unless there are accompanying diagrams or something.

Quote: Originally Posted by 1234567
Hi, i can show that the relation in the problem below is an equivalence relation, no problem, but i am finding it difficult to describe its equivalence classes.

Attachment 14917

I don't really see a better way to describe the classes other than to reuse the language of the problem. Something like, For $A \in \mathcal P (\mathbb N),~ [A] = \{ B \in \mathcal P (\mathbb N) ~:~ |A| = |B| \}$

• January 20th 2010, 12:44 PM
Drexel28

Quote: Originally Posted by Jhevon
Maybe. Or at least insist that questions are not posted in image files, unless there are accompanying diagrams or something.

I don't really see a better way to describe the classes other than to reuse the language of the problem.
Something like, For $A \in \mathcal P (\mathbb N),~ [A] = \{ B \in \mathcal P (\mathbb N) ~:~ |A| = |B| \}$

What about saying it a little better. The equivalence class of a subset of the naturals under this relation is the class of all sets such that there exists a bijection between that set and the class representative.

• January 20th 2010, 12:53 PM
Jhevon

Quote: Originally Posted by Drexel28
What about saying it a little better. The equivalence class of a subset of the naturals under this relation is the class of all sets such that there exists a bijection between that set and the class representative.

that is fine i suppose. i wanted to emphasize that we are dealing with sets here. and what you described is what |A| = |B| means by definition. so it's a matter of taste, i think... which i think is also what you're saying.

• January 20th 2010, 12:55 PM
Drexel28

Quote: Originally Posted by Jhevon
that is fine i suppose. i wanted to emphasize that we are dealing with sets here. and what you described is what |A| = |B| means by definition. so it's a matter of taste, i think... which i think is also what you're saying.

It is just a matter of taste haha.

• January 20th 2010, 11:48 PM
Shanks

A ~ B iff A and B have the same cardinality. Therefore, there are countably many equivalence classes:
1-class: the collection of all sets that contain only one element.
2-class: the collection of all sets that contain two elements.
...
n-class: the collection of all sets that contain n elements.
...
infinite-class: the collection of all sets that contain countably infinitely many elements.
A further question: What is P(N)/~? I leave it for you to solve it. Solve it, and you will understand cardinality better.
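To see Shanks's classes concretely, here is a small sketch that groups every subset of a finite set by cardinality (a finite stand-in for P(N), which of course cannot actually be enumerated):

```python
from itertools import combinations

def classes_by_cardinality(universe):
    """Partition the power set of `universe` into classes of equal size |A|."""
    classes = {}
    for k in range(len(universe) + 1):
        classes[k] = [set(c) for c in combinations(sorted(universe), k)]
    return classes

classes = classes_by_cardinality({1, 2, 3, 4})
```

Note the 0-class: the empty set sits in a class by itself. Over P(N), in addition to one class per finite size, all infinite subsets of N fall into a single class, which is a good start on Shanks's closing question.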
# What is the difference between sequence diagrams and collaboration diagrams?

John Moore

Sequence diagrams and collaboration diagrams are essentially semantically equivalent. You can use either to model the dynamic aspects of a system in terms of objects interacting by exchanging messages. The difference is more in how the information is presented than in the underlying semantics of the diagram. Sequence diagrams emphasize the time ordering of messages, whereas collaboration diagrams depict more of an organizational structure and are more space efficient. Many UML tools will automatically convert from one diagram type to the other.

UML probably included both diagrams for historical reasons, since both were in widespread use (although with different names) before the development of UML. In practice, many organizations tend to prefer one over the other, but there doesn’t seem to be a clear favorite overall.
Ann. Geophys., 36, 841–853, 2018
https://doi.org/10.5194/angeo-36-841-2018

Regular paper | 13 Jun 2018

# Statistical analysis of the correlation between the equatorial electrojet and the occurrence of the equatorial ionisation anomaly over the East African sector

Patrick Mungufeni1, John Bosco Habarulema2,3, Yenca Migoya-Orué4, and Edward Jurua1

• 1Mbarara University of Science and Technology, P.O. Box 1410 Mbarara, Uganda
• 2South African National Space Agency (SANSA) Space Science, Hermanus 7200, South Africa
• 3Department of Physics and Electronics, Rhodes University, Grahamstown 6140, South Africa
• 4T/ICT4D laboratory of the Abdus Salam International Center for Theoretical Physics, 34151 Trieste, Italy

Correspondence: Patrick Mungufeni (pmungufeni@gmail.com)

Abstract

This study presents statistical quantification of the correlation between the equatorial electrojet (EEJ) and the occurrence of the equatorial ionisation anomaly (EIA) over the East African sector. The data used were for quiet geomagnetic conditions (Kp ≤ 3) during the period 2011–2013. The horizontal components, H, of geomagnetic fields measured by magnetometers located at Addis Ababa, Ethiopia (dip lat. ∼1° N), and Adigrat, Ethiopia (dip lat. ∼6° N), were used to determine the EEJ using differential techniques. The total electron content (TEC) derived from Global Navigation Satellite System (GNSS) signals using 19 receivers located along the 30–40° E longitude sector was used to determine the EIA strengths over the region. This was done by determining the ratio of TEC over the crest to that over the trough, denoted as the CT : TEC ratio.
This technique necessitated characterisation of the morphology of the EIA over the region. We found that the trough lies slightly south of the magnetic equator (0–4° S). This slight southward shift of the EIA trough might be due to the fact that over the East African region, the general centre of the EEJ is also shifted slightly south of the magnetic equator. For the first time over the East African sector, we determined a threshold daytime EEJ strength of ∼40 nT that is mostly associated with prominent EIA occurrence during a high solar activity period. The study also revealed that there is a positive correlation between daytime EEJ and EIA strengths, with a strong positive correlation occurring during the period 13:00–15:00 LT.

Keywords. Ionosphere (equatorial ionosphere)

1 Introduction

One of the factors that determines the distribution of ambient ion and electron density in the low-latitude F region ionosphere is the vertical E×B drift, which is roughly directly proportional to the equatorial electrojet (EEJ) during daytime. The daytime EEJ is a narrow band of enhanced eastward current flowing in the 100–120 km altitude region within ±3° latitude of the dip equator. The horizontal configuration of the Earth's magnetic field at the dip equator leads to the inhibition of the Hall current. The resulting increase in Cowling conductivity produces the EEJ. The EEJ current reverses and flows in a westward direction in most cases during quiet geomagnetic and low solar activity conditions, as well as during solstice months. This phenomenon is referred to as the counter electrojet (CEJ). The upward E×B drift lifts plasma to higher altitudes, which then diffuses north and south along magnetic field lines. Due to gravity and pressure gradient forces, there is also a downward diffusion of plasma. The net effect is the formation of two belts of high electron density around magnetic latitudes of ±15°.
This phenomenon is known as the equatorial ionisation anomaly (EIA) (Appleton, 1946). The regions with high electron density are referred to as crests of the EIA, while the region over the magnetic equator with low electron density is called the trough of the EIA. The process that leads to the formation of the EIA is sometimes referred to as the fountain effect. The large enhancements in electron densities on either side of the magnetic equator significantly affect radio frequency signals passing through the ionosphere and ground-to-ground high-frequency communication systems. Due to such problems, several studies have been undertaken to understand the influence of the EEJ on the development of the EIA. Most of the studies related to the development of the EIA and EEJ have been done over the American and Indian longitude sectors. There are also some studies that have used global data to study the morphology of the EIA. For instance, one study derived the peak electron density in the F2 layer (NmF2) from the Global Positioning System (GPS) Radio Occultation (RO) observations made by the Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) mission to study the morphology of the EIA statistically during 2006–2014. It found that NmF2 increases more significantly with solar activity in the crest region than in the trough region. The ratio of NmF2 at the crest to that at the trough has one peak at noontime and another around the time of occurrence of the pre-reversal enhancement (PRE) of the zonal electric field. The ratios are smaller during May–August than in other months for all local times (LT). Another study also used global data to investigate the relationship between the EIA and the strength of the EEJ. It found that the correlation coefficients between hourly parameters that define the development of the EIA and the midday EEJ strength tend to maximise between 13:00 and 16:00 LT.
On a seasonal basis, the correlation coefficients tend to minimise around the June solstice. However, a few studies have used data over the African region to examine the occurrence of the EIA during geomagnetically disturbed and quiet conditions. One study used data from quiet geomagnetic and low solar activity conditions in the year 2009 to confirm the roles of the EEJ and the integrated EEJ (IEEJ) in determining the hemispheric extent of the EIA crest over African mid-latitudes and low latitudes. It reported that, in the Southern Hemisphere, EIA crests can be seen at magnetic latitudes ranging from about 17 to 19° S. At the moment, there is no statistical quantification of the correlation between the EEJ strength and the formation/development of the EIA over the African sector. Therefore, this study focused on statistically quantifying the correlation between daytime EEJ strength and the occurrence of the EIA over the East African sector during the high solar activity years of 2011–2013. The method we used necessitated searching for the locations of the trough and crest of the EIA over the region. The data that were used in this study are described in Sect. 2. Figure 1. A map showing data sites. The red and black dots indicate the locations of the magnetometers and the UNAVCO stations, respectively. The dotted and dashed lines represent the magnetic equator and the southern crest of the EIA, respectively. Table 1. Sites of GNSS receivers and magnetometers used in the study. 2 Data and analyses ## 2.1 EEJ data The daytime strength of the EEJ can be determined by calculating the difference between the magnitudes of the horizontal component, H, of Earth's magnetic field measured by magnetometers placed directly on the magnetic equator and displaced 6–9° away from the magnetic equator.
The locations of the magnetometers at Adigrat, Ethiopia (ETHI) (http://magnetometers.bc.edu, last access: 15 September 2017), and Addis Ababa, Ethiopia (AAE) (http://www.intermagnet.org, last access: 15 September 2015), that were used in this study to determine the daytime strengths of the EEJ are shown in Fig. 1 by red dots. The same magnetometers have been used in several previous studies to determine the EEJ over the East African sector. In Fig. 1, the dotted line represents the magnetic equator, while the dashed line represents the southern crest of the EIA. More details about the data sites used in this study are provided in Table 1. The last column of the table shows the usage of a data site, i.e. to determine either the EEJ or total electron content (TEC). Later, in Sect. 2.2, we explain the appropriateness of the high solar activity data during 2011–2013 that were used in this study. In order to cater for the different offset values of different magnetometers, the baseline value HB of each day was subtracted from the values of H. For each day, the values of H measured at a particular station during 23:00–23:59 LT were averaged to give the HB value for that day. The values obtained for a specific station after subtracting HB from H were denoted as HS. To obtain the EEJ (ΔH), the HS values calculated at ETHI were subtracted from the corresponding values calculated at AAE. Most studies on the EEJ report the peak of the diurnal EEJ around 12:00 LT. In line with this information, the daytime EEJ strength for each day in this study was represented by the mean of the EEJ during the period 10:00–13:00 LT, when the peak of the daytime EEJ is expected to occur. ## 2.2 Determination of EIA strength The EIA strengths were calculated using data obtained from Global Navigation Satellite System (GNSS) receivers along the 30–40° E longitude sector of Africa. The stations are represented with black dots in Fig. 1.
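The baseline-and-difference computation of the daytime EEJ strength described in Sect. 2.1 can be sketched as follows. This is a minimal illustration rather than the processing code used in the study; the one-sample-per-minute local time resolution and the array names are assumptions.

```python
import numpy as np

MIN_PER_DAY = 24 * 60  # assumed: one H sample per minute of local time

def baseline(h):
    """Daily baseline HB: mean of H during 23:00-23:59 LT (Sect. 2.1)."""
    return h[23 * 60:24 * 60].mean()

def daytime_eej(h_aae, h_ethi):
    """Daytime EEJ strength: mean of dH = HS(AAE) - HS(ETHI) over 10:00-13:00 LT.

    h_aae, h_ethi: one day of the horizontal component H (nT) at the
    on-equator (AAE) and off-equator (ETHI) magnetometers.
    """
    hs_aae = h_aae - baseline(h_aae)     # remove the station offset at AAE
    hs_ethi = h_ethi - baseline(h_ethi)  # remove the station offset at ETHI
    dh = hs_aae - hs_ethi                # EEJ signature, dH
    return dh[10 * 60:13 * 60].mean()    # mean around the ~12:00 LT peak
```

For a day on which AAE records an 80 nT daytime enhancement relative to ETHI during 10:00–13:00 LT, `daytime_eej` returns 80 nT.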
The latitude range of the stations considered was mainly restricted to the south of the dip equator. However, a few stations slightly north of the dip equator were considered to allow us to locate the trough of the EIA over the region. The Receiver INdependent EXchange (RINEX) data files of the receivers were obtained from the University NAVstar COnsortium (UNAVCO) website (ftp://data-out.unavco.org/pub/rinex/, last access: 15 September 2017). Data of geomagnetically quiet days (Kp ≤ 3) were considered. The development of the EIA during disturbed conditions could be examined in a separate study since it involves additional mechanisms such as the prompt penetration of magnetospheric electric fields and the disturbance dynamo electric fields. Table 2 shows the number of geomagnetically quiet days when the magnetometers at ETHI and AAE and seven of the GNSS receivers (ADIS, ARMI, MOIU, RCMN, EBBE, MAL2 and DODM) listed in Table 1 were simultaneously operational. Dashes in the table depict unavailability of data. Considering the fact that the formation of the EIA depends on solar activity, we grouped the data into low (2008 and 2010) and high (2011–2013) solar activity periods. This grouping might minimise the effect of solar activity on the correlation between the EIA and EEJ. For statistical analysis, the amount of data for the low solar activity period shown in Table 2 is not sufficient. Therefore, in this study, we used data of the high solar activity years of 2011–2013. Table 2. Number of days in which the magnetometers and some GNSS receivers used were simultaneously operational. The RINEX files were processed using GPS-TEC analysis application software to obtain the daily vertical TEC (VTEC) data over a station with 30 s resolution. In this study, to minimise multi-path effects, we used data of satellites with elevation angles greater than 25°. The daily VTEC data were analysed in two ways.
In the first analysis, we computed the monthly mean TEC over a station as described in the following procedure. The daily VTEC data for all the days within the study period were binned according to months. Therefore, 12 monthly bins were formed from the data during the period 2011–2013. The monthly bins were further binned according to LT. The mean values of the LT bins were determined to yield the monthly mean TEC with 30 s resolution. In the second analysis, we computed the EIA strengths. Various studies have represented EIA strength in many ways, including (i) computing the difference between the TEC measured at the crest and that at the trough (Sastri, 1982), (ii) determining the normalised difference between the TEC measured at the crest and that measured at the trough (Sastri, 1982), (iii) simply using the peak of the TEC measured at the crest and (iv) determining the ratio of the TEC measured at the crest to that measured at the trough, referred to as the CT : TEC ratio. Unlike methods (i) and (ii), which might produce both negative and positive EIA strengths, the last two methods only yield positive values. For the convenience of only working with positive values, method (iv) was used to determine the EIA strength in this study. The advantage of the CT : TEC ratio over methods (i) and (iii) is that it provides a relative variability of the EIA, which usually represents the variability of a physical phenomenon well. Figure 2. Contour plots of monthly mean TEC as a function of latitude and LT. (a–j) are for November and January–September. 3 Results and discussions ## 3.1 Occurrence of the EIA We illustrate the occurrence of the EIA over the region by contour plots of monthly mean TEC as a function of magnetic latitude and LT. The plotted monthly mean TEC values were obtained from the 19 GNSS receiver stations listed in Table 1. The contour plots helped in determining the location of the trough and the region of the crest over the East African sector.
Figure 2a–j are for the months of November and January–September. The panels for the months of October and December are missing because these months do not have data over several stations. The data gaps would limit observation of the EIA features over the region. In Fig. 2, the colour bar ranges from blue (low TEC) to red (high TEC). The white spaces within a panel indicate missing data. In Fig. 2, the EIA appears to start forming at about 09:00 LT, existing up to 20:00 LT. Although the data did not cover the low-latitude regions sufficiently, an early study showed that the noontime critical frequency in the F2 layer appears to increase with higher values of dip latitude. Further, it has been stated that the latitudinal variation of TEC in the previous studies indicates the existence of an equatorial anomaly in TEC, in its latitudinal variation, similar to the one in the F2 region critical frequency. Therefore, the observation of the occurrence of the EIA during 09:00–20:00 LT over the East African region is in line with these stated observations. In the equinox months (February–April, August, September) and November, clear occurrence of the EIA exists beyond 20:00 LT, lasting till 24:00 LT (see Fig. 2). Various authors have found different time delays between intense EEJ strength and the occurrence of the prominent EIA; for instance, approximate time delays of 2–3 and 4 h have been reported. Since the peak of the EEJ occurs at around 12:00 LT, the cases of prominent EIA about 8 h later (during 20:00–24:00 LT) might not be due to increased zonal electric fields associated with increased EEJ. Other factors such as the PRE just before this period might be a probable cause. Moreover, it has been stated that although the formation of the EIA is primarily due to the fountain effect, many intricate features of the EIA morphology can be influenced by neutral meridional winds at F region altitudes.
Probably due to the prolonged fountain effect, the EIA observed during 20:00–24:00 LT has a wider trough compared to that observed during 09:00–20:00 LT. From this point onwards, we shall not discuss the cases of the EIA during 20:00–24:00 LT, since the focus of this study, as stated before, is the statistical analysis of the correlation between daytime EEJ strength and the occurrence of the daytime EIA. Most of the panels in Fig. 2, except those of November and April, indicate that the trough of the EIA exists between 0 and 4° S, covering the locations of ARMI, NAZR and NEGE. This slight southward shift of the trough appears to be in line with the fact that both the EEJ and CEJ exhibit large latitudinal excursions (exceeding 1° mag. lat.) on different days as well as at different hours of the same day. The normal EEJ over the East African region is generally centred over the magnetic latitude range of 0–0.5° S. Based on this information, we tentatively suggest that over the East African region, E×B drifts over the magnetic latitude range of 0–4° S are the strongest compared to other latitudes. This implies that over the region, the location where the fountain effect is triggered lies slightly south of the magnetic equator. In Fig. 2, the southern crest appears to exist from 4 to 19° S, covering the locations of MOIU, EBBE, RCMN, MAL2, ARSH, DODM and TUKC. Among the stations at the trough, ARMI appeared to have more data. Therefore, in our calculation of EIA strengths, the VTEC over ARMI was considered as that of the trough, while the VTEC over the other stations with latitudes ranging from 8 to 19.5° S was considered as the VTEC data at the crest. Our results during the high solar activity period of 2011–2013 differ in some aspects from the study done during the low solar activity year of 2009. For instance, that study revealed that the daytime EIA occurrence rarely exceeds 18:00 LT and there was practically no occurrence of the EIA past this time.
The highest strength of TEC at the southern crest depicted by that study was about 50 TECU, while our results demonstrated approximately 80 TECU. Whereas the inner edge of the southern crest they established was slightly further from the magnetic equator (∼17° S), our results indicated the same close to the magnetic equator (4° S). The increased electron density close to the magnetic equator we observed might be due to ionisation resulting from the location of the Sun close to the zenith above the southern crest. Otherwise, during high solar activity conditions, the EEJ values are expected to increase. This increase originates from the enhanced zonal electric field, which results in an increased EIA as plasma is transported far from the magnetic equator. Figure 3. Panels (a), (b), (c) and (d) present the global distribution of electron density derived from the IRI model at 05:00, 11:00, 17:00 and 23:00 UT, respectively, on 2 September 2013. The black solid and dotted lines indicate the locations of the magnetic equator and the crests of the EIA. The red box indicates the longitude sector of the current study region. Another important feature worth mentioning in our Fig. 2a, b, d and e is the occurrence of a second southern EIA crest spanning the magnetic latitudes ∼28–40° S. Figures 4 and 5 of an earlier study also depict these features over the southern low- and mid-latitude regions of East Africa. However, the emphasis there was on the same feature that was observed north of the magnetic equator. It was suggested that these scenarios could result from inconsistent transportation of plasma to higher latitudes. In the next section, we compare the worldwide morphology of the EIA using the IRI model. ## 3.2 EIA morphology depicted by the International Reference Ionosphere (IRI) model For the international standard specification of ionospheric parameters, the Committee On Space Research (COSPAR) and the International Union of Radio Science (URSI) recommend the IRI model.
The model is primarily developed using data sources such as (i) the worldwide network of ionosondes and incoherent scatter radars, (ii) the ISIS and Alouette topside sounders and (iii) in situ instruments flown on satellites and rockets (http://irimodel.org/, last access: 1 May 2018). However, theoretical considerations have been used in bridging data gaps and for internal consistency checks (Bilitza, 2001). In order to verify our observations of a southward displacement of the EIA trough, we used global snapshots of the EIA morphology depicted by the IRI 2012 model at an altitude of 100 km. Figure 3a, b, c and d present the distribution of electron density as a function of longitude and latitude at 05:00, 11:00, 17:00 and 23:00 UT, respectively, on 2 September 2013. The selection of this date aimed at identifying a year and season when high chances of EIA occurrence exist. This particular date was geomagnetically quiet, since the study only analysed data during such conditions. It is important to note that we specified the date and geographic coordinates, while the rest of the input parameters required by the model were provided by the default option in the model. In Fig. 3, the red box indicates the longitude range 20–60° E, where our region of study lies. The black solid and dashed lines indicate the location of the magnetic equator and the nominal location of the EIA crests, respectively. The colour bar ranges from blue to yellow, indicating low and high electron densities, respectively. Figure 4. Panel (a) shows the variation of the hourly EEJ as a function of LT during days in monthly bins. Corresponding CT : TEC ratios over DODM, MAL2 and MOIU are shown in (b–d), respectively. Arrows pointing right and left indicate some cases of the prominent EIA during the strong daytime EEJ and the background-level EIA during the weak daytime EEJ, respectively. Within the red box in Fig. 3b, it can be seen that the trough is not symmetrical about the magnetic equator.
This seems to support our observation of the EIA trough over the East African region being displaced slightly southward. The location of the EIA trough is symmetrical over longitudes around 100° E (Fig. 3a) and over the range from −80 to −60° (Fig. 3c and d), while it is slightly displaced southward over the longitude ranges from −20 to 20° (Fig. 3c) and from 20 to 60° (Fig. 3b). Figure 3a appears to show that, over India, at a longitude of ∼80° E, the EIA trough centre lies south of the magnetic equator. This is in line with the result that, over the Indian region, the dip latitude of the centre of the EEJ is 0.19° S. More time is needed to check over other longitude sectors whether the alignment of the EIA trough with respect to the magnetic equator is similar to that of the EEJ. Otherwise, based on the cases observed over Africa and India, we suggest that the location of the EIA trough over a particular longitude depends on the alignment of the centre of the EEJ with respect to the magnetic equator. Next, we present the fairly long-term trend of the occurrence of the EIA simultaneously with the corresponding trend of the EEJ over East Africa. This allowed us to clearly visualise the effect of the EEJ on the occurrence of the EIA. ## 3.3 Simultaneous observations of EEJ and EIA strengths The hourly EIA strengths were used to constitute the daily EIA strengths. The maximum CT : TEC ratio in a 1 h interval could be used to represent the EIA strength in that interval. Since such values are prone to errors, the upper quartile, which is close to the maximum value, was used for such representation instead. The development of the EIA exhibits a diurnal pattern that is dependent on the phase of the solar activity cycle. At the solar maximum, though the formation of the crests takes place around 09:00 LT, the crests continue to develop and move polewards throughout the day till around 20:00 LT.
Based on this idea and the fact that the EEJ is a daytime phenomenon, the local time intervals considered to determine the hourly EIA strengths (upper quartile of the CT : TEC ratios in a 1 h interval) in a day in this study ranged from 09:00 to 18:00 LT. The daily EIA strengths for the entire study period (2011–2013) were binned according to months, yielding 12 monthly bins. In a similar way, hourly EEJ strengths were also computed to constitute daily values, which were then binned based on months. Figure 4 presents the daily EEJ strengths (panel a) and the corresponding EIA strengths over DODM (panel b), MAL2 (panel c) and MOIU (panel d). The horizontal dotted lines separate the data of the various monthly bins, which are labelled on the vertical axis. The colour bar between panels (a) and (b) ranges from blue (low EEJ) to red (high EEJ). In the case of the colour bar to the right of panel (d), blue denotes a low CT : TEC ratio, while red denotes a high CT : TEC ratio. For the sake of illustration and to make sure that the southern crest is well covered, the three stations (DODM, MAL2 and MOIU) are separated by about 4° of magnetic latitude and are well distributed over the southern crest. Most of the days in the monthly bins appear to show occurrence of the EIA (CT : TEC ratio > 1). This may be due to the existence of a daily daytime eastward electric field due to the global E region dynamo driven by tidal winds. The resulting ever-present upward E×B drift lifts plasma to higher altitudes, which then diffuses north and south along magnetic field lines as well as downwards, resulting in the EIA observed over the stations. As discussed in Sect. 3.1, there are many other factors that may disturb this mechanism of EIA formation, which in turn limits observation of the EIA on some days. It can be deduced from Fig. 4b–d that a CT : TEC ratio > 1 signified prominent occurrence of the EIA (the TEC over the crest exceeds that over the trough).
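The EIA-strength measure used here — the crest-to-trough CT : TEC ratio of Sect. 2.2, with each hourly value taken as the upper quartile of the 30 s ratios in a 1 h window — can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the input names are assumptions.

```python
import numpy as np

def ct_tec_ratio(tec_crest, tec_trough):
    """Element-wise crest/trough VTEC ratio (method (iv) of Sect. 2.2).

    Both inputs are same-length VTEC samples (TECU) at common time stamps.
    """
    return np.asarray(tec_crest, float) / np.asarray(tec_trough, float)

def hourly_eia_strength(ratios_in_window):
    """Hourly EIA strength: upper quartile (75th percentile) of the CT:TEC
    ratios in one 1 h window, used instead of the error-prone maximum."""
    return float(np.percentile(ratios_in_window, 75))
```

Values greater than 1 indicate that the crest TEC exceeds the trough TEC; the upper quartile tracks the near-maximum of the window while being less sensitive to outliers.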
By visual inspection of Fig. 4, it is difficult to relate the variations in the occurrence of the EIA at background levels (CT : TEC ratio < 1) over the stations to those of the daytime EEJ strength. However, the conspicuous cases when a high EEJ strength (≥ 50 nT) occurs simultaneously with the prominent EIA are clearly visible over MAL2 and DODM. Some of the cases are marked in Fig. 4 with arrows pointing to the right. The arrows pointing to the left in Fig. 4 depict cases when, throughout the day, low values of EEJ (< 50 nT) and EIA strengths (CT : TEC ratios < 1) are measured simultaneously. Figure 5. A zoom-in of Fig. 4 for the months of February and December. Figure 5 is a zoom-in of Fig. 4 for the months of February and December. In the figure, the vertical numbers indicate the number of days in the monthly bin. The arrows pointing left in December and February in Fig. 4 correspond to day 1 of the December bin and day 34 of the February bin in Fig. 5, respectively. The arrow pointing right in February in Fig. 4 corresponds to day 11 of February in Fig. 5. These three examples clearly show that high EEJ strengths occur simultaneously with the prominent EIA. This observation is similar to earlier results that showed experimental evidence of how EEJ strength, which is a proxy of the E×B drift, mostly controls plasma transportation over low latitudes. Usually, low values of E×B drifts result in poor formation of the EIA. Three other general observations can be made from Fig. 4. (i) The occurrence of the prominent EIA during daytime was fairly common in the equinox seasons (February, March, April, September and October) compared to the solstice seasons (May, June and July). Similar observations have been made along 120° E longitude, where the EIA strength showed a semi-annual variation, with maximum peak values occurring in the equinoctial months.
Moreover, data measured over India during the low solar activity year of 1975 showed that the EEJ effects on TEC and NmF2 due to the associated zonal electric fields are much more pronounced in the equinoxes than in winter and summer. (ii) The EEJ and EIA strengths are weaker in the June solstice compared to other seasons. This is consistent with previous results. (iii) The highest values of EEJ and EIA strengths appear to occur approximately during 11:00–13:00 and 13:00–18:00 LT, respectively. In the next subsection, we examine the correlation between the hourly EIA strengths and the daytime EEJ strength. Figure 6. The LT variation of the correlation coefficients between hourly EIA strength and daytime EEJ strength in MEQX (blue), JSLT (green), SEQX (yellow) and DSLT (black). Panels (a–e) are for the correlation coefficients over MOIU, EBBE, RCMN, MAL2 and DODM, respectively. ## 3.4 The correlation between daytime EEJ and EIA strengths In order to determine the correlation between daytime EEJ and EIA strengths, the daily EIA strengths (data similar to those plotted in panels (b)–(d) of Fig. 4) were first binned into the March equinox (March and April), the June solstice (June and July), the September equinox (September and October) and the December solstice (December and January). These seasons were denoted as MEQX, JSLT, SEQX and DSLT, respectively. The seasonal bins were further binned according to local time, with a window size of 1 h. The values of the EIA strengths at every local time bin were then correlated with the values of the daytime EEJ strength (mean of the EEJ during the period 10:00–13:00 LT) of the corresponding days. Figure 6 presents the variation of the correlation coefficients as a function of LT in MEQX (blue), JSLT (green), SEQX (yellow) and DSLT (black). Panels (a)–(e) are for the coefficients that were determined over MOIU, EBBE, RCMN, MAL2 and DODM, respectively.
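The binning-and-correlation procedure of Sect. 3.4, together with the afternoon-CEJ day filter applied before it, can be sketched as follows. This is a schematic illustration with assumed array inputs, not the code used to produce Fig. 6.

```python
import numpy as np

def is_afternoon_cej_day(hourly_eej_14_to_18):
    """True if the hourly EEJ stays negative for >= 2 consecutive hours
    during 14:00-18:00 LT (the afternoon-CEJ criterion of Sect. 3.4)."""
    run = 0
    for value in hourly_eej_14_to_18:
        run = run + 1 if value < 0 else 0  # count consecutive negative hours
        if run >= 2:
            return True
    return False

def pearson_r(eia_strengths, daytime_eej):
    """Pearson correlation between the EIA strengths of one LT bin and the
    daytime EEJ strengths of the corresponding days."""
    x = np.asarray(eia_strengths, float)
    y = np.asarray(daytime_eej, float)
    return float(np.corrcoef(x, y)[0, 1])
```

For each season and each 1 h LT bin, `pearson_r` would be evaluated once, yielding one curve per season as in Fig. 6.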
These stations lie within the first southern EIA crest, and they are among the seven GNSS stations which were used to construct Table 2. Correlation coefficients for stations such as ARSH and TUKC are not presented because of insufficient data. Regarding the second southern crest seen in Fig. 2, in addition to the stations lying there not having enough data, its occurrence does not seem to be dominant in any of the months. After filtering out the days with the afternoon CEJ and unrealistic values of daytime EEJ and EIA strengths, the number of samples over a particular station remained almost constant at all LT bins for a specific season. The afternoon CEJ days were identified when the EEJ values remained negative consecutively for ≥ 2 h during 14:00–18:00 LT. Although it still needs to be investigated, we assumed that the morning CEJ might not affect EIA development significantly. The numbers of samples used to compute the correlation coefficients that are presented in Fig. 6 are shown in Table 3. Table 3. Number of samples used to compute the correlation coefficients. Most of the r values presented in Fig. 6 are significant (the probability p for the correlations is < 0.05; the p values are not presented here). It may not be meaningful to associate the cases of non-significant r in this study with a low number of samples, since the number of samples in a particular season does not vary with LT, yet both cases of significant and non-significant r values exist during all the seasons. However, we noted that the cases of non-significant r values mostly occur when the occurrence of the EIA does not seem to be influenced by the strength of the EEJ (r≤0.3, which occurs mostly during the periods < 11:00 and > 16:00 LT). Overall, Fig. 6 shows that the r values appear to be positive, increasing from 09:00 up to 13:00 LT, when the peak occurred, and then decreasing gently till 18:00 LT. The average r values during 13:00–15:00 LT indicate strong positive correlations (r≥0.5) over East Africa.
A similar result has been reported for the American sector: in October of the high solar activity year of 1958, the correlation coefficients between the EIA and EEJ parameters maximised during 13:00–15:00 LT. The unique feature presented by Fig. 6 is the strong positive correlations in SEQX over MAL2 and DODM that remained till 18:00 LT. This point needs further investigation. However, it can be noted that these two stations are far from the magnetic equator compared to the remaining three. Moreover, they are closer to the nominal southern crest of the EIA at 15° S. Based on the general trend, the occurrence of strong positive correlations during 13:00–15:00 LT between the EIA and the daytime EEJ strength is consistent with the idea that the EIA maximises a few (2–4) hours after the time of the intensified cause. In this case, the cause might be the increased zonal electric field manifested in the EEJ, which appeared to peak during 11:00–13:00 LT (see Fig. 4). There is no clear trend in the r values related to the seasonal and latitudinal variations. There might be an average value of daytime EEJ strength above which the chances of a prominent zonal electric field and EIA occurrence are high. In the next subsection, we illustrate how such a value can be determined. Figure 7. The distributions of EEJ strengths associated with the prominent EIA at 14:00 LT over (a) MOIU, (b) EBBE, (c) RCMN, (d) MAL2, and (e) DODM. Table 4. Percentage of prominent EIA corresponding to the threshold EEJ strength. ## 3.5 A threshold EEJ strength As deduced from Fig. 4, cases of the daytime prominent EIA might be associated with an EEJ ≥ 50 nT. The approximate values for each station considered in this study were determined by examining most of the data for the entire study period.
We extracted all the values of daytime EEJ strength that corresponded to hourly prominent EIA strengths over MOIU, EBBE, RCMN, MAL2 and DODM (i) during the equinox and DSLT seasons (when the chances of prominent EIA were high) and (ii) at 14:00 LT (when the chances of r>0.5 are high). As shown later in Table 4, similar results would be obtained at 13:00 or 15:00 LT. Figure 7a–e present the frequency distribution of the various ranges of EEJ strengths over MOIU, EBBE, RCMN, MAL2 and DODM, respectively. On the right of each panel, the total number (Tot), mean and standard deviation (SD) of the EEJ values are indicated. It can be observed from the panels that the mean EEJ at which the prominent EIA occurred ranged from 44.2 to 60.9 nT (overall mean 55.1 nT), while the SD ranged from 9.9 to 19.2 nT (overall SD 14.4 nT). Therefore, over the East African sector, the EIA might occur prominently during 13:00–15:00 LT when measurements of the EEJ ≥ 40.7 nT (overall mean EEJ − overall SD) are made. The suitability of the EEJ threshold value (40.7 nT) for predicting the occurrence of the EIA was ascertained. This was again done for MEQX, SEQX and DSLT, when high chances of prominent EIA occurrence were expected. The number of observed EIA occurrences (CT : TEC ratio > 1) during days with EEJ strength ≥ 40.7 nT was determined for each season and station. These were denoted as NoPromEIA. The total number of observed EIA strengths (including both a CT : TEC ratio ≤ 1 and a CT : TEC ratio > 1) during days with EEJ strength ≥ 40.7 nT was also determined and denoted as TotalNoEIA. The ratios of NoPromEIA to TotalNoEIA were expressed as percentages. Table 4 presents the percentages that were determined over MOIU, EBBE, RCMN, MAL2 and DODM. In the table, column 1 presents the seasons and the three local times 13:00, 14:00 and 15:00 at which the percentages were determined. At 13:00, 14:00 and 15:00 LT, the fractions of the number of entries with percentages > 80 % were 12∕15, 13∕15 and 12∕15, respectively.
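The threshold construction of Sect. 3.5 (overall mean minus overall SD of the EEJ values associated with the prominent EIA) and the NoPromEIA/TotalNoEIA percentage can be sketched as follows; this is a minimal illustration with assumed inputs, not the authors' code.

```python
import numpy as np

def eej_threshold(eej_with_prominent_eia):
    """Threshold EEJ strength: overall mean minus overall SD (Sect. 3.5).

    With the paper's overall mean of 55.1 nT and SD of 14.4 nT this
    reproduces the 40.7 nT threshold.
    """
    v = np.asarray(eej_with_prominent_eia, float)
    return float(v.mean() - v.std())

def prominent_eia_percentage(ct_tec_ratios, daytime_eej, threshold):
    """Percentage NoPromEIA / TotalNoEIA among days with EEJ >= threshold,
    where prominent EIA means a CT:TEC ratio > 1."""
    ct = np.asarray(ct_tec_ratios, float)
    eej = np.asarray(daytime_eej, float)
    selected = eej >= threshold        # TotalNoEIA: all days above threshold
    if not selected.any():
        return 0.0
    return float(100.0 * (ct[selected] > 1.0).mean())
```

Percentages above 80 % then indicate that the threshold is a good daytime predictor of prominent EIA occurrence.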
Therefore, the percentages at the three local times (13:00, 14:00 and 15:00 LT) were similar. These fractions indeed confirm that the chances of observing prominent EIA occurrence are high when measurements of EEJ strength of at least 40.7 nT are made. This appears to be the first time that a threshold value of the EEJ over the East African region has been determined that can be associated with the zonal electric field, which in turn produces the pronounced EIA. 4 Conclusions We have established the statistics of the correlation between daytime EEJ strength and the occurrence of the EIA over the East African sector. The main results of this study are as follows. (i) The chances of prominent EIA occurrence were high in the equinoctial and December solstice seasons. (ii) Generally, the EIA strengths were weaker in the June solstice compared to other seasons. (iii) The daytime EIA strengths were mostly positively correlated (r>0) with the daytime EEJ strength. In particular, strong positive correlations (r≥0.5) were observed mostly during 13:00–15:00 LT. These first three results are similar to the ones reported in previous studies based on data from other regions. In addition to confirming that the results over the East African region are consistent with those reported over other regions by the previous studies, the next two results appear to be novel. (iv) Over the East African region, the trough of the EIA during high solar activity and quiet geomagnetic conditions lies slightly south (0–4° S) of the magnetic equator. We suggest that the slight southward shift of the EIA trough is consistent with the general centre of the EEJ, which is also shifted slightly south of the magnetic equator. (v) During the equinox and December solstice seasons, and in the local time interval of 13:00–15:00, the probability of observing the EIA on days with a daytime EEJ strength ≥ 40 nT was mostly > 80 %.
It should be noted that these results pertain to a high solar activity period in the ascending phase of Solar Cycle 24. They might change during the seasons of a solar minimum period; a corresponding low solar activity analysis was not done due to the unavailability of sufficient geomagnetic field measurements over the East African region. Data availability. The data used in this study were obtained from ftp://data-out.unavco.org/pub/rinex/, http://swdcwww.kugi.kyoto-u.ac.jp/, http://www.intermagnet.org (last access: 15 September 2015), http://magnetometers.bc.edu (last access: 15 September 2017) and http://spidr.ionosonde.net/spidr/ (last access: 1 May 2018). Competing interests. The authors declare that they have no conflict of interest. Acknowledgements. Patrick Mungufeni is thankful to his scientific coordinator at the Abdus Salam International Centre for Theoretical Physics (ICTP), Sandro Radicella. Through the associateship scheme with ICTP and with the help of Sandro Radicella, Patrick Mungufeni attended many workshops/conferences organised by ICTP in the research field of this manuscript. The knowledge obtained during the workshops and the interaction with other scientists helped in formulating the problem presented in this study. John Bosco Habarulema's contributions were supported by the South African National Research Foundation (NRF) grant 105778. The International Science Programme of Sweden supported the contributions of Edward Jurua. The topical editor, Dalia Buresova, thanks two anonymous referees for help in evaluating this paper. References Abdu, M. A.: The International Equatorial Electro-jet Year, AGU, EOS Transactions, 73, 49–64, 1992. Anderson, D., Anghel, A., Yumoto, K., Ishitsuka, M., and Kudeki, E.: Estimating daytime vertical E×B drift velocities in the equatorial F-region using ground-based magnetometer observations, Geophys. Res. Lett., 29, 37-1–37-4, https://doi.org/10.1029/2001GL014562, 2002.
Anderson, D., Anghel, A., Chau, J., and Veliz, O.: Daytime vertical E×B drift velocities inferred from ground-based magnetometer observations at low latitudes, Space Weather, 2, https://doi.org/10.1029/2004SW000095, 2004.

Appleton, E. V.: Two Anomalies in the Ionosphere, Nature, 691, 1946.

Bilitza, D.: International Reference Ionosphere 2000, Radio Sci., 36, 261–276, 2001.

Bolaji, O., Owolabi, O., Falayi, E., Jimoh, E., Kotoye, A., Odeyemi, O., Rabiu, B., Doherty, P., Yizengaw, E., Yamazaki, Y., Adeniyi, J., Kaka, R., and Onanuga, K.: Observations of equatorial ionization anomaly over Africa and Middle East during a year of deep minimum, Ann. Geophys., 35, 123–132, https://doi.org/10.5194/angeo-35-123-2017, 2017.

Chakraborty, S. K. and Hajra, R.: Electrojet control of ambient ionization near the crest of the equatorial anomaly in the Indian zone, Ann. Geophys., 27, 93–105, https://doi.org/10.5194/angeo-27-93-2009, 2009.

Chapman, S.: The equatorial electro-jet as detected from the abnormal electric current distribution about Huancayo, Peru and elsewhere, Arch. Meteor. Geophy. A, 4, 368–390, 1951.

Gouin, P.: Reversal of the magnetic daily variation at Addis Ababa, Nature, 193, 1145–1146, 1962.

Gouin, P. and Mayaud, P. N.: A propos de l'existence possible d'un “contre-electrojet” aux latitudes magnetiques equatoriales, Ann. Geophys., 23, 41–47, 1967.

Hajra, R., Chakraborty, S. K., and Paul, A.: Electro-dynamical control of the ambient ionization near the equatorial anomaly crest in the Indian zone during counter electrojet days, Radio Sci., 44, RS3009, https://doi.org/10.1029/2008RS003904, 2009.

Kane, R. P. and Rastogi, R. G.: Some Characteristics of the Equatorial Electrojet in Ethiopia (East Africa), Indian J. Radio Space, 6, 85–101, 1977.

Kane, R. P. and Trivedi, N. B.: Are the equatorial electrojet and counterelectrojet centered invariably on the dip equator, J. Atmos. Terr. Phys., 44, 301–304, 1982.
Mungufeni, P., Habarulema, J. B., and Jurua, E.: Modeling of Ionospheric Irregularities during Geomagnetically Disturbed Conditions over African Low Latitude Region, Space Weather, 710–723, https://doi.org/10.1002/2016SW001446, 2016.

Olwendo, O. J., Yosuke, Y., Pierre, C., Baki, P., Ngwira, C. M., and Mito, C.: A study on the response of the Equatorial Ionization Anomaly over the East Africa sector during the geomagnetic storm of November 13, 2012, Adv. Space Res., 55, 2863–2872, https://doi.org/10.1016/j.asr.2015.03.011, 2015.

Rabiu, A. B., Onwumechili, C. A., Nagarajan, N., and Yumoto, K.: Characteristics of equatorial electrojet over India determined from a thick current shell model, J. Atmos. Sol.-Terr. Phy., 92, 105–115, 2012.

Rastogi, R. G.: Westward Equatorial Electro-jet During Daytime Hours, J. Geophys. Res., 79, 1503–1512, 1974.

Reddy, C. A.: The Equatorial Electro-jet, PAGEOPH, 131, 485–508, 1989.

Rodriguez-Zuluaga, J., Radicella, S. M., Nava, B., Amory-Mazaudier, C., Mora-Páez, H., and Alazo-Cuartas, K.: Distinct responses of the low-latitude ionosphere to CME and HSSWS: The role of the IMF Bz oscillation frequency, J. Geophys. Res.-Space Phys., 121, 11528–11548, 2016.

Rush, C. M. and Richmond, A. D.: The relationship between the structure of the equatorial anomaly and the strength of the equatorial electrojet, J. Atmos. Terr. Phys., 35, 1171–1180, 1973.

Rush, C. M., Rush, S. V., Lyons, L. R., and Venkateswaran, S. V.: Equatorial anomaly during a period of declining solar activity, Radio Sci., 4, 829–841, 1969.

Sastri, J. H.: Post-Sunset Behaviour of the Equatorial Anomaly in the Indian Sector, Indian J. Radio Space, 11, 33–37, 1982.

Sastri, J. H.: Equatorial anomaly in F-region – A review, Indian J. Radio Space, 19, 225–240, 1990.

Seemala, G.
and Valladares, C.: Statistics of total electron content depletions observed over the South American continent for the year 2008, Radio Sci., 46, RS5019, https://doi.org/10.1029/2011RS004722, 2011.

Sethia, G., Rastogi, R. G., Deshpande, M. R., and Chandra, H.: Equatorial Electrojet Control of the Low Latitude Ionosphere, J. Geomagn. Geoelectr., 32, 207–216, 1980.

Subhadra Devi, P. K. and Unnikrishnan, K.: Study of daytime vertical E×B drift velocities inferred from ground-based magnetometer observations of ΔH, at low latitudes under geomagnetically disturbed conditions, Adv. Space Res., 53, 752–762, 2014.

Venkatesh, K., Fagundes, P. R., Prasad, D. S. V. V. D., Denardini, C. M., de Abreu, A. J., de Jesus, R., and Gende, M.: Day-to-day variability of equatorial electro-jet and its role on the day-to-day characteristics of the equatorial ionization anomaly over the Indian and Brazilian sectors, J. Geophys. Res.-Space Phys., 120, 9117–9131, https://doi.org/10.1002/2015JA021307, 2015.

Yizengaw, E., Moldwin, M. B., Mebrahtu, A., Damtie, B., Zesta, E., Valladares, C. E., and Doherty, P.: Comparison of storm time equatorial ionospheric electrodynamics in the African and American sectors, J. Atmos. Sol.-Terr. Phy., 73, 156–163, 2010.

Yizengaw, E., Moldwin, M. B., Zesta, E., Biouele, C. M., Damtie, B., Mebrahtu, A., Rabiu, B., Valladares, C. F., and Stoneback, R.: The longitudinal variability of equatorial electrojet and vertical drift velocity in the African and American sectors, Ann. Geophys., 32, 231–238, https://doi.org/10.5194/angeo-32-231-2014, 2014.

Yue, X., Schreiner, W., Kuo, Y., and Lei, J.: Ionosphere equatorial ionization anomaly observed by GPS radio occultations during 2006–2014, J. Atmos. Sol.-Terr. Phy., 129, 30–40, 2015.

Zhang, M.-L., Wan, W., Liu, L., and Ning, B.: Variability study of the crest-to-trough TEC ratio of the equatorial ionization anomaly around 120° E longitude, Adv.
Space Res., 43, 1762–1769, 2009.
https://design.tutsplus.com/tutorials/how-to-create-an-academy-icon-from-simple-shapes--vector-3177
# How To Create An Academy Icon From Simple Shapes

Difficulty: Intermediate · Length: Medium

In this tutorial you will learn how to construct a cool academy icon by putting together simple shapes in Illustrator and then applying layer effects on them in Photoshop. I'm using German versions of both Illustrator and Photoshop, so some screenshots are in German. You should be able to follow along fine, though, as I provide detailed instructions and numerous sample images. Let's get started!

### Final Image Preview

Below is the final image we will be working towards. Want access to the full source files and downloadable copies of every tutorial, including this one? Join Vector Plus for just $9 a month.

#### Tutorial Details

- Program: Illustrator CS3 and Photoshop CS3
- Difficulty: Intermediate
- Estimated Completion Time: 60 minutes

### Step 1

Open up a new Illustrator document and switch to Outline mode (Command + Y). Begin by creating four rectangles of the same size using the Rectangle Tool (M). These will be the columns of our academy.

### Step 2

Now add three more rectangles to each column. These will be the fluting of the column. Make the inner rectangle slightly bigger to give a little perspective.

### Step 3

Create rectangles at the top and the bottom of each column. These will be the capital and the base of the column.

### Step 4

Create three rectangles below the columns. These will be the stairway of our academy.

### Step 5

Create a rectangle above the columns. This is the architrave of the academy.

### Step 6

Create three small rectangles on each side of the architrave.

### Step 7

Use the Pen Tool to create a triangular shape on top of the architrave. This is the roof.

### Step 8

Create a smaller triangular shape within the roof.

### Step 9

Place text in the center of the architrave. Choose a font you like; I chose Avenir Heavy.
### Step 10

Create a pen icon with the Pen Tool and place it in the middle of our roof. You may want to draw one half, then flip a copy by going to Transform > Reflect > Vertical and hitting Copy. Then line up the two halves and use the Pathfinder tools to merge them.

### Step 11

There are several ways to create a sunburst effect in Illustrator. I usually do it this way: create a small circle and give it a dashed stroke with a really big Weight (250 pt in this example). Now set the dash to a value that suits you (this determines how many beams your sunburst will have); I set it to 1.3 pt. Set the gap value if you want to; I didn't. Make sure to set Align Stroke to Outside.

### Step 12

Expand Appearance and place the shape beneath the pen shape.

### Step 13

Duplicate the smaller triangular shape. Crop it with the sunburst shape via the Pathfinder.

### Step 14

Clean the sunburst shape up a little so that nothing of it is visible inside the pen shape.

### Step 15

Create a rectangle behind the columns. This is the background of the academy. Then create a rectangle that matches the dimensions of your document. This is the image background.

### Step 16

We're now done with the Illustrator part of this tutorial, and will export our academy into a PSD file. However, there are certain things to take care of when exporting to PSD. Illustrator merges all paths in a group into one single layer, making it impossible to edit them separately. To avoid that, we need to place every path in a group of its own. It is very helpful to then name the groups correctly; it makes it much easier to work with them in Photoshop. After grouping and naming everything correctly, your Layers window should look something like that shown below.

### Step 17

Select all paths and give them a white fill with no stroke; colors will be applied later in Photoshop. Then go to File > Export and export to PSD using the following settings.

### Step 18

Open the exported file in Photoshop.
What you see will be a completely white image. Luckily we have named everything properly, so we can begin applying layer effects. Start by giving the background layer a Color Overlay of 80% gray. Give the academy background layer a Color Overlay of 40% gray and a Gradient Overlay set to Multiply, Opacity 60%, black to white, linear at 90°.

### Step 19

The top step of the stairway has the following layer effects:

- Inner Shadow: Screen, Opacity 75%, Distance 3px, Size 5px, white
- Inner Glow: Multiply, Opacity 25%, Size 3px, black
- Color Overlay: Multiply, Opacity 100%, 80% gray
- Gradient Overlay: Multiply, Opacity 50%, linear 90°, white to black
- Stroke: Size 1px, Position Outside, Multiply, Opacity 25%, black

When applying the Inner Shadow, set the light angle to 90° and check Use Global Light. The steps below have the same effects, but slightly lighter Color Overlays (B: 85 / B: 90).

### Step 20

The column capitals have the following layer effects:

- Drop Shadow: Multiply, Opacity 60%, Distance 2px, Size 5px, black
- Inner Shadow: Screen, Opacity 75%, Distance 2px, Size 4px, white
- Inner Glow: Multiply, Opacity 25%, Size 3px, black
- Color Overlay: Multiply, Opacity 100%, 98% gray
- Gradient Overlay: Multiply, Opacity 40%, reflected 0°, black to white
- Stroke: Size 1px, Position Outside, Multiply, Opacity 20%, black

The column bases have the same layer effects but no Drop Shadow.

### Step 21

The columns themselves have the same layer effects as the column capitals, but no Drop Shadow, no Inner Shadow and no Color Overlay. The Size of the Inner Glow is 5px.

### Step 22

For the fluting we have to give each of the three rectangles slightly different layer effects.
The center one gets the following:

- Outer Glow: Screen, Opacity 100%, Size 5px, white
- Inner Glow: Multiply, Opacity 10%, Size 5px, black
- Color Overlay: Multiply, Opacity 100%, Color: H=0°, S=1%, B=94%
- Gradient Overlay: Multiply, Opacity 10%, reflected 0°, black to white
- Stroke: Size 1px, Position Inside, Multiply, Opacity 35%, black

The left one has the same effects except for these changes:

- Gradient Overlay: Multiply, Opacity 10%, linear 0°, black to white
- Stroke: Size 1px, Position Inside, Multiply, Opacity 25%, black

And the right one has the same effects as the left one, except of course for the Gradient Overlay, which is:

- Gradient Overlay: Multiply, Opacity 10%, linear 0°, white to black

### Step 23

Now for the architrave. It gets the following layer effects:

- Drop Shadow: Multiply, Opacity 35%, Distance 5px, Size 5px, black
- Inner Shadow: Screen, Opacity 75%, Distance 5px, Size 3px, white
- Inner Glow: Multiply, Opacity 25%, Size 2px, black
- Gradient Overlay: Multiply, Opacity 20%, linear 90°, black to white
- Stroke: Size 1px, Position Outside, Multiply, Opacity 50%, black

### Step 24

The horizontal stripes at the left and at the right get the following layer effects:

- Drop Shadow: Screen, Opacity 75%, Distance 2px, Size 2px, white
- Inner Shadow: Multiply, Opacity 50%, Distance 3px, Size 2px, black
- Color Overlay: Multiply, Opacity 100%, 80% gray

The text gets the same layer effects with the following changes:

- Inner Glow: Screen, Opacity 75%, Size 2px, white
- Color Overlay: Multiply, Opacity 100%, 40% gray

Set the text anti-aliasing to Smooth; the default for Illustrator imports is Crisp.

### Step 25

The roof gets the same effects as the architrave but no Drop Shadow.
The inner part of the roof gets the following effects:

- Drop Shadow: Screen, Opacity 75%, Distance 5px, Size 5px, white
- Inner Shadow: Multiply, Opacity 50%, Distance 5px, Size 10px, black
- Inner Glow: Screen, Opacity 75%, Size 2px, white
- Color Overlay: Multiply, Opacity 100%, 70% gray
- Gradient Overlay: Multiply, Opacity 80%, linear 90°, white to black

### Step 26

The pen shape gets the following layer effects:

- Drop Shadow: Multiply, Opacity 50%, Distance 2px, Size 2px, black
- Inner Shadow: Screen, Opacity 50%, Distance 2px, Size 2px, white
- Inner Glow: Screen, Opacity 50%, Size 5px, white
- Color Overlay: Multiply, Opacity 100%, Color: H=0°, S=1%, B=100%
- Gradient Overlay: Multiply, Opacity 10%, linear 90°, black to white
- Stroke: Size 1px, Position Outside, Multiply, Opacity 50%, black

Wow, that was quite a lot of layer effects. Vector Plus members can review all the styles in the PSD source file. Of course, it isn't necessary to stick to these exact values; play around and find out what you like best. Now that we are through with the layer effects, let's do the rest.

### Step 27

Create an opacity mask for the sunburst shape. In the opacity mask, select the Gradient Tool (G) and draw a radial white-to-black gradient from the center of the sunburst towards its ends. Then set the layer's Opacity to 60%.

### Step 28

For the shadows on the columns I use this simple method: Control-click on the column layer thumbnail and press Select Pixels. This creates a selection of the layer's content. Then, with the Elliptical Marquee, subtract a portion of the selection so you get something like that shown below.

### Step 29

Go to Select > Modify > Contract and contract your selection by 1px. Create a new layer, name it "Column 1 shadow" and fill the selection with black. Create an opacity mask for this layer.
In the opacity mask, select the Gradient Tool and draw a linear white-to-black gradient from the bottom-right corner to the upper-left part of your shadow. Set the Opacity of the layer to 40% and you should have a shadow like that shown below.

### Step 30

Repeat with the other three columns. I decided to make the shadows bigger towards the right.

### Step 31

Use the same technique to add highlights to the staircase: Select Pixels on one of the steps, subtract a portion, contract by 1px, create a new layer, fill it with white, add an opacity mask, draw a gradient and then set the layer's Opacity to 40%.

### Final Image

Add highlights to the other two steps, to the roof and to the pen shape, and you're done!
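As a side note on the arithmetic behind Step 11's sunburst: a dashed stroke is laid out along the circle's path, so the number of beams is roughly the path circumference divided by one dash-plus-gap period. A small sketch of that relation, where the 10 pt radius and 48-beam count are illustrative assumptions rather than values from the tutorial:

```python
import math

def dash_length(radius_pt, beams, gap_pt=0.0):
    """Dash value giving approximately `beams` dashes around a circle of the given radius."""
    circumference = 2 * math.pi * radius_pt
    return circumference / beams - gap_pt

# With an assumed 10 pt radius and no gap, 48 beams need a dash of about 1.31 pt,
# close to the 1.3 pt dash used in Step 11.
d = dash_length(radius_pt=10.0, beams=48)
```

In other words, if you want a specific beam count, pick the dash from your circle's size rather than guessing.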
https://www.physicsforums.com/threads/theoretic-or-vector-notation.12297/
# Theoretic or vector notation

1. Jan 8, 2004

### babbagee

I am having a hard time with this problem and I need some help. It says:

In Exercises 11–17, use set theoretic or vector notation or both to describe the points that lie in the given configurations.

11.) The plane spanned by v1 = (2,7,0) and v2 = (0,2,7)

In the back of the book they have this answer: {(2s, 7s+2t, 7t) | s ∈ R, t ∈ R}, where ∈ means "is a member of".

I know all they did was add the two vectors together, but I don't know how they got s and t or what they represent.

2. Jan 8, 2004

### master_coda

A vector is in the span of v1 and v2 if and only if it is a linear combination of v1 and v2. In other words, v is in the span if

$$\boldsymbol{v}=s\boldsymbol{v}_1+t\boldsymbol{v}_2$$

where s and t are any two real numbers. This is pretty much what the set theory notation is saying.

Perhaps I didn't explain this too well. I'm not sure what your level of knowledge is, so I don't know how in-depth you need me to go.

Last edited: Jan 8, 2004

3. Jan 9, 2004

### HallsofIvy

Staff Emeritus

Looks good to me, master_coda. Rajvirnijjar, perhaps you should review the concept of "spanning".
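To make the answer concrete, here is a small numeric check (not part of the original thread): a point lies in the plane exactly when it equals s·v1 + t·v2, which expands componentwise to (2s, 7s + 2t, 7t).

```python
V1 = (2, 7, 0)
V2 = (0, 2, 7)

def span_point(s, t):
    """The linear combination s*V1 + t*V2, i.e. a point of the spanned plane."""
    return tuple(s * a + t * b for a, b in zip(V1, V2))

p = span_point(3, -1)  # (6, 19, -7), matching (2s, 7s + 2t, 7t) with s=3, t=-1
```

Picking s = 1, t = 0 recovers v1 itself, and s = 0, t = 1 recovers v2, so both spanning vectors are in the set, as expected.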
https://brilliant.org/discussions/thread/how-can-i-streak-on-brilliantorg/
# How can I streak on brilliant.org

I don't know how to streak on brilliant.org. What does "streak" really mean? Hi guys! I want your suggestions on this matter. Hope you will help me. Have a great day.

Note by Chinmoyranjan Giri, 2 years, 5 months ago

A streak denotes the number of consecutive days on which you have solved at least one problem anywhere on the website. · 2 years, 5 months ago
http://motls.blogspot.co.uk/2006/04/eric-pianka-saving-earth-by-killing-58.html
## Wednesday, April 05, 2006

### Eric Pianka: Saving Earth by killing 5.8 billion people

The article by Nick Schulz that we discussed here explained that the plan of some of the leading environmentalists who fight against the so-called "global warming", including those at Time magazine, was to erase 90 percent of the GDP of the civilized world. Another text linked in that blog article argued that the global warming activists are people who don't like people. A reduction of the GDP by 90 percent may sound like a bad idea, but you will see that there are stronger ideas available on the market. Moreover, the idea about "people who dislike people" will be given a very concrete face.

The distinguished 2006 Texas scientist, Prof. Eric Pianka, has a more efficient solution, namely to exterminate 5.8 billion people from the Earth. He finds that "HIV is too slow, it's no good": Ebola is a better tool that "will control the scourge of the humanity; we're looking forward to a huge collapse". He argues that "we've grown fat, apathetic, and miserable." See the Seguin Gazette for these quotes. Prof. Pianka can't understand why other people find his ideas controversial.

Proof of his views: course evaluations

In fact, he's been teaching this stuff for years at the University of Texas in his Biology 357 Ecology class. You can see that the students think it was the perfect class and that Pianka is the best teacher at the University of Texas. Sorry, Steven Weinberg and Jacques Distler, but this is how your school works. For example, one student from Fall 2004 says (search for "ebola" on the perfect class link):

• I don't root for ebola, but maybe a ban on having more than one child. I agree... too many people ruining this planet.
The following student also mentions ebola:

• Though I agree that convervation biology is of utmost importance to the world, I do not think that preaching that 90% of the human population should die of ebola is the most effective means of encouraging conservation awareness. I found Pianka to be knowledgable, but spent too much time focusing on his specific research and personal views.

You see. The statements about the intended killing of billions of people using ebola are confirmed, including the 90% figure. There are other messages from the students that show what the course was about, but the comment above was the most accurate one for proving the main statement. This should settle all doubts about whether Prof. Eric Pianka is preaching these things or not. There is probably no way to deny that he is teaching and preaching what he is claimed to be teaching and preaching. This theme was also one of the main points of his courses at the University of Texas; fortunately, not the only one. The kids are getting credit for these things.

Why does he hate humanity so much? Is it because he lost 10 cm of tibia after his left leg became gangrenous? Who knows.

To make everyone (including Sean Carroll) even more certain that Dr. Pianka, also known as Dr. Doom, thinks that 5.8 billion people should be removed using ebola, let me cite several recent sources. Most of the resources that are not mainstream media are Christian or even creationist, but given the available data, and especially the course evaluation above, I find it unimaginable that this whole story was invented. The recent detailed quotes by Pianka that were recorded by Forrest Mims strikingly agree with other sources cited on this page. But of course, Forrest Mims is not the only person on the Internet who has heard the talk. Brenna has heard it, too. Unlike Mims, she has been fully converted. She explains the talk as follows:

• ... Dr.
Pianka's talk at the TAS meeting was mostly of the problems humans are causing as we rapidly proliferate around the globe. While what he had to say is way too vast to remember it all, moreover to relay it here in this blog, the bulk of his talk was that he's waiting for the virus that will eventually arise and kill off 90% of human population. In fact, his hope, if you can call it that, is that the ebola virus which attacks humans currently (but only through blood transmission) will mutate with the ebola virus that attacks monkeys airborne to create an airborne ebola virus that attacks humans. He's a radical thinker, that one! I mean, he's basically advocating for the death of all but 10% of the current population! And at the risk of sounding just as radical, I think he's right. Humans are far too populous. We've used up our resources, and we're destroying the Earth at an accelerated pace. The more technology we create, the more damage we're capable of doing. ...

She then complains that it is just technology that keeps her grandparents alive and that technology saves children with defects who would normally die. It's tough stuff.

Update: Now we can read the transcript from the March 31st lecture of Eric Pianka here. As far as I can see, you can find every single statement mentioned by Forrest Mims in this transcript. Pianka criticizes anthropocentrism, explains that HIV is no good because it is too slow, promotes the abilities of the Ebola virus, announces that microorganisms will take over again ("think about that"), and says that they're our equals. I also tend to believe the Intelligent Designer Dembski that Pianka said that

• "We need to plan our collapse rather than just let it happen to us."

on a video whose currently available form has been doctored. Nevertheless, Prof. Pianka now argues that his quotes were taken out of context.
He just believes that the human population is one order of magnitude larger than it should be, it is bad for the ecosystem, and we should regulate it before it's too late. And he just analyzed the most efficient ways to regulate these 90% of people away, and the Ebola virus turned out to be the best solution. It's the main result of his scientific research. How can you object? Why is it such a good idea to eliminate 90% of us? Pianka's web page offers an answer: • I do not bear any ill will toward humanity. However, I am convinced that the world WOULD clearly be much better off without so many of us. Simply stopping the destruction of rainforests would help mediate some current planetary ills, including the release of previously unknown pathogens. The ancient Chinese curse "may you live in interesting times" comes to mind -- we are living in one of the most interesting times humans have ever experienced. For example, consider the manifold effects of global warming. We need to make a transition to a sustainable world. If we don't, nature is going to do it for us in ways of her own choosing. By definition, these ways will not be ours and they won't be much fun. Think about that. The only thing we can do is to hope that Prof. Pianka has no experimental colleagues working with him on the transition to a sustainable world. I have a full tolerance for Prof. Pianka's opinions - but still, I find it very appropriate for the Department of Homeland Security to make some research: and I hope that the creationist William Dembski was not joking when he announced that the DHS has been notified. The DHS should ask: where will the pandemic start? Have they already written an obituary for the first people who will die within the project to construct a sustainable world? You see how the Academia sometimes works. 
If Lawrence Summers very carefully proposes a working hypothesis whose validity is rather obvious to most people with IQ above 80 - that women are statistically less likely to be very good at math than men are - he is eventually forced to resign. When a left-wing professor proposes to kill 5.8 billion people to realize his dreams about the ideal ecosystem, he receives a standing ovation and an award. It's because Pianka's thesis is more politically correct than Summers' conjecture: it is based on the politically correct assumption that most people are dirty backers of capitalism who threaten the ecosystem, which is why they should be regulated away in a very egalitarian fashion. Today, such reasoning is as politically correct as the destruction of the Jews was politically correct 70 years ago.

The main punch line is that many of these environmentalist people are quite dangerous people who have mostly lost their minds, and most of their fellow left-wingers can't even comprehend how incredibly mad their wing of the political spectrum has become. The much-loved biologist is worshipped by the far Left exactly because he is the ultimate advocate of egalitarianism. "The biggest enemy we face is anthropocentrism," he said. To Pianka, a human life is no more valuable than any other - a lizard, a bison (he lives with them), a rhino.

I find such a "value judgement" completely unscientific. If he counts the "value" of different animals as equal, why doesn't he consider the individual cells of a skunk, to give a specific example, to be equivalent to a human being? That would make skunks billions of times as valuable as a human. If you adopt this paradigm, should we count the number of cells? Or is the value of an organism expressed by its weight? There is no objective counting of "value"; there are just more reasonable and less reasonable subjective appraisals, and Pianka's is one of the least intelligent ones.
The fact that this well-known left-wing biologist misunderstands this point - that science cannot tell us what is "good" - is an example of deeply-rooted antiscientific prejudices among some left-wing scholars. Thank you for all these "ideas" but I will continue to avoid egalitarianism and I will continue to believe that Prof. Pianka's and his soulmates' opinions about the ideal ecosystem are completely insane; scientifically, they're rubbish even if more than 1/2 of Pianka's stories about the past ecosystems are correct. Much-loved people like Prof. Pianka may indeed be equivalent to a friend of ours, a stinking skunk, but my opinions about other humans, at least some of them, will continue to be higher. I do not bear any ill will toward skunks but they have their own place in this world that is different from the humans' place. Now I realized that I have just violated Pianka's third commandment: that's a good moment to stop.
https://glsr.wordpress.com/tag/multivariable-calculus/
# GL(s,R)

## September 12, 2012

### Projecting onto Projections

Filed under: High Effort/Low Payoff Ideas — Adam Glesser @ 1:53 am

The first time I saw the expression $\int_C \mathbf{F} \cdot \mathbf{n}\ d\mathbf{r}$, I thought, “Why should that dot product be in there?” By the time I saw $\iint_S \mathbf{F} \cdot\ d\mathbf{S}$, I resigned myself to the fact that there was always a dot product in these seemingly random integrals. At some point, I decided that the dot products are in there to turn vectors (or vector fields) into scalar functions—which is something we know how to integrate. More recently, I’ve decided that the purpose of these dot products is to capture the projection of one vector on the other. For example, if I apply a force $\mathbf{F}$ to an object, then the work done by that force in moving the object a certain distance in a given direction (denote this shift by $\mathbf{v}$) is $\mathbf{F} \cdot \mathbf{v}$. If the force is not constant over some curve parametrized by $\mathbf{r}(t)$ ($a \leq t \leq b$), then we compute the work by evaluating the integral $\int_a^b \mathbf{F} \cdot \mathbf{r}'(t)\ dt$ since, at any given point, our $\mathbf{v}$ from above is just the tangent vector to the curve at that point, i.e., $\mathbf{r}'(t)$.

If you understand multivariable calculus, then you are probably laughing at me. “Duh. Why did it take you so long to figure that out?” Here is my answer: we (or maybe just I) improperly motivate the dot product. This semester, I’m using Stewart for Multivariable Calculus*. He introduces vectors in a way that seems fairly standard for math texts.

Definition: The dot product of $\langle x_1, \ldots, x_n \rangle$ and $\langle y_1, \ldots, y_n \rangle$ is $x_1y_1 + \cdots + x_ny_n$.

Theorem: If $\mathbf{a}$ and $\mathbf{b}$ are vectors with angle $\theta$ between them, then $\mathbf{a} \cdot \mathbf{b} = ||\mathbf{a}||\ ||\mathbf{b}|| \cos(\theta)$.
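Before going further, a quick numeric sanity check of the definition and the theorem. This is a sketch in plain Python (not from the post; the function names are mine):

```python
import math

def dot(a, b):
    """Component formula: a·b = x1*y1 + ... + xn*yn."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Length of a, i.e. sqrt(a·a)."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle between a and b, recovered from a·b = ||a|| ||b|| cos(theta)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

def scalar_projection(b, a):
    """How far b extends along a: ||b|| cos(theta) = (a·b) / ||a||."""
    return dot(a, b) / norm(a)

a = (3.0, 0.0)
b = (2.0, 2.0)
print(dot(a, b))                # 6.0
print(scalar_projection(b, a))  # 2.0 -- b extends 2 units along a
```

Note that both characterizations agree: `norm(a) * norm(b) * math.cos(angle(a, b))` reproduces `dot(a, b)` up to rounding.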
The beauty here is that you can use the dot product to help compute angles and it is immediately obvious that the dot product of orthogonal vectors is $0$.

*This wasn’t my choice, but rather the choice of my department. Oh, did I mention I got a new job? Indeed I finally gave up on east coast living and moved back to California. I am now in the mathematics department at California State University Fullerton.

I’ve heard that in physics textbooks, they switch the order of the above, i.e., they define the dot product via the cosine formula and then prove the above definition as a theorem. As a mathematician, I always went with the first definition. Now, I am not so sure. What follows is the introduction to the dot product I plan to give to my students (until I come up with something better, anyway*).

*In the comments, please do set me straight about the real purpose of the dot product or how you think it best to introduce it in this context.

I am interested in how far $\mathbf{b}$ extends along $\mathbf{a}$, so I drop a line perpendicular to $\mathbf{a}$ from the end of $\mathbf{b}$. At this point, I’m already confused by what would happen if I had tried to see how far $\mathbf{a}$ goes along $\mathbf{b}$, but I decide that I could simply extend $\mathbf{b}$ and at least draw the following picture:

Awesome, I have a couple of right triangles. And, heck, since they are right triangles that share the angle (let’s call it $\theta$) between $\mathbf{a}$ and $\mathbf{b}$, they are similar triangles. Let’s give some names to the important sides. The comment about similar triangles implies that $\dfrac{h}{||\mathbf{b}||} = \dfrac{k}{||\mathbf{a}||}$. Ugh, let’s clear denominators to get $h||\mathbf{a}|| = k||\mathbf{b}||$.
On the other hand, $\cos(\theta) = \dfrac{h}{||\mathbf{b}||}$, and so if we multiply by $||\mathbf{a}||\ ||\mathbf{b}||$, we get $||\mathbf{a}||\ ||\mathbf{b}||\cos(\theta) = h||\mathbf{a}||$.

The moral is that this important quantity—$h||\mathbf{a}|| = k||\mathbf{b}||$—coming from projecting the vectors onto each other, has a very simple reformulation as $||\mathbf{a}||\ ||\mathbf{b}||\cos(\theta)$, which only relies on knowing the original vectors and the angle between them. Since this projection property is so important to us physically, we give a short name to this expression: $\mathbf{a} \cdot \mathbf{b}$, and call it the dot product of $\mathbf{a}$ and $\mathbf{b}$. If $\mathbf{b}$ is orthogonal to $\mathbf{a}$, then the projection should be $0$, which of course it is since the cosine of $90^\circ$ is $0$.

At this point one can go about proving that the dot product is obtained directly from the components, i.e., without knowing the angle between them. Of course, there is still the issue of when $\theta$ is obtuse, and it will probably be helpful to cover that case as well. Geometrically it will look a bit different, but the algebra and trig will be almost the same*.

*You do get to use the fact that the cosine of an angle equals the negative of the cosine of the supplementary angle!

There is nothing really new here, but I think the ordering is important. Their first impression of the dot product should convey the purpose of the dot product, not just the easiest algorithm for computing it. As it stands, the projection of a vector onto another vector gets a somewhat token reference at the end of the dot product chapter. As ubiquitous as the idea is throughout the end of the class, it deserves its time in the sun.

## January 17, 2011

### A multivariable calculus list

In addition to my calculus course this semester, I also get to teach a multivariable calculus course with only six students. I’ll start with the standard list for those interested in that sort of thing.
Spring 2011 Multivariable Calculus Standards List

Let me admit something, here, in between two documents—less likely to read in here—about teaching this course, now for the third time: I’m a fraud. That’s right, I’m a fake, a charlatan, an impostor. I’ve created a counterfeit course and hustle the students with a dash of hocus-pocus and a sprinkle of hoodwinking. It is only through mathematical guile that my misrepresentations, chicanery and flim-flam go unnoticed. In short, and in the passing Christmas spirit, I am a humbug.

This is a physics course. It should be taught by someone proficient in physics, someone with honed intuition about the geometry of abstract mathematical notions like div, grad, curl and all that, someone who sees everything as an application of Stokes’ theorem and has strong feelings about whether it should be written Stokes’ theorem or Stokes’s theorem. About the only thing I bring to the table is that I can teach students to remember that: $\mathrm{curl}(\mathbf{F}) = \nabla \times \mathbf{F}$ and $\mathrm{div}(\mathbf{F}) = \nabla \cdot \mathbf{F}$

Here is the calendar for the course. After it, I’ll explain a little bit of what I’m trying.

Spring 2011 Multivariable Calculus Calendar

There are several big differences here from how I’ve taught this course in the past. First, I am going to try with all my might to get to Stokes’ theorem before the last week. Part of the way I plan to do this is, similar to my calculus class, to cut out most of the stuff on limits and continuity that I usually get bogged down on in the first couple of weeks—am I the only person who finds interesting the pathological examples that make Clairaut’s theorem necessary? I get to teach an extra hour a week to a subset of the class and that stuff will fit perfectly in there. For the science majors, I’m more interested in helping them figure out how to use this stuff and how to develop intuition. Second, I’m skipping Green’s theorem until the end.
Yes, it changes the story I normally tell, one that progresses so nicely up the dimension chart, but the trade-off is that I get more time to show them Stokes’ theorem and more time to focus on the physical interpretation.

Speaking of interpretation, you will notice in the calendar eleven or so ‘Group Activities’. These are stolen from an excellent guide produced by Dray and Manogue at Oregon State as part of their Bridge Project. To work within their framework, I’ve made another structural change that I’d never considered given how I think about the subject. Immediately after finishing triple integration (which, essentially, finishes the first half of the course), we start with vectors (I never start with vectors as most calculus books do) and then I want to get to line integrals and surface integrals as fast as possible. Normally, I mess around with div and curl before getting to integration of vector fields. Instead, I’m going to push out the Divergence theorem—the theorem I always cover in the last 45 minutes of the course—and use this to motivate the definition of div. Then I’ll push out Stokes’ theorem and use this to help motivate the definition of curl. This ought to give me two solid weeks to explore the physical meaning of these theorems as well as to use them to prove some of the standard cool corollaries (like Green’s theorem).

This class will also be the first of my SBG courses to incorporate a final project. If anyone has good suggestions based on experience about how best to incorporate projects into the SBGrading scheme, I would love to hear them. My current system is quite simplistic. The standards for the course are given a 90% weighting for the overall grade—did I mention that midterms and finals now are simply extended assessments whose grades are treated like an arbitrary quiz, just with a lot more standards tested?—and 10% weighting for the project.
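The curl and div formulas mentioned above can be sanity-checked numerically. A rough central-difference sketch in plain Python (illustrative only; the function names and step size are mine):

```python
def divergence(F, p, h=1e-5):
    """Central-difference approximation of div F = dF1/dx + dF2/dy + dF3/dz at p."""
    d = 0.0
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        d += (F(q_plus)[i] - F(q_minus)[i]) / (2 * h)
    return d

def curl(F, p, h=1e-5):
    """Central-difference approximation of curl F = nabla x F at p."""
    def partial(j, i):  # dF_j / dx_i
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        return (F(q_plus)[j] - F(q_minus)[j]) / (2 * h)
    return (partial(2, 1) - partial(1, 2),
            partial(0, 2) - partial(2, 0),
            partial(1, 0) - partial(0, 1))

# F(x, y, z) = (x, y, z): divergence 3 everywhere, curl zero.
F = lambda p: (p[0], p[1], p[2])
print(divergence(F, (1.0, 2.0, 3.0)))  # ~3.0
print(curl(F, (1.0, 2.0, 3.0)))        # ~(0, 0, 0)
```

A rotational field such as $(-y, x, 0)$ gives curl $(0,0,2)$ and divergence $0$, matching the usual physical picture of circulation without expansion.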
https://math.stackexchange.com/questions/3649084/give-an-example-of-a-contractible-space-x-which-has-no-deformation-retraction-to
# Give an example of a contractible space X which has no deformation retraction to any point of X

I am new to algebraic topology; however, I know an example of a space X which has just one point $$x_{0} \in X$$ such that $$x_{0}$$ is a deformation retract of X: the cone CQ, where Q is the set of rational numbers, has exactly one point that is a deformation retract of it, namely the vertex $$x_{0}$$.

Deformation retraction: A continuous map $$F:X \times [0,1] \rightarrow X$$ is a deformation retraction of a space X onto a subspace A if, for every x in X and a in A, $$F(x,0)=x$$, $$F(x,1) \in A$$, and $$F(a,1)=a$$. The subspace A is called a deformation retract of X.

Contractible space: a topological space X is contractible if the identity map on X is null-homotopic, i.e. if it is homotopic to some constant map. Intuitively, a contractible space is one that can be continuously shrunk to a point within that space.

My question is: I need an example of a topological space which is contractible but has no point that is a deformation retract of it.

• I asked this very question here. – Tyrone Apr 29 '20 at 10:04
• @Tyrone Your question asks for a space in which no point is a strong deformation retract. The OP considers deformation retracts. – Paul Frost Apr 30 '20 at 11:17
• @PaulFrost thanks for paying attention. Please feel free to replace 'this very' with 'a similar' in the above. ;) – Tyrone Apr 30 '20 at 13:10

A contractible space $$X$$ has each point as a deformation retract. In my answer to Is Armstrong saying that the comb space is not contractible? you can find a proof that $$X$$ is contractible to any $$x_0 \in X$$. This means that there exists a contraction of $$X$$ to $$x_0$$, i.e. a homotopy $$F :X \times I \to X$$ such that $$F(x,0) = x$$ and $$F(x,1) = x_0$$ for all $$x \in X$$. This $$F$$ is a deformation retraction of $$X$$ onto $$\{x_0\}$$ as defined in your question. However, in general $$X$$ does not have each point $$x_0$$ as a strong deformation retract.
Recall that $$A \subset X$$ is a strong deformation retract of $$X$$ if there exists a strong deformation retraction $$F : X \times I \to X$$ such that $$F(x,0) = x, F(x,1) \in A$$ for all $$x \in X$$ and $$F(a,t) = a$$ for all $$a \in A$$ and $$t \in I$$. This strengthens the definition of a deformation retraction by requiring that $$F$$ keeps all points of $$A$$ fixed. This is not required for a deformation retraction - it only requires $$F(a,1) = a$$. Also note that $$x_0$$ is a strong deformation retract of $$X$$ if and only if $$X$$ is pointed contractible to $$x_0$$ which means that there is a contraction $$F : X \times I \to X$$ which keeps $$x_0$$ fixed.
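For contrast, here is a standard worked example (mine, not from the thread) where every point *is* a strong deformation retract: the straight-line contraction of a convex set.

```latex
% Straight-line contraction of a convex set.
Let $X \subseteq \mathbb{R}^n$ be convex and fix $x_0 \in X$. Define
\[
  F : X \times I \to X, \qquad F(x,t) = (1-t)\,x + t\,x_0 .
\]
Convexity ensures $F(x,t) \in X$ for all $t$, and
\[
  F(x,0) = x, \qquad F(x,1) = x_0, \qquad
  F(x_0,t) = (1-t)\,x_0 + t\,x_0 = x_0 .
\]
Since $x_0$ stays fixed throughout the homotopy, $\{x_0\}$ is a
\emph{strong} deformation retract of $X$. The comb space shows that
for a general contractible space this stronger conclusion can fail.
```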
https://indico.cern.ch/event/749003/contributions/3354244/
# XXVII International Workshop on Deep Inelastic Scattering and Related Subjects

Apr 8 – 12, 2019, Turin (Europe/Rome timezone)

## Radiative leptonic decay $B\to \gamma \ell \nu_\ell$ with subleading power corrections

Apr 10, 2019, 10:45 AM (35m), Rettorato, Sala Athenaeum, Via Verdi 8, Turin

Parallel Session Talk, WG5: Physics with Heavy Flavours

### Speaker

Yao Ji (University of Regensburg)

### Description

We discuss the QCD predictions for the radiative decay $B\to \gamma \ell \nu_\ell$ with an energetic photon in the final state by taking into account the $1/E_\gamma, 1/m_b$ power-suppressed hard-collinear and soft corrections from higher-twist $B$-meson light-cone distribution amplitudes (LCDAs). The soft contribution is estimated through a dispersion relation and light-cone QCD sum rules. The analysis of theoretical uncertainties and the dependence of the decay form factors on the leading-twist LCDA $\phi_+(\omega)$ shows that the latter dominates. The radiative leptonic decay is therefore well suited to constrain the parameters of $\phi_+(\omega)$, including the first inverse moment, $1/\lambda_B$, from the expected high-statistics data of the BELLE II experiment.

### Primary author

Yao Ji (University of Regensburg)

### Co-authors

Prof. Martin Beneke (Technical University Munich), Prof. Vladimir Braun (University of Regensburg), Dr Yan-Bing Wei (Nankai University)
https://core.ac.uk/display/4442155
## A constitutive equation for creep in glassy polymers and composites

### Abstract

The creep of polymethyl methacrylate was investigated in four-point flexural loading mode. Measurements were taken at temperatures from 8 °C to 55 °C, time periods up to 450 hours and stresses ranging from 5 to 25 MN/m$^2$. The data obtained were successfully superposed vertically; the data reduction, in this way, was expressed in the form of a constitutive equation:

$$e(t, T, S) = e_0(\mathrm{ref}) \cdot \exp\!\left[-\frac{\Delta H_0 - \beta S}{R}\left(\frac{1}{T} - \frac{1}{T_{\mathrm{ref}}}\right)\right] \cdot \exp\!\left[\frac{\beta}{RT}\,(S - S_{\mathrm{ref}})\right] \cdot t^{\,n}$$

which shows that the creep strain (e) may be obtained as a product of separable functions that express the effect of time (t), temperature (T) and stress (S). The subscript "ref" indicates the chosen reference state. The creep behavior follows a power-law time dependence with an exponent equal to 0.24. The apparent activation energy of the creep is independent of temperature (Arrhenius behavior), stress dependent and decreases with increasing stress.

Topics: Chemical engineering
Year: 1988
OAI identifier: oai:scholarship.rice.edu:1911/13296
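The constitutive equation above can be evaluated directly. A hedged sketch in Python: the exponent n = 0.24 comes from the abstract, but the other parameter values in the example call are illustrative placeholders, not values from the thesis.

```python
import math

def creep_strain(t, T, S, *, e0_ref, dH0, beta, n, T_ref, S_ref, R=8.314):
    """Creep strain e(t, T, S) as a product of separable factors:
    an Arrhenius temperature shift with stress-dependent activation
    energy (dH0 - beta*S), a stress shift, and a power law t**n.
    Units are the caller's responsibility (T in K, S in Pa here)."""
    temp_factor = math.exp(-((dH0 - beta * S) / R) * (1.0 / T - 1.0 / T_ref))
    stress_factor = math.exp((beta / (R * T)) * (S - S_ref))
    return e0_ref * temp_factor * stress_factor * t ** n

# At the reference state both exponentials collapse to 1, leaving the
# bare power law e = e0_ref * t**0.24.
e = creep_strain(100.0, T=298.0, S=10.0e6,
                 e0_ref=1.0e-3, dH0=90e3, beta=1.0e-3, n=0.24,
                 T_ref=298.0, S_ref=10.0e6)
print(e)  # equals 1.0e-3 * 100.0**0.24
```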
https://wikimili.com/en/Reflection_coefficient
# Reflection coefficient

In physics and electrical engineering, the reflection coefficient is a parameter that describes how much of an electromagnetic wave is reflected by an impedance discontinuity in the transmission medium. It is equal to the ratio of the amplitude of the reflected wave to the incident wave, with each expressed as phasors. For example, it is used in optics to calculate the amount of light that is reflected from a surface with a different index of refraction, such as a glass surface, or in an electrical transmission line to calculate how much of the electromagnetic wave is reflected by an impedance discontinuity. The reflection coefficient is closely related to the transmission coefficient. The reflectance of a system is also sometimes called a "reflection coefficient".
Different specialties have different applications for the term.

## Telecommunications

In telecommunications, the reflection coefficient is the ratio of the complex amplitude of the reflected wave to that of the incident wave. In particular, at a discontinuity in a transmission line, it is the complex ratio of the electric field strength of the reflected wave (${\displaystyle E^{-}}$) to that of the incident wave (${\displaystyle E^{+}}$). This is typically represented with a ${\displaystyle \Gamma }$ (capital gamma) and can be written as:
${\displaystyle \Gamma ={\frac {E^{-}}{E^{+}}}}$

The reflection coefficient may also be established using other field or circuit quantities. The reflection coefficient of a load is determined by its impedance ${\displaystyle Z_{L}\,}$ (load impedance) and the impedance toward the source ${\displaystyle Z_{S}\,}$ (source impedance).

${\displaystyle \Gamma ={Z_{L}-Z_{S} \over Z_{L}+Z_{S}}}$

Notice that a negative reflection coefficient means that the reflected wave receives a 180°, or ${\displaystyle \pi }$, phase shift. The magnitude (designated by vertical bars) of the reflection coefficient can be calculated from the standing wave ratio, ${\displaystyle SWR}$:
${\displaystyle |\Gamma |={SWR-1 \over SWR+1}}$

The reflection coefficient is displayed graphically using a Smith chart.

## Seismology

Reflection coefficient is used in feeder testing for reliability of the medium.

## Optics and microwaves

In optics and electromagnetics in general, "reflection coefficient" can refer to either the amplitude reflection coefficient described here, or the reflectance, depending on context. Typically, the reflectance is represented by a capital R, while the amplitude reflection coefficient is represented by a lower-case r. These related concepts are covered by the Fresnel equations in classical optics.
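The impedance and SWR relations above are easy to compute. A minimal sketch in Python (the function names are mine, not a standard API):

```python
def reflection_coefficient(Z_load, Z_source):
    """Gamma = (Z_L - Z_S) / (Z_L + Z_S); impedances may be complex."""
    return (Z_load - Z_source) / (Z_load + Z_source)

def gamma_magnitude_from_swr(swr):
    """The relation above: |Gamma| = (SWR - 1) / (SWR + 1)."""
    return (swr - 1) / (swr + 1)

def swr_from_gamma(gamma):
    """Inverse relation: SWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    mag = abs(gamma)
    return (1 + mag) / (1 - mag)

Z0 = 50.0                                    # a common line impedance
print(reflection_coefficient(50.0, Z0))      # 0.0: matched load, no reflection
print(reflection_coefficient(100.0, Z0))     # 1/3 of the amplitude is reflected
print(swr_from_gamma(reflection_coefficient(100.0, Z0)))  # SWR of 2
print(reflection_coefficient(25.0, Z0))      # negative: 180 degree phase shift
```

A matched load (Z_L = Z_S) gives Gamma = 0 and SWR = 1; a mismatch raises both, and a load smaller than the line impedance makes Gamma negative, i.e. the reflected wave is inverted.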
## Acoustics

Acousticians use reflection coefficients to understand the effect of different materials on their acoustic environments.
Characteristic impedance is determined by the geometry and materials of the transmission line and, for a uniform line, is not dependent on its length. The SI unit of characteristic impedance is the ohm. Polarization is a property applying to transverse waves that specifies the geometrical orientation of the oscillations. In a transverse wave, the direction of the oscillation is perpendicular to the direction of motion of the wave. A simple example of a polarized transverse wave is vibrations traveling along a taut string (see image); for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves, and transverse sound waves in solids. In some types of transverse waves, the wave displacement is limited to a single direction, so these also do not exhibit polarization; for example, in surface waves in liquids, the wave displacement of the particles is always in a vertical plane. The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next. 
In telecommunications, return loss is the loss of power in the signal returned/reflected by a discontinuity in a transmission line or optical fiber. This discontinuity can be a mismatch with the terminating load or with a device inserted in the line. It is usually expressed as a ratio in decibels (dB).

A waveguide is a structure that guides waves, such as electromagnetic waves or sound, with minimal loss of energy by restricting expansion to one dimension or two. There is a similar effect in water waves constrained within a canal, or guns that have barrels which restrict hot gas expansion to maximize energy transfer to their bullets. Without the physical constraint of a waveguide, wave amplitudes decrease according to the inverse square law as they expand into three-dimensional space.

In electronics, impedance matching is the practice of designing the input impedance of an electrical load or the output impedance of its corresponding signal source to maximize the power transfer or minimize signal reflection from the load.

In geophysics and reflection seismology, amplitude versus offset (AVO), or amplitude variation with offset, is the general term for the dependency of the seismic attribute amplitude on the distance between the source and receiver. AVO analysis is a technique that geophysicists can apply to seismic data to determine a rock's fluid content, porosity, density, seismic velocity, shear-wave information, and fluid indicators.

Scattering parameters or S-parameters describe the electrical behavior of linear electrical networks when undergoing various steady-state stimuli by electrical signals.

The SWR meter or VSWR meter measures the standing wave ratio in a transmission line. The meter can be used to indicate the degree of mismatch between a transmission line and its load, or to evaluate the effectiveness of impedance matching efforts.
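A hedged sketch (not from the article) tying together three of the quantities above: the voltage reflection coefficient of a mismatched load, the return loss it produces in dB, and the standing wave ratio an SWR meter would report. The 50-ohm line and 75-ohm load are illustrative values.

```python
import math

def reflection_coefficient(z_load, z0=50.0):
    """Voltage reflection coefficient of a load terminating a line
    of characteristic impedance z0 (impedances may be complex)."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(gamma):
    """Return loss in dB; larger means less reflected power."""
    return -20.0 * math.log10(abs(gamma))

def vswr(gamma):
    """Voltage standing wave ratio produced by the mismatch."""
    return (1.0 + abs(gamma)) / (1.0 - abs(gamma))

# A 75-ohm load on a 50-ohm line:
g = reflection_coefficient(75.0)
print(g)                            # 0.2
print(round(return_loss_db(g), 2))  # 13.98
print(round(vswr(g), 6))            # 1.5
```

A matched load (z_load == z0) gives gamma = 0, infinite return loss, and VSWR = 1, which is why all three are used interchangeably as figures of merit for impedance matching.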
The transmission coefficient is used in physics and electrical engineering when wave propagation in a medium containing discontinuities is considered. A transmission coefficient describes the amplitude, intensity, or total power of a transmitted wave relative to an incident wave.

Filters designed using the image impedance methodology suffer from a peculiar flaw in the theory. The predicted characteristics of the filter are calculated assuming that the filter is terminated with its own image impedances at each end. This will not usually be the case; the filter will be terminated with fixed resistances. This causes the filter response to deviate from the theoretical one. This article explains how the effects of image filter end terminations can be taken into account.

The transfer-matrix method is a method used in optics and acoustics to analyze the propagation of electromagnetic or acoustic waves through a stratified medium. This is relevant, for example, for the design of anti-reflective coatings and dielectric mirrors.

A signal travelling along an electrical transmission line will be partly, or wholly, reflected back in the opposite direction when the travelling signal encounters a discontinuity in the characteristic impedance of the line, or if the far end of the line is not terminated in its characteristic impedance. This can happen, for instance, if two lengths of dissimilar transmission lines are joined together.

Metal-mesh optical filters are optical filters made from stacks of metal meshes and dielectric. They are used as part of an optical path to filter the incoming light to allow frequencies of interest to pass while reflecting other frequencies of light.

A frequency-selective surface (FSS) is any thin, repetitive surface designed to reflect, transmit or absorb electromagnetic fields based on the frequency of the field.
In this sense, an FSS is a type of optical filter or metal-mesh optical filter in which the filtering is accomplished by virtue of the regular, periodic pattern on the surface of the FSS. Though not explicitly mentioned in the name, FSSs also have properties which vary with incidence angle and polarization; these are unavoidable consequences of the way in which FSSs are constructed. Frequency-selective surfaces have been most commonly used in the radio frequency region of the electromagnetic spectrum and find use in applications as diverse as microwave ovens, antenna radomes and modern metamaterials. Sometimes frequency-selective surfaces are referred to simply as periodic surfaces and are a two-dimensional analog of the periodic volumes known as photonic crystals.

## References

• This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).
• Bogatin, Eric (2004). Signal Integrity - Simplified. Upper Saddle River, New Jersey: Pearson Education, Inc. ISBN 0-13-066946-6. Figure 8-2 and Eqn. 8-1, Pg. 279.
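As an editorial appendix (not part of the original article), the transfer-matrix method mentioned under Related Research Articles can be sketched for its simplest case: one homogeneous thin film at normal incidence, characterized by the layer's 2x2 transfer (characteristic) matrix. The wavelength and index values below are illustrative, chosen so the quarter-wave anti-reflection condition n1 = sqrt(n0*ns) can be checked.

```python
import cmath, math

def film_reflectance(n0, n1, ns, d, lam):
    """Normal-incidence reflectance of one homogeneous layer of index
    n1 and thickness d between incident medium n0 and substrate ns,
    computed from the layer's characteristic (transfer) matrix."""
    delta = 2.0 * math.pi * n1 * d / lam        # phase thickness of layer
    m11, m12 = cmath.cos(delta), 1j * cmath.sin(delta) / n1
    m21, m22 = 1j * n1 * cmath.sin(delta), cmath.cos(delta)
    b = m11 + m12 * ns                          # (b, c) = M @ (1, ns)
    c = m21 + m22 * ns
    r = (n0 * b - c) / (n0 * b + c)             # amplitude reflection coeff.
    return abs(r) ** 2

lam = 550e-9                                    # design wavelength (green)
# Zero-thickness layer reduces to a bare air/glass interface: ~4%.
print(round(film_reflectance(1.0, 1.5, 1.5, 0.0, lam), 4))  # 0.04
# Ideal quarter-wave anti-reflection coating, n1 = sqrt(n0 * ns):
n_ar = math.sqrt(1.5)
print(film_reflectance(1.0, n_ar, 1.5, lam / (4 * n_ar), lam) < 1e-12)  # True
```

Multilayer stacks follow by multiplying one such matrix per layer before forming (b, c), which is how dielectric-mirror and anti-reflection designs are evaluated in practice.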
2019-03-22 18:03:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6385048627853394, "perplexity": 654.3405119881895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202688.89/warc/CC-MAIN-20190322180106-20190322202106-00372.warc.gz"}
https://afrouzi.com/DRIPs.jl/dev/
# Dynamic Rational Inattention Problems (DRIPs)

DRIPs.jl is a Julia software package that provides a fast and robust method for solving LQG Dynamic Rational Inattention models using the methods developed by Afrouzi and Yang (2020).

## Installation

To add the package, execute the following in the Julia REPL:

```julia
using Pkg; Pkg.add("DRIPs");
```

To import and use the package, execute:

```julia
using DRIPs;
```
2021-01-15 22:54:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3177003562450409, "perplexity": 6648.7204275095955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703497681.4/warc/CC-MAIN-20210115224908-20210116014908-00301.warc.gz"}
https://web2.0calc.com/questions/question_78758
+0

# Question

0 227 2

What is 2/3 - 3/2

Guest May 23, 2017

#2 +2180 +2

$$-\frac{5}{6}=-0.8\overline3$$

$$\frac{2}{3}-\frac{3}{2}$$

Above is the original expression. To evaluate it, we must have common denominators. To get them, we multiply the numerator and the denominator of each fraction by the same number: multiplying the first fraction by 2/2 and the second by 3/3 gives both a denominator of 6:

$$\frac{2}{3}*\frac{2}{2}=\frac{4}{6}$$

$$\frac{3}{2}*\frac{3}{3}=\frac{9}{6}$$

Notice how I am not actually changing the value of either fraction. I'm multiplying both fractions by 1, so I am not changing the value of the fraction, just the way the number is represented:

$$\frac{4}{6}-\frac{9}{6}=-\frac{5}{6}=-0.8\overline3$$

TheXSquaredFactor May 23, 2017

#1 0

-5/6

Guest May 23, 2017 (edited by Guest May 23, 2017)
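As a quick machine check (an editorial aside, not part of the thread), Python's `fractions` module performs this exact rational arithmetic directly, handling the common denominator internally:

```python
from fractions import Fraction

# Exact rational arithmetic: 2/3 - 3/2.
result = Fraction(2, 3) - Fraction(3, 2)
print(result)         # -5/6
print(float(result))  # -0.8333333333333334
```

The decimal printout is the floating-point approximation of the repeating decimal -0.83̄ given in the answer above.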
2018-09-18 20:22:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9689001441001892, "perplexity": 317.2228560645733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155676.21/warc/CC-MAIN-20180918185612-20180918205612-00345.warc.gz"}
https://techwhiff.com/learn/question-5-a-positively-charged-object-is-brought/182592
Question 5. A positively charged object is brought near the head of an initially uncharged electroscope. Which diagram below best describes the state of the electroscope after the positive object is brought near? (1 mark)

a) b) c) d)
2022-11-26 16:28:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3173618018627167, "perplexity": 2953.372062188887}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00280.warc.gz"}
http://hal.in2p3.fr/view_by_stamp.php?label=IN2P3&langue=fr&action_todo=view&id=in2p3-00284842&version=1
HAL: in2p3-00284842, version 1
arXiv: 0805.4833
Journal of Instrumentation 3 (2008) P08001

Design and Electronics Commissioning of the Physics Prototype of a Si-W Electromagnetic Calorimeter for the International Linear Collider

CALICE Collaboration(s) (2008)

The CALICE collaboration is studying the design of high performance electromagnetic and hadronic calorimeters for future International Linear Collider detectors. For the electromagnetic calorimeter, the current baseline choice is a high granularity sampling calorimeter with tungsten as absorber and silicon detectors as sensitive material. A "physics prototype" has been constructed, consisting of thirty sensitive layers. Each layer has an active area of 18x18 cm2 and a pad size of 1x1 cm2. The absorber thickness totals 24 radiation lengths. It has been exposed in 2006 and 2007 to electron and hadron beams at the DESY and CERN beam test facilities, using a wide range of beam energies and incidence angles. In this paper, the prototype and the data acquisition chain are described and a summary of the data taken in the 2006 beam tests is presented. The methods used to subtract the pedestals and calibrate the detector are detailed. The signal-over-noise ratio has been measured at 7.63 +/- 0.01. Some electronics features have been observed; these lead to coherent noise and crosstalk between pads, and also crosstalk between sensitive and passive areas. The performance achieved in terms of uniformity and stability is presented.

Subject(s): Physics/Physics/Instrumentation and Detectors
Keyword(s): Calorimeters – Detector alignment and calibration methods – Detector design and construction technologies and materials
Link to full text: http://fr.arXiv.org/abs/0805.4833
in2p3-00284842, version 1 http://hal.in2p3.fr/in2p3-00284842 oai:hal.in2p3.fr:in2p3-00284842
Contributor: Emmanuelle Vernay <>
Submitted on: Tuesday, 3 June 2008, 18:09:51
Last modified: Wednesday, 17 December 2008, 09:41:30
2014-07-29 02:41:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.691949188709259, "perplexity": 3670.9198939186767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510264575.30/warc/CC-MAIN-20140728011744-00084-ip-10-146-231-18.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/204007/best-h%c3%b6lder-exponents-of-surjective-maps-from-the-unit-square-to-the-unit-cube
# Best Hölder exponents of surjective maps from the unit square to the unit cube

Peano's square-filling curve $p:I\to I^2$ turns out to be Hölder continuous with exponent $1/2$ on the unit interval $I$ (a quick way to see this is to note that $p$ is a fixed point of a suitable contraction $T:C(I,I^2)\to C(I,I^2)$, and the non-empty, closed subset of curves with modulus of continuity $\omega(t):=ct^{1/2}$ is $T$-invariant for a suitable choice of $c$, so that $p$ lies therein). For the same reason, the more general analogous $n$-cube-filling curves $I\to I^n$ (e.g. described in the same Peano paper) are Hölder continuous with exponent $1/n$. On the other hand, for any $1\le k\le n$, by elementary considerations on Hausdorff measures, no $\alpha$-Hölder continuous map $I^k\to I^n$ with exponent $\alpha > k/n$ can be surjective. The natural questions are therefore:

Given $1\le k\le n$, does there exist a surjective $\alpha$-Hölder continuous map $I^k\to I^n$ with exponent $\alpha=k/n$? Otherwise, what is the best exponent $\alpha$ obtainable for such a surjective map? In particular, is there a simple construction for the case $I^2\to I^3$? (Actually, we may focus on this last question, which appears to be the simplest non-trivial case.)

Summing up the above remarks, the answer is affirmative if $k=1$ or if $k=n$, and we may also note that if for a pair $(k,n)$ there is such a surjective map $q:I^k\to I^n$, then the map $(x_1,\dots,x_m)\mapsto (q(x_1), q(x_2),\dots ,q(x_m))$ is also a surjective map $I^{mk}\to I^{mn}$ with the same exponent $k/n$ as $q$. Also, we may consider compositions of maps, so that affirmative answers for $(k,n)$ and $(n,m)$ imply the affirmative answer for $(k,m)$.

Update 08.01.16.
The only answer received so far suggests a nice article, yet one not related to this question (the only theorem in that paper that deals with Hölder maps is Thm 2.1, but it has nothing or very little to do with the present problem, since it is about $\mathbb{R}$-valued functions, that is $n=1$; the existence of Hölder functions on a metric space which map surjectively onto an interval is a non-trivial problem only for totally disconnected spaces).

• That's a great question! It reminds me of the open question whether there is an embedding of $\mathbb{R}^k$ with the snowflaked metric $\lVert\cdot\rVert^{\alpha}$ (where $\alpha <1$) into $\mathbb{R}^n$ whenever $k/\alpha< n$. Your question feels a bit of a dual to that. Apr 27 '15 at 9:07

There are such surjections with critical Hölder exponent for any pair of dimensions k < n. Stong showed that there is a bijection $\mathbb Z^k \to \mathbb Z^n$ that is Hölder continuous with exponent $k/n$:

R. Stong, Mapping $\mathbb Z^r$ into $\mathbb Z^s$ with Maximal Contraction, Discrete Comput Geom 20:131–138 (1998)

A limit construction can then be used to obtain surjections from $\mathbb R^k$ to $\mathbb R^n$ of the same regularity, which also implies the surjection result for cubes. Some details and further interesting discussions about such maps are contained in section 9.1 of the following notes by Semmes:

S. Semmes, Where the Buffalo Roam: Infinite Processes and Infinite Complexity, arXiv:math/0302308v1 (2003)

(This is not a complete answer, but I cannot comment.)

===========

Edit: To clarify, I should first observe, as Willie Wong points out in the comments, that there is a typo in the original question. Simple Hausdorff dimension arguments show that there can be no $\alpha$-Hölder map from $I^k$ onto $I^n$ with $\alpha>k/n$. (The question says the opposite.) The question is therefore: what is the largest value of $\alpha$ for which we can find such a map?
===========

It follows immediately from the main result of http://arxiv.org/pdf/1203.0686.pdf that for every $\alpha<k/n$ there is an $\alpha$-Hölder map from $I^k$ onto $I^n$. In the general metric space case of that theorem, a map with optimal Hölder exponent need not exist. In this very specific case, I would bet that it does, but I don't have a specific construction in mind.

• The question has a typo. It is always easier to get maps with smaller Hölder exponent. Apr 27 '15 at 12:45
• I can believe that it might follow from the main theorem of that paper, but I don't see that it follows immediately! I am a bit rusty - maybe I am missing a well-known fact about maps between cubes in $\mathbb{R}^n$? The theorem in the paper does not talk about surjective maps - although in the case of maps to the interval, surjectivity is obvious, I can imagine that this need not be the case in higher dimensions. Can you provide more details of why that theorem implies your claim? – jwg Apr 29 '15 at 15:59
• Yes, the only theorem in that paper that deals with Hölder maps is Thm 2.1, but it has nothing or very little to do with the present problem. Jan 8 '16 at 10:25
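Editorial aside (not part of the thread): the "elementary considerations on Hausdorff measures" invoked in the question can be spelled out in a few lines, since an $\alpha$-Hölder map raises Hausdorff dimension by at most the factor $1/\alpha$:

```latex
% If f : I^k \to I^n satisfies |f(x)-f(y)| \le C\,|x-y|^{\alpha}, then a
% cover of E \subseteq I^k by sets of diameters d_i yields a cover of
% f(E) by sets of diameters at most C d_i^{\alpha}, whence
\mathcal{H}^{s}\!\bigl(f(E)\bigr) \;\le\; C^{s}\,\mathcal{H}^{\alpha s}(E),
\qquad
\dim_H f(I^k) \;\le\; \frac{\dim_H I^k}{\alpha} \;=\; \frac{k}{\alpha}.
% So if \alpha > k/n, the image has Hausdorff dimension k/\alpha < n
% and f cannot map onto I^n.
```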
2021-10-16 23:47:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.907881498336792, "perplexity": 158.74551119309217}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00627.warc.gz"}
http://math.stackexchange.com/questions/88033/formula-for-the-nth-prime-number-discovered
# Formula for the $n$th prime number: discovered? [closed]

## closed as primarily opinion-based by GEdgar, Dan Rust, Amzoti, Omnomnomnom, MathOverview Aug 11 '13 at 14:00

Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.

In principle it looks OK: the summation tests for divisibility by odd numbers between $3$ and $\sqrt{N}$. The claims about the Riemann hypothesis, the Goldbach Conjecture, and so on are, to put it nicely, ambitious. – André Nicolas Dec 3 '11 at 16:57

@AndréNicolas What claims do you refer to? Note that these formulas are simply well-known algorithms encoded into a more arithmetical programming language. This is clearer when it is made explicit, e.g. see Conway's Fractran language. – Bill Dubuque Dec 3 '11 at 18:52

@Bill Dubuque: It is best not to attempt to summarize. One need only travel to the web site linked to in the main post. – André Nicolas Dec 3 '11 at 19:21

Note that "a formula for the n-th prime" has never been considered an open problem. I honestly don't know why so many cranks think it is... – Charles Dec 4 '11 at 17:16

His equations seem correct after skimming, but they are trivial and do not solve any open problem; they are just an extremely roundabout and unhelpful way of stating the definitions. Stating the Riemann hypothesis or the twin prime conjecture is very far from proving it. He claims he proved them on the "about discovery" page. See Ten Signs a Claimed Mathematical Breakthrough is Wrong.

We're in no position to judge; I saw this on the website: "NAAS (USA) Awarded A++ = Excellent Grade to article of prime numbers formula by Prof. S.M.R.Hashemi Moosavi" – Gigili Jan 3 '12 at 15:57

Awards prove nothing. They are mostly given for political reasons.
Further, we are in a position to judge since we have the ability to think. –  user16697 Jan 3 '12 at 16:20
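Editorial aside (not part of the thread): the "summation tests for divisibility by odd numbers between $3$ and $\sqrt{N}$" observation in the first comment is just trial division. Written as ordinary code rather than a closed-form summation, the same idea gives an inefficient but correct $n$-th prime routine, which illustrates why such "formulas" are algorithms in disguise:

```python
import math

def is_prime(n):
    """Trial division: check 2, then the odd numbers up to sqrt(n),
    exactly the divisibility test described in the comment."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def nth_prime(k):
    """k-th prime (1-indexed), found by scanning candidates."""
    count, n = 0, 1
    while count < k:
        n += 1
        if is_prime(n):
            count += 1
    return n

print(nth_prime(1), nth_prime(10), nth_prime(25))  # 2 29 97
```

Encoding this loop as nested sums and floor functions yields a "formula for the $n$-th prime", but with no more insight, and no better running time, than the loop itself.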
2015-09-01 08:05:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7634924054145813, "perplexity": 1693.1407128567625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167592.45/warc/CC-MAIN-20150827031247-00086-ip-10-171-96-226.ec2.internal.warc.gz"}