http://paperity.org/p/85771312/tsunami-source-and-inundation-features-around-sendai-coast-japan-due-to-the-november-22
Tsunami source and inundation features around Sendai Coast, Japan, due to the November 22, 2016 Mw 6.9 Fukushima earthquake

Geoscience Letters, Jan 2018

The tsunami source of the 2016 Fukushima earthquake, which was generated by a normal-faulting mechanism, is estimated by inverting the tsunami waveforms recorded by seven tide gauge stations and two wave gauge stations along the north Pacific coast of Japan. Two fault models based on different available moment tensor solutions were employed, and their locations were constrained by using the reverse tsunami travel time from the stations to the epicenter. The comparison of the two fault slip models showed that the fault model with strike = 49°, dip = 35°, and rake = −89° more accurately simulated the observed tsunami data. This fault model yielded a fault area of 40 km $\times$ 32 km. The largest slip was estimated as 4.66 m at a depth of 6.09 km; larger slips were also concentrated between depths of 6.06 and 10.68 km, southwest of the epicenter. Assuming a rigidity of $2.7 \times 10^{10}$ N/m$^2$, the estimated seismic moment was $3.35 \times 10^{19}$ Nm (equivalent to Mw = 6.95). In addition, a comparison of nonlinear tsunami simulations using finer bathymetry around Sendai Coast verified that the above fault slip model could better reproduce the tsunami features observed at Sendai Port and its surroundings. Finally, we analyzed the nonlinear tsunami computed from our best fault slip model. Our simulations also corroborated the height of the secondary wave amplitude observed at Sendai Port, which was caused by tsunami waves reflected from the Fukushima coast, as described in previous studies. Furthermore, we found that the initial positive wave recorded inside Sendai Bay resulted from the superposition of the initial incoming wave and the tsunami wave reflected off Sendai Coast, between Natori River and Sendai Port.

Bruno Adriano, Yushiro Fujii, Shunichi Koshimura. Tsunami source and inundation features around Sendai Coast, Japan, due to the November 22, 2016 Mw 6.9 Fukushima earthquake. Geoscience Letters, 2018. DOI: 10.1186/s40562-017-0100-9. PDF: https://link.springer.com/content/pdf/10.1186%2Fs40562-017-0100-9.pdf
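As a quick consistency check on the quoted numbers (an editorial addition, using the standard moment magnitude relation for $M_0$ in N·m):

$$M_w = \frac{2}{3} \left( \log_{10} M_0 - 9.1 \right) = \frac{2}{3} \left( \log_{10} \left( 3.35 \times 10^{19} \right) - 9.1 \right) \approx 6.95,$$

which matches the Mw = 6.95 reported in the abstract.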
https://alexn.org/blog/2021/01/26/tail-recursive-functions-in-scala/
# Tail Recursive Functions (in Scala)

Turning imperative algorithms into tail-recursive functions isn't necessarily obvious. In this article (and video) I'm showing you the trick you need, and in doing so, we'll discover the Zen of Functional Programming.

## The Trick

Let's start with a simple function that calculates the length of a list:

```scala
def len(l: List[_]): Int =
  l match {
    case Nil => 0
    case _ :: tail => len(tail) + 1
  }
```

It's a recursive function with a definition that is mathematically correct. However, if we try to test it, this will fail with a `StackOverflowError`:

```scala
len(List.fill(100000)(1))
```

The problem is that the input list is too big. And because the VM still has work to do after that recursive call, needing to do the `+ 1`, the call isn't in "tail position", so the call-stack must be used. A `StackOverflowError` is a memory error, and in this case it's a correctness issue, because the function will fail on reasonable input.

First let's describe it as a dirty while loop instead:

```scala
def len(l: List[_]): Int = {
  var count = 0
  var cursor = l
  while (cursor != Nil) {
    count += 1
    cursor = cursor.tail
  }
  count
}
```

THE TRICK for turning such functions into tail-recursions is to turn those variables, holding state, into function parameters:

```scala
import scala.annotation.tailrec

def len(l: List[_]): Int = {
  // Using an inner function to encapsulate this implementation
  @tailrec
  def loop(cursor: List[_], count: Int): Int =
    cursor match {
      // Our end condition, copied from that while loop
      case Nil => count
      case _ :: tail =>
        // Copying the same logic from that while statement
        loop(cursor = tail, count = count + 1)
    }
  // Go, go, go
  loop(l, 0)
}
```

Now this version is fine. Note the use of the `@tailrec` annotation — all this annotation does is make the compiler throw an error in case the function is not actually tail-recursive. Hand-checking tail calls is error-prone, and it bears repeating: this is an issue of correctness.

Let's do a more complex example to really internalize this. Let's calculate the N-th number in the Fibonacci sequence — here's the memory-unsafe recursive version:

```scala
def fib(n: Int): BigInt =
  if (n <= 0) 0
  else if (n == 1) 1
  else fib(n - 1) + fib(n - 2)

fib(0) // 0
fib(1) // 1
fib(2) // 1
fib(3) // 2
fib(4) // 3
fib(5) // 5
fib(100000) // StackOverflowError (also, really slow)
```

First turn this into a dirty while loop:

```scala
def fib(n: Int): BigInt = {
  // Kids, don't do this at home 😅
  if (n <= 0) return 0
  // Going from 0 to n, instead of vice-versa
  var a: BigInt = 0 // instead of fib(n - 2)
  var b: BigInt = 1 // instead of fib(n - 1)
  var i = n
  while (i > 1) {
    val tmp = a
    a = b
    b = tmp + b
    i -= 1
  }
  b
}
```

Then turn its 3 variables into function parameters:

```scala
def fib(n: Int): BigInt = {
  @tailrec
  def loop(a: BigInt, b: BigInt, i: Int): BigInt =
    // first condition
    if (i <= 0) 0
    // end of while loop
    else if (i == 1) b
    // logic inside the while loop
    else loop(a = b, b = a + b, i = i - 1)

  loop(0, 1, n)
}
```

## (Actual) Recursion

Tail-recursions are just loops. But some algorithms are actually recursive, and can't be described via a while loop that uses constant memory. What makes an algorithm actually recursive is usage of a stack. In imperative programming, for low-level implementations, that's how you can tell if recursion is required: does it use a manually managed stack or not? But even in such cases we can use a while loop, or a `@tailrec` function. Doing so has some advantages.
Let's start with a Tree data-structure:

```scala
sealed trait Tree[+A]
case class Node[+A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]
case object Empty extends Tree[Nothing]
```

Defining a fold, which we could use to sum up all values for example, will be challenging:

```scala
def foldTree[A, R](tree: Tree[A], seed: R)(f: (R, A) => R): R =
  tree match {
    case Empty => seed
    case Node(value, left, right) =>
      // Recursive call for the left child
      val leftR = foldTree(left, f(seed, value))(f)
      // Recursive call for the right child
      foldTree(right, leftR)(f)
  }
```

This is the simple version. And it should be clear that the size of the call-stack will be directly proportional to the height of the tree. Turning it into a `@tailrec` version means we need to manually manage a stack:

```scala
def foldTree[A, R](tree: Tree[A], seed: R)(f: (R, A) => R): R = {
  @tailrec
  def loop(stack: List[Tree[A]], state: R): R =
    stack match {
      // End condition, nothing left to do
      case Nil => state
      // Ignore empty elements
      case Empty :: tail => loop(tail, state)
      // Step in our loop
      case Node(value, left, right) :: tail =>
        // Adds left and right nodes to the stack, evolves the state
        loop(left :: right :: tail, f(state, value))
    }
  // Go, go, go!
  loop(List(tree), seed)
}
```

If you want to internalize this notion — recursion == usage of a stack — a great exercise is the backtracking algorithm. Implement it with recursive functions, or with dirty loops and a manually managed stack, and compare. The plot thickens for backtracking solutions using 2 stacks 🙂

Does this manually managed stack buy us anything? Well yes, if you need such recursive algorithms, such a stack can take up your whole heap memory, which means it can handle a bigger input. But note that with the right input, your process can still blow up, this time with an out-of-memory error (OOM).

NOTE — in real life, shining examples of algorithms using manually managed stacks are Cats-Effect's IO and Monix's Task, since they literally replace the JVM's call-stack 😄

## Zen of Functional Programming?

In FP, you turn variables into (immutable) function parameters. And state gets evolved via function calls 💡

That's it, that's all there is to FP (plus the design patterns, and the pain of dealing with I/O 🙂).

Enjoy!
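As a closing usage sketch (my addition, not part of the original post), here is the stack-based foldTree in action, assuming the Tree ADT and the foldTree definition from above are in scope:

```scala
// A small tree:     1
//                  / \
//                 2   3
//                /     \
//               4       5
val tree: Tree[Int] =
  Node(1,
    Node(2, Node(4, Empty, Empty), Empty),
    Node(3, Empty, Node(5, Empty, Empty)))

foldTree(tree, 0)(_ + _)              // 15, sums all node values
foldTree(tree, Int.MinValue)(_ max _) // 5, the largest value
```

Both calls run in constant stack space; the "recursion" lives entirely in the heap-allocated List used as a stack.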
https://www.gamedev.net/blogs/entry/474890-on-net-serialization-part-7/
# On .Net Serialization, Part 7

Listing 1

```csharp
public class NetworkFormatter
{
    public NetworkFormatter(Assembly sharedAssembly);
    public void Serialize(Stream destinationStream, object graph);

    internal Type[] GetSerializableTypes();
    internal Dictionary<Type, int> GetTypeIdMap();
    internal Type GetTypeFromId(int id);
    internal int GetIdFromType(Type t);

    private Dictionary<Type, int> typeIdMap;
    private Type[] serializableTypes;
    private Assembly sharedAssembly;
}
```

We have some refactorings to do. Primarily, if you look over the NetworkFormatter class you will notice that it has a bunch of methods and fields that do not rightly belong to it. In fact, all of the internal methods, and the two fields that go with them, should really be in another class entirely. Simply put, it is not within the domain or responsibility of the NetworkFormatter class to enumerate the serializable types of an assembly and then assign them unique IDs.

The refactoring should really be a two-step process. The first step involves creating a new class and moving the methods over to it; you would then rewrite the methods of the NetworkFormatter class in terms of this new class (it shall henceforth be called AssemblyTypeInformation). Then you would refactor the tests so that they test the AssemblyTypeInformation class, removing the old tests from the NetworkFormatterTests class. However, I'm going to show both steps at the same time. Don't be fooled though, I did it in many smaller steps; it just takes too much space to go over every little step. Anyway, if you've been following my previous entries then you should already have an idea of the process to follow.

Listing 2

```csharp
internal class AssemblyTypeInformation
{
    public AssemblyTypeInformation(Assembly sharedAssembly);

    public Type GetTypeFromId(int id);
    public int GetIdFromType(Type t)
    {
        try
        {
            return typeIdMap[t];
        }
        catch (KeyNotFoundException)
        {
            throw new SerializationException(
                "Type is not from assembly: " + sharedAssembly.FullName);
        }
    }

    internal List<Type> GetSerializableTypes();
    internal Dictionary<Type, int> GetTypeIdMap();

    private Dictionary<Type, int> typeIdMap;
    private List<Type> serializableTypes;
    private Assembly sharedAssembly;
}

public class NetworkFormatter
{
    public NetworkFormatter(Assembly sharedAssembly)
    {
        typeInformationStore = new AssemblyTypeInformation(sharedAssembly);
    }

    public void Serialize(Stream destinationStream, object graph);

    private AssemblyTypeInformation typeInformationStore;
}
```

So, first things first: writing the AssemblyTypeInformation class. As you can see in Listing 2, the interface is basically the same; in fact, it's pretty much a copy and paste. The constructor doesn't do anything particularly odd, it just calls GetSerializableTypes() and GetTypeIdMap() and stores the results in the fields, along with the Assembly. The two functions GetTypeFromId() and GetIdFromType() are made part of our public interface, as they are what will be called by the serialization code. The two other methods, GetSerializableTypes() and GetTypeIdMap(), are left internal so that the tests may access them, even though they will only be used internally by the AssemblyTypeInformation class. Looking over this we can also see that the NetworkFormatter no longer needs to maintain a reference to the assembly, so we can remove that private field as well, reducing the interface down to a constructor, the Serialize() method, and a single private field holding a reference to our AssemblyTypeInformation.
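Before moving on to serialization itself, it may help to see the shape of this idea outside .NET. Here is a minimal Scala analogue of AssemblyTypeInformation (my own illustration with hypothetical names; the article's real code is the C# in Listing 2). The key requirement is that serializer and deserializer derive the identical type-to-ID table; this sketch guarantees that by sorting on class names, and it also adds Object manually and throws on unknown types, mirroring the article's behavior:

```scala
// Minimal sketch (hypothetical API, not the article's C#):
// a deterministic bidirectional mapping between types and integer IDs.
final class SharedTypeInformation(sharedTypes: Seq[Class[_]]) {
  // Sort by name so every process derives the same ID assignment,
  // and add Object manually, as in the article's fix.
  private val types: Vector[Class[_]] =
    (sharedTypes :+ classOf[Object]).distinct.sortBy(_.getName).toVector

  private val idOf: Map[Class[_], Int] = types.zipWithIndex.toMap

  def getTypeFromId(id: Int): Class[_] = types(id)

  def getIdFromType(t: Class[_]): Int =
    idOf.getOrElse(t, throw new IllegalArgumentException(
      s"Type ${t.getName} is not in the shared type set"))
}
```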
Listing 3

```csharp
public void Serialize(Stream destinationStream, object graph)
{
    int id = typeInformationStore.GetIdFromType(graph.GetType());
    BinaryWriter writer = new BinaryWriter(destinationStream);
    writer.Write(id);
}
```

Now that we have the IDs for our types generated, we can begin serializing our types out to the stream. First we get the ID of the type (by the way, I've modified GetIdFromType() to throw a SerializationException should it not find the type), then we write it out using a BinaryWriter. All happy with joy, we remove the Ignore attribute from our TestSerialization() method and run it... and it fails. Looking at the error output we see: "System.Runtime.Serialization.SerializationException : Type (System.Object) is not from assembly: Kent.Serialization.Network, Version=1.0.1915.28857, Culture=neutral, PublicKeyToken=null". Hrm... seems we have a minor problem. Since Object is not a member of our assembly, it's not being enumerated, and hence not being assigned an ID. The simple fix is to add it manually in the AssemblyTypeInformation.GetSerializableTypes() method. Rebuilding and running our tests again, we see a board of green lights. Of course, we have a lot more work to do still. For instance, we need to enumerate the fields of the type and output them to the stream as well. We will also have to figure out how we wish to deal with nulls. But that's for next time.

## 1 Comment

Post the next one!!
https://www.physicsforums.com/threads/anti-matter-black-hole.815462/
Anti matter black hole

1. May 24, 2015 Stephanus

Dear PF Forum,
In less than 1 second after the big bang, baryons were created. And there's an asymmetry in it. Can anyone help me?
1. Is it physically possible for a galaxy to be made entirely from anti matter?
2. If it's true, is it statistically possible for a galaxy to be made entirely from anti matter? If it's true, then I'd like to know the answers to the questions here. No information can come out of a black hole. Only their mass, and with it their gravity.
3. Is it possible for us to tell whether this black hole's originator was matter or anti matter? Or do you scientists just say: well, no information can come out of a black hole, so we don't know?
4. Does a "matter" black hole differ from an "anti matter" black hole, if question number 3 is true?
5. If question number 4, what if we throw an anti matter black hole at a matter black hole?
A. Will they explode?
B. Will they get bigger? By "bigger" I mean their Schwarzschild radius; of course its size is always zero if it's a singularity.
Because, by logic, the explosion will generate more energy: E = (m(matter black hole) + m(anti matter black hole)) × c². And this E will eventually become m, right? m = E/c². But where does this m come from? Matter? Anti matter?
I just want to know if my logic in question 5 is right. Consider this: a star of 1 solar mass comes from the north of our sun and hits our sun. What happens? Okay, there's a blast. Earth is surely destroyed. The solar system is destroyed; some TNOs might survive. Okay... So what I'd like to know is: what if a matter and an anti matter black hole collide? Will it explode like the sun, or will it get bigger? But first of all we must be sure that an anti matter galaxy is physically possible, statistically possible, that matter and anti matter black holes are different and... that they collide.

2. May 24, 2015 Simon Bridge

Yes. There is nothing in principle preventing an antimatter galaxy... though we would expect that the same mechanism that gives the local asymmetry about us would make a whole antimatter galaxy very unlikely.
That would follow from the above - unless you choose a model where the antimatter doesn't survive in large enough conglomerations. To work out the statistics would be problematical at best.
No.
No. (Follows from 3 - included for completeness.)
In principle - yes. You are thinking of an antimatter-matter annihilation... that is a "yes and no". The mass-energy gets combined into one object. It does not matter whether the matter and antimatter inside annihilates or not; the resulting energy cannot escape. Yes: when two black holes gobble each other, they get bigger. There is no useful distinction to be made between energy and mass. They are the same thing - this is what $E=mc^2$ means. You end up with one bigger black hole.
A galaxy is not a single object - and you have shifted from black holes to stars. The result would be much like what happens when regular matter galaxies collide, only with more energetic parts where matter/antimatter stars collide. This is now highly speculative - we can address it here in terms of the light it sheds on current models of matter and gravitation.
Bottom line: if you chuck antimatter into a black hole, the antimatter joins the "singularity" as positive energy, increasing the mass of the black hole. Same if you chuck matter into an antimatter black hole. In GR there is no distinction to be made between energy and mass - it all goes into the stress-energy tensor.
You are starting to approach concepts which require a better framework than the one you have.
See:
http://preposterousuniverse.com/grnotes/grtinypdf.pdf [Broken]
http://www.ita.uni-heidelberg.de/~dullemond/lectures/cosmology_2011/Chapter_3.pdf
Last edited by a moderator: May 7, 2017

3. May 24, 2015 Stephanus

Thank you very much, Simon Bridge, for answering each question. Thanks, Steven

4. May 25, 2015 ChrisVer

Black holes don't have matter to collide... It's vacuum up to the singularity... Even if you have matter/antimatter annihilation there - resulting in other particles - those particles can't escape... Yet it's not certain whether you can talk about particle interactions in a black hole as you do in our everyday experience of particle physics.

5. May 25, 2015 Stephanus

And the size of this singularity is... 0 cm? or 0 km?

6. May 25, 2015 ChrisVer

0 cm = 0 km...

7. May 25, 2015 ChrisVer

The singularity is a point...

8. May 25, 2015 Stephanus

Yes, yes I know. It's just a pun. So, they say no information comes out of a black hole. They say that gravity propagates at the speed of light, the speed of the electromagnetic force. And the only information that we can get from a black hole is its mass? Light can't escape a black hole, while gravity, which propagates at the speed of light, can escape?

9. May 25, 2015 Stephanus

I'm sorry, did you say/type "a point in 4D"? When I hit reply, the "4D" disappeared. Is it really something from 4D? I imagine a flat paper in 2 dimensions: we somehow penetrate it with a cone, and what they see in 2D is a circle/hyperbola/parabola/ellipse, or perhaps just a point. Is it like that for a black hole? A 4D object entering 3D space? Or is it just...

10. May 25, 2015 ChrisVer

In fact I think I should correct it even further and drop the notion of a point out of this conversation... the singularity is not even part of your spacetime.

11. May 25, 2015 ChrisVer

As for the gravity escaping, I don't think there is any kind of escape from the BH... It's how the geometry of your spacetime looks when you solve for the Schwarzschild metric in the Einstein Field Equations.

12. May 25, 2015 Staff: Mentor

Neither. See below.
Not really. Strictly speaking, it's not part of spacetime at all, as you say later on. But you can take the limit as $r \rightarrow 0$ of the set of all points in the spacetime at the same $r$. The limit of this set is not a point; it's a spacelike line (the limit of an infinite set of 2-spheres as the radius of the 2-spheres goes to zero).
It doesn't have to. See this Usenet Physics FAQ article: http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/black_gravity.html
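To tie off question 5 with explicit bookkeeping (an editorial addition, not a post from the thread; it also ignores the mass-energy a real merger radiates away as gravitational waves): if black holes of masses $m_1$ and $m_2$ merge, then

$$E_{\text{total}} = (m_1 + m_2) c^2, \qquad m_{\text{final}} = \frac{E_{\text{total}}}{c^2} = m_1 + m_2, \qquad r_s = \frac{2 G (m_1 + m_2)}{c^2},$$

whether the contents are matter, antimatter, or a mix. Annihilation inside the horizon converts mass into radiation, but that radiation cannot escape, so the externally measured mass, and with it the Schwarzschild radius, is unaffected.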
https://www.ias.ac.in/listing/bibliography/reso/Smarajit_Karmakar
• Smarajit Karmakar

Articles written in Resonance – Journal of Science Education

• Physics of Disordered Systems: Works of Giorgio Parisi and Nobel Prize 2021

This year's Nobel Prize in Physics has been jointly awarded to Syukuro Manabe from Princeton University, Princeton, NJ, USA, Klaus Hasselmann from the Max Planck Institute for Meteorology, Hamburg, Germany, and Giorgio Parisi from the Sapienza University of Rome, Rome, Italy. One-half of the prize has been awarded jointly to Syukuro Manabe and Klaus Hasselmann "for the physical modelling of Earth's climate, quantifying variability and reliably predicting global warming." The other half went to Giorgio Parisi "for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales." This article will discuss some of the most influential works of Giorgio Parisi on disordered systems, along with his work on climate change that might have played an important role in developing a good physical model of Earth's climate. The article will also touch upon Parisi's important contributions to other fields, without much detail, to highlight his scientific versatility.
https://proofwiki.org/wiki/Equivalence_of_Definitions_of_Strictly_Well-Founded_Relation
# Equivalence of Definitions of Strictly Well-Founded Relation

## Theorem

The following definitions of the concept of Strictly Well-Founded Relation are equivalent:

### Definition 1

$\RR$ is a strictly well-founded relation on $S$ if and only if every non-empty subset of $S$ has a strictly minimal element under $\RR$.

### Definition 2

$\RR$ is a strictly well-founded relation on $S$ if and only if:

$\forall T: \paren {T \subseteq S \land T \ne \O} \implies \exists y \in T: \forall z \in T: \neg \paren {z \mathrel \RR y}$

where $\O$ is the empty set.

### Definition 3

Let $\RR$ be a well-founded relation which is also antireflexive. Then $\RR$ is a strictly well-founded relation on $S$.

## Proof

### $(1)$ if and only if $(2)$

By definition, $y \in T$ is a strictly minimal element (under $\RR$) of $T$ if and only if:

$\forall z \in T: \neg \paren {z \mathrel \RR y}$

Thus it is seen that definition $1$ means exactly the same as definition $2$.
$\Box$

### $(1)$ implies $(3)$

Let $\RR$ be a strictly well-founded relation by definition $1$.
From Strictly Well-Founded Relation is Well-Founded, $\RR$ is a well-founded relation on $S$.
From Strictly Well-Founded Relation is Antireflexive, $\RR$ is antireflexive.
Thus $\RR$ is a strictly well-founded relation by definition $3$.
$\Box$

### $(3)$ implies $(2)$

Let $\RR$ be a strictly well-founded relation by definition $3$.
Then by definition:

$\RR$ is a well-founded relation on $S$
$\RR$ is antireflexive.

Thus we have, by definition of a well-founded relation on $S$:

$\forall T \subseteq S: T \ne \O: \exists z \in T: \forall y \in T \setminus \set z: \tuple {y, z} \notin \RR$

But because $\RR$ is antireflexive:

$\tuple {z, z} \notin \RR$

Hence it follows that:

$\forall T \subseteq S: T \ne \O: \exists z \in T: \forall y \in \paren {T \setminus \set z} \cup \set z: \tuple {y, z} \notin \RR$

That is, from Set Difference Union Second Set is Union:

$\forall T \subseteq S: T \ne \O: \exists z \in T: \forall y \in T \cup \set z: \tuple {y, z} \notin \RR$

But as $\set z \subseteq T$, we have from Union with Superset is Superset that $T \cup \set z = T$. Hence:

$\forall T \subseteq S: T \ne \O: \exists z \in T: \forall y \in T: \tuple {y, z} \notin \RR$

Thus $\RR$ is a strictly well-founded relation by definition $2$.

Together, $(1) \iff (2)$, $(1) \implies (3)$ and $(3) \implies (2)$ establish the equivalence of all three definitions.
$\blacksquare$
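To see the definitions in action, consider a worked example (an editorial addition, not part of the original page): on $S = \N$, the usual strict ordering $<$ is strictly well-founded, since every non-empty $T \subseteq \N$ contains its smallest element $y$, and no $z \in T$ satisfies $z < y$. By contrast, $\le$ on $\N$ is well-founded but not antireflexive, and it indeed fails Definition $1$: in $T = \set 0$ the only candidate $y = 0$ satisfies $0 \le 0$, so $T$ has no strictly minimal element under $\le$.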
https://worldbuilding.stackexchange.com/questions/65117/how-to-desiccate-your-world
# How to desiccate your world?

Only a few million people survived, and they are struggling to stay alive. Food is hard to find and plants (mostly) do not grow. Water cannot be found on the surface; the only way to get it is to find an underground river or lake. The world is hot and dry, there are almost no clouds, and rain is replaced by dust storms. After years, there are no oceans, only salt deserts.

Q: What event/series of events could possibly cause this environment? The cause should:

• Not kill most of the people - people should starve to death or kill each other for food
• Kill most of the flora
• Make the planet hotter and dry
• Generate no radiation or contamination of food/water
• Be plausible today - no near-future tech and no magic.
• BONUS: in areas directly exposed to the sun, burn exposed skin. (This is totally optional)

Possible cause: My initial thought was some kind of pathogenic agent that kills most of the flora, but I really don't know if it would cause Earth to become an almost-desert world. I also considered a Coronal Mass Ejection, but I don't like the fact that it would kill a lot of people and could contaminate water/food. I'd like a slow apocalypse.

Edit: Heat, dust, no water... this is what the outcome should be, not how the process should behave. You could even freeze the entire world, if you can explain that after an ice age the Earth would be an almost-desert world.

• Not on this Earth. Earth has just about the same amount of water it always had (for suitably chosen values of "always"). It is very hard to get rid of the water. Warmth actually helps the hydrological cycle; you may consider plunging Earth into a deep freeze. Dec 19, 2016 at 11:55
• We definitely can't remove the oceans whilst making it hotter. You can have less fresh water if you raise the sea level. Dec 19, 2016 at 11:57
• Heat is the wrong tack. You might get more water in the atmosphere, but the water doesn't actually go anywhere. The best way to actually get rid of the water would be through some cataclysm that drastically reduces the planet's mass or otherwise throws the gravitational equilibrium into disarray. So a science experiment gone awry, perhaps? Or impact with a very large meteor (or medium-sized moon)? Dec 19, 2016 at 13:03
• Most obvious cause is a Trump presidency. Dec 19, 2016 at 22:20

I would go with the pathogen idea. The more plants the pathogen, or pathogens, kill the better, but at least the grasses and trees must die. Grasses and trees trap water and soil. Remove these and the next rain washes away any soil, preventing food growth. In a drought period grasses store water underground, so animals can survive by eating grass. Kill the grass and that water storage goes. You've heard of the Dust Bowl in 1930s America? That was caused by killing the grasses.

By removing grasses and trees you will make much of the Deep South into a desert. In fact, most of the subtropical zone will probably become desert. (Climate zones shown on the image below.) Removing trees will also create problems in the tropical and temperate zones. These areas are dominated by forest. In the tropical zone, which is mostly rainforest, the death of the trees will cause all the topsoil to wash away. Rainforest soil has practically no nutrients below the top layer, so most plant life here will be destroyed. Losing the rainforest will rapidly increase global warming, so temperatures will increase a lot, giving you your higher global temperatures.
Fewer trees will also reduce river lag times, increasing flooding, and since most cities in the temperate zone lie on rivers, a lot of the world's large cities may have to be abandoned. Even the polar region won't be safe. Killing trees increases global warming and will melt the sea ice, so bad news for polar bears. This will increase sea levels, driving people inland and destroying some food-growing areas.

The apocalypse will kill some people straight away in floods and dust storms, but most people will survive the initial consequences and live on, slowly starving to death under the hot sun, watching the nutrient-poor soil blow away in the scorching winds. At least they will have clean-ish water, though.

• This would not force people to live underground, though, and use subterranean water catchments. The clean water on the surface could be managed by people and used for stable agriculture (via building and landscape architecture) Dec 20, 2016 at 17:32
• @NewAlexandria I suspect a lot of the world would move underground to avoid the dust storms and washed-away soil that is making it hard to breathe and clogging up water supplies. Dec 20, 2016 at 17:34
• Pragmatically speaking, the engineering challenges of living in surface-based protected environments outweigh the physiological, mental, and toxicological challenges of living underground for extended durations (or generations). Dec 21, 2016 at 16:30
• It doesn't actually say they should live underground though. Just that all clean water should be underground. Dec 21, 2016 at 16:44
• "At least they will have clean-ish water though" - surface water in the above scenario wouldn't be toxic unless there was a local geological issue (e.g. arsenic). I'm only emphasizing this in case a GM/designer has any chance to be called out on accuracy. Surface water would exist, coming in torrents, and maybe in some places more than others - i.e. water kingdoms. Dec 21, 2016 at 20:49

# By making it cold

The colder it gets, the drier it gets. OK, so this isn't strictly what you wanted, but the driest periods the planet has experienced were the ice ages. The colder you make it, the more of the water ends up locked into the ice caps. Sea levels drop by hundreds of meters, and the ice expands down into the temperate regions. It's much easier to remove the water from circulation with cold than heat; Snowball Earth is surprisingly dry.

# Hotter and drier

You can make the land drier by making the planet hotter; however, the ultimate outcome of this is to make the sea levels rise rather than fall. You can get to a dust-bowl scenario on land reasonably easily this way. Inland seas and lakes would be lost, but you would still get fertile regions in the more temperate zones and on coasts. To get rid of the water on a hot Earth you have to make it go somewhere. This leaves you only with the Sandworm option or similar, i.e. "A wizard did it".

# Both

Go through a phase where it's hot enough to dust-bowl the land, then drop the temperatures to ice age levels and freeze out the oceans. You'll probably kill everyone and everything this way. Internet research says that ice ages can cut in very quickly, but all the sources I've seen are questionable.

• "Internet research says that ice ages can cut in very quickly" - look at the documentary titled The Day After Tomorrow, which documents an ice age taking effect within days/weeks. It shows global warming melting global icebergs, breaking the global weather cycle, thus leading to a catastrophic ice age.
Dec 19, 2016 at 23:29
• Well, you can lock the water up in ringwoodite or some similar crystal structure. The planet is getting hotter anyway, so the two need not be related, but this is a feasible solution to "it has to go somewhere". This could begin happening due to some massive upheaval of the mantle in the mid-Pacific, allowing the trapping of water in some crystallization of cooling minerals. Dec 19, 2016 at 23:38
• Ummmm... "The Day After Tomorrow" is not what I'd call a scientific documentary which presents information from a careful survey of well-accepted scientific conclusions. More like "a ridiculous over-blown Hollywood disaster-fest based on cursory reading of 'People' magazine, at best". Dec 19, 2016 at 23:44

Introduce sandworms. Sorry for the spoiler tag, but if you haven't read the Dune saga, I don't want to spoil it for you. Sandworms had a stage in their lifecycle when they acquired water and trapped it in the underground spice melange masses. They were the main reason for sand, for dry Arrakis. There was enough water on Arrakis for it to have an Earth-like environment. In Chapterhouse: Dune the process of drying out a planet is shown in all its details. If you can introduce a life form that would trap water and not release it, you could dry out the planet's surface pretty well. A dry planet would probably appear hotter, too. Or at least have higher temperature variation, with hotter days and colder nights. Plants would die, and thus would no longer stop the ground from being eroded. Oceans would get shallower and saltier. Looks like this will fulfill all your points pretty well. Only... it'd be somewhat derivative. Of course, it doesn't have to be a literal sandworm. Any organism that hoards water and creates cysts in hard-to-reach places would do. Especially if it renders the water poisonous, thus preventing roots from sucking it back out. Creating such an organism might be possible. At least I don't see it as less feasible than a pathogen able to kill all flora while leaving fauna intact.

• Also pretty improbable. The issue being that the creature in question has to reach an equilibrium point with its surroundings. If they consume all the water then either they're going to run out of water and start dying off, thus releasing the water again, or they don't need the water at all in the first place, in which case why do they absorb it? Always bugged me about Dune, that. Dec 19, 2016 at 10:46
• @JoeBloggs all this was explained in Dune and in Chapterhouse. Simple analogy: all newborns can digest casein, many adults cannot. Baby sandworms can use water and need it, adult ones can't. Dec 19, 2016 at 10:47
• @Mołot This kinda seems like magic? Dec 19, 2016 at 10:51
• A pathogen able to kill all the flora isn't, either, and was considered by the OP, so why is that an issue? Dec 19, 2016 at 11:05
• A water-hungry organism could do the job. Even if it doesn't exist now, it could be plausible in an alternate history (close to reality), so this answer is not bad at all. Dec 19, 2016 at 11:18

I would go with "electric wind", which is what's currently being cited as the cause of Venus's desiccation. In essence, strongly charged electrical wind in the upper atmosphere performed an electrolytic separation of atmospheric water, and drove the ionized oxygen and hydrogen off into space. On Earth, you could imagine this process interrupting the hydrological cycle, preventing or drastically curtailing rainfall.
It's not realistic to truly desiccate a planet on non-geological time scales, but if you prevent rain from falling, you'd effectively kill off any land-based plant life. If you wanted, you could say it was kicked off by a massive solar storm (CME) hitting Earth and charging the atmosphere, which you could also use for your bonus objective of burning those in direct sunlight - this one had enough UV radiation to decimate the ozone layer, and the electric wind phenomenon that it started keeps new ozone from forming in the upper atmosphere, so now people get sunburns and eventually develop skin cancers from prolonged exposure to direct sunlight. CMEs are not actually deadly to people, as you seem to think based on the comment in your question, and we have historical records of being hit by some pretty big ones, most notably in 1859. People did fine; electrical equipment, however, did not.

• Dissociating the hydrogen and oxygen and letting the hydrogen be ripped off by the solar wind is the only way to permanently get rid of the water. Dec 19, 2016 at 22:32
• I'll go with the pathogen idea and this one. If I could accept both answers I would. Thank you ;) Dec 20, 2016 at 8:10

What you are describing will actually happen to Earth, but your characters will need to be incredibly patient, since this is projected to happen about one billion years in the future. As the Sun burns through its hydrogen fuel, more and more "helium ash" accumulates in the core. The helium will not fuse under present conditions, so the radiative pressure from the core gradually decreases while the gravitational pressure remains constant. The Sun's core is squeezed with increasing pressure, and the rate of fusion also increases at a slow but steady rate, increasing the radiative output of the Sun (to maintain equilibrium). The Sun is thought to have increased in brightness by up to 30% since the birth of the Solar System.

The ever-increasing heat energy reaching the Earth increases evaporation, and gradually the atmosphere becomes saturated with moisture. By 1,000,000,000 AD, the stratosphere is saturated with moisture, and then an irreversible chain reaction takes place. High-energy ultraviolet radiation striking the atmosphere starts breaking the water molecules at high altitude apart into hydrogen and oxygen. Hydrogen is such a light gas that the extra energy imparted by the breaking of water molecules will allow it to achieve escape velocity and leave the Earth's atmosphere for good. Since the heat output of the Sun is continually filling the atmosphere with water vapour, this process will continue unimpeded while the hydrosphere gradually evaporates.

This process includes other strange effects, such as CO2 leaving the atmosphere in about 500 million years as the carbon cycle is interrupted and plant life becoming extinct, but even then, we are looking so far in the future that post-humans will have evolved into post-post-humans, and perhaps have developed some sort of mega-engineering like star lifting or moving planets around to deal with issues like this. Since I can assume you want this to happen now, rather than 500 MY from now, you will need to find a way to accelerate the Sun's stellar evolution or artificially increase the Sun's output.
Since the rate of fusion inside a star is moderated by the mass of the star (larger stars have more mass and are therefore hotter), you either need to dump trillions of tons of mass into the star (more than the current mass of the Solar System), or do something like dropping a neutron star or mini black hole into the Sun's core to "pull" more mass into the core. How this happens is an exercise for the reader. Alien invasion About a hundred years ago tens of thousands of Monoliths descended. They can't be scratched or dented, they appear to be smooth black oblongs with dimensions that scale according to the cosmic 4 sequence. After a while scientists realised that the Monoliths had a peculiar property: They seemed to attract atmospheric H20 and, through functions unknown, capture it. Once some unknown criteria is met the Monoliths rise and depart our planet and new ones descend. A constant stream of Monoliths are now entering and leaving Earth's atmosphere, each stealing some precious water as they go. Nobody is sure why the Monoliths were sent, but the effect is undeniable: Slowly but surely all the water on Earth is being removed. If the space above the monolith is covered for any reason the monolith will slowly and inexorably rise, pushing all before it, and a replacement monolith descends elsewhere. As the sun continues to make water evaporate and the Monoliths continue to capture it humanity is becoming more desperate, enclosing the last water stores in closed-systems deep beneath the earth. Plants can be grown in hermetically sealed domes, but as the global average humidity continues to drop all but the most hardy plant and animal life is beginning to die off, and the Monoliths have started to land in increasing numbers nearer and nearer to the domes. We don't know why this started, or if it will stop, but we do know that this thirst will kill us. • Disclaimer: Monoliths may not be an original idea. Dec 19, 2016 at 11:20 • I really like this idea, and, even if it's not what I was looking for, I still appreciate it. Dec 19, 2016 at 13:16 Within a few year (decades or centuries) it is virtually impossible due to the really HUGE amount of water stored in the oceans. Human activity can do the trick: Imagine a hydrogen based economy (with hydrogen powered cars, ships, planes etc.) on a really big scale that looses permanently some hydrogen to the atmosphere and finally to interplanetary space. This can do it, but we need to consume much more energy than we do today ... and humans will notice the falling sea level hopefully soon enough to change their economy! Without worrying too much about physical mechanics: • You could postulate a world where [some massive mantle-cracking geological trauma] has caused rifts in the earth's plates, and other fissuring — which has allow the oceans and groundwater to largely 'drain' into the layers of the earth. A few advantages to this mostly-imaginary physics: 1. It could happen fast: Quirk of the earth's core rotation. A celestial event. Etc. 2. Draining/absorbing of the oceans would take a while, and allow the slow-death / adaptation you mention. 3. It would enable you to have a wealth of 'new world' events that players wouldn't presume: • All the water in the mantle has to change, eventually. There's too much heat for it to remain there forever. • random deep-earth catchments could be discovered. Empires built on access to secret caches • Geysers of mythological proportions could occur suddenly. The watery equivalent of volcanos. 
• New forms of earthquakes would occur. These tremors could be more electrical / geomagnetic, since the earth now has more layers of water in it (which is a dielectric, like an electrical battery).
• So, think earthquakes that cause electrical storms; change animal migration patterns; cause earth-light phenomena; blackouts and other cognitive / perceptual changes in people & animals.
• The now-dry lakes, seas, and oceans are likely candidates for deep-earth movements of water, which cause huge sinkholes of sand, perhaps so expansive that they are like sand-rivers.
• All of it can culminate in a world-breaking rebirth event, where the waters return to the surface. Can be as epic (sudden in time) or long & drawn-out as needed.

Seems like fun times full of mythological heights.

• Some people think this actually happened - Google "catastrophic plate tectonics". If the geysers hit high enough in the stratosphere they could generate enough electricity to create their own electric wind as described above and lose the hydrogen to space while leaving the remaining atmosphere relatively intact. Dec 19, 2016 at 22:41
• I hadn't heard of it. It's a nice footnote, so thanks, but I think (out of the catastrophe models / physics) Immanuel Velikovsky's work is the most compelling. Dec 20, 2016 at 17:27

It is going to happen on Earth in about a billion years: runaway greenhouse effect. Increased Sun activity will cause more water to evaporate, until the greenhouse effect caused by water vapor increases temperatures to the point where the oceans boil. The problem with this scenario is that it will get too hot for your purpose (several hundreds of degrees) and humans won't be able to live like in a desert. It will be more like an oven or a furnace than a desert. And if you don't want to wait a billion years, you can trigger the runaway greenhouse effect by building up huge amounts of greenhouse gases in the atmosphere to raise temperatures a few tens of degrees, and water vapor will complete the work.

## The thing standing between the Earth as we know it and your apocalyptic wasteland is the Earth's magnetic field.

The Earth contains a spinning iron-nickel core that interacts electromagnetically; this redirects charged particles and hinders radiation. Were you to weaken this process somehow, possibly by stopping the Earth's rotation, you could cause the field to weaken. When a planet has a weak magnetic field it is no longer shielded against the electromagnetic radiation put off by stars near it. Take, for instance, Mercury. Mercury has very little atmosphere, in part because it lacks a magnetic field; this means the solar radiation is intense enough to strip off any gas or water vapor as well as heat the surface enough to boil liquids.

The potential scenario could be that the Earth ceases to spin, at once or over time (so 1 day = 1 year). In this scenario the Earth's oceans and atmosphere would gradually be stripped away by radiation, leaving behind a half radioactive oven, half sunless icebox with little or no breathable atmosphere. The other potential scenario involves not eliminating Earth's protection, but rather increasing the strength of the radiation and charged particles it repels. If the Sun became more powerful, which it eventually will, a similar process would occur.
You can choose how advanced this process would be; you'll have to choose a time during which the atmosphere and some surface water still exist to some extent, and radiation and the lack of atmosphere are intense enough to kill without protective clothing and provisions, but not enough to be completely uninhabitable (like the surface of Mercury) without lots of expensive equipment. You could also offset the Earth's orbit. If you skewed it a bit, it could get closer to the Sun, but only temporarily. This would not be drastic enough realistically, but it is a powerful climate change mechanism. Regular oscillation in the Earth's orbit is the principal cause of ice ages. Moving the Earth closer to the Sun is a plausible idea, though.

Collapse the planetary magnetic field. Collapsing the magnetic field also ends the magnetosphere / Van Allen belts. Those protect the upper atmosphere from erosion. Over time, the atmosphere will lose pressure. Liquid water will evaporate, and this will have a short-term compensating effect, but eventually you will be left with a very thin atmosphere and no water. For an actual example of this process, see Mars.

• It'll still take a good few hundred million years for all of Earth's oceans to be lost into space. This is fortunate, because the Earth's magnetic field does fade for brief periods between pole reversals, which happen somewhat regularly every few hundred thousand years. Dec 19, 2016 at 18:10

Giant Space Tree! In this novella Niven posits a spacefaring lifeform which shoots seeds across interstellar voids, sucking up all of the water on a planet before launching new seeds out and perpetuating its lifecycle. One has already gotten Mars, and Earth is next. Niven's take on the concept is rather absurdist; that whole collection of short stories has a humorous bent to it. The central idea of that particular short story is pretty intriguing, though, and a serious take on it might yield interesting results.

The most plausible cause for all of that to happen is that Earth gets closer to the Sun. Only 1 inch closer will be enough. You could make a scenario where some object (asteroid) hits the Earth and Earth gets closer to the Sun. All the things mentioned above would happen as a consequence of this event.

• An asteroid fast enough to change the Earth's orbit would kill everybody on the planet (if not everybody, at least 90% of the human population...) Dec 22, 2016 at 17:16

Ozone strippers: dump enough CFCs into the upper atmosphere and the UV that punches through will kill most of the life on Earth. The added UV will also destroy water molecules reasonably rapidly; the hydrogen escapes the atmosphere, and the oxygen decomposes the dead carbon and nitrogen from the land, causing runaway warming as nitrous and carbon oxides build up. The end result looks a lot like Venus: eventually the oxygen runs out of carbon and starts to oxidise sulfur from the land surface, and the gas pressure just goes up and up.
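Several answers above lean on the same mechanism: UV splits water and the hydrogen escapes to space. A quick order-of-magnitude check (my numbers, standard constants, assuming an exospheric temperature of roughly 1000 K) shows why this loss is selective:

$$v_{\mathrm{esc}} = \sqrt{\frac{2 G M_\oplus}{R_\oplus}} \approx 11.2\ \mathrm{km/s}, \qquad v_{\mathrm{rms,H}} = \sqrt{\frac{3 k_B T}{m_{\mathrm{H}}}} \approx \sqrt{\frac{3 \times 1.38 \times 10^{-23} \times 1000}{1.67 \times 10^{-27}}}\ \mathrm{m/s} \approx 5\ \mathrm{km/s}.$$

The fast tail of the Maxwell-Boltzmann distribution for atomic hydrogen therefore keeps exceeding escape velocity and is steadily lost, while atomic oxygen, sixteen times heavier, sits near 1.2 km/s and mostly stays behind; photolysis plus escape removes the water but leaves most of the atmosphere.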
http://math.stackexchange.com/questions/207448/least-squares-derivation
# Least Squares Derivation

I was reading this to review the derivation of the ordinary least squares estimator, but I'm having trouble differentiating (4). Can someone please help explain why

$\dfrac{\partial (\hat{\beta}'X'X\hat{\beta})}{\partial \hat{\beta}} = 2X'X\hat{\beta}$

I understand all the other steps. Thanks!

Notice first that you might guess this formula because if $X$ and $\hat{\beta}$ are $1 \times 1$ then it reduces to the formula for the derivative of $x^2$ from calculus.

Let's derive a multivariable product rule that will help us here. Suppose $f:\mathbb{R}^n \to \mathbb{R}$ and suppose $$f(x) = \langle g(x), h(x) \rangle$$ for some functions $g:\mathbb{R}^n \to \mathbb{R}^m$ and $h:\mathbb{R}^n \to \mathbb{R}^m$. Then if $\Delta x \in \mathbb{R}^n$ is small (and $g$ and $h$ are differentiable at $x$), we have

\begin{align*} f(x + \Delta x) &\approx \langle g(x) + g'(x) \Delta x, h(x) + h'(x) \Delta x \rangle \\ &= \langle g(x),h(x) \rangle + \langle g(x), h'(x) \Delta x \rangle + \langle g'(x) \Delta x, h(x) \rangle + \langle g'(x) \Delta x, h'(x) \Delta x \rangle \\ &\approx \langle g(x),h(x) \rangle + \langle g(x), h'(x) \Delta x \rangle +\langle g'(x) \Delta x, h(x) \rangle \\ &= \langle g(x),h(x) \rangle + \langle h'(x)^T g(x), \Delta x \rangle + \langle g'(x)^T h(x), \Delta x \rangle \\ &= f(x) + \langle h'(x)^T g(x) + g'(x)^T h(x), \Delta x \rangle. \end{align*}

Comparing this result with $$f(x + \Delta x) \approx f(x) + \langle \nabla f(x), \Delta x \rangle$$ we discover that $$\nabla f(x) = h'(x)^T g(x) + g'(x)^T h(x).$$ This is our product rule. (I'm using the convention that the gradient is a column vector, which is not completely standard.)

Now let $g(x) = Ax$ for some matrix $A$, so $g'(x) = A$. What's the gradient of the function

\begin{align*} f(x) &= \langle g(x),g(x) \rangle \\ &= \langle Ax, Ax \rangle \\ &= x^T A^T A x \quad \text{?} \end{align*}

By our product rule the answer is

\begin{align*} \nabla f(x) &= 2g'(x)^T g(x) \\ &= 2 A^T A x. \end{align*}

This is the result that you wanted to derive.

First we need to note that $X'X$ is a symmetric matrix, as $(X'X)'=X'(X')'=X'X$. Now, using basic matrix differentiation, we know that for a symmetric matrix $A$, $\dfrac{\partial (x'Ax)}{\partial x} = 2Ax = 2x'A$, where $x$ is a vector and the dimensions are proper. Hence the result follows trivially.

Note: actually we can say more. If $A$ is any $n \times n$ matrix and $x$ is an $n \times 1$ vector, then $\dfrac{\partial (x'Ax)}{\partial x} = x'(A'+A)$.

Proof: $$x'Ax = \sum_{j=1}^n \sum_{i=1}^n a_{ij}x_ix_j$$ Differentiating with respect to $x_k$, we get $$\dfrac{\partial (x'Ax)}{\partial x_k}= \sum_{j=1}^n a_{kj}x_j + \sum_{i=1}^n a_{ik}x_i\ , \quad \forall k = 1,\dots,n$$ Hence $$\dfrac{\partial (x'Ax)}{\partial x} = x'A'+x'A$$ and the result follows. More details on matrix differentiation can be found at http://en.wikipedia.org/wiki/Matrix_calculus . Thanks.
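To make the index computation in the second answer concrete, here is the $n = 2$ case written out (my own check, not part of the original answers). With $x = (x_1, x_2)'$:

$$x'Ax = a_{11} x_1^2 + (a_{12} + a_{21}) x_1 x_2 + a_{22} x_2^2$$

$$\frac{\partial (x'Ax)}{\partial x_1} = 2 a_{11} x_1 + (a_{12} + a_{21}) x_2, \qquad \frac{\partial (x'Ax)}{\partial x_2} = (a_{12} + a_{21}) x_1 + 2 a_{22} x_2,$$

which is exactly $x'(A' + A)$ written component-wise, and collapses to $2x'A$ (equivalently $2Ax$) when $A' = A$, the case $A = X'X$ of the least squares derivation.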
2015-07-31 22:08:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 247.069899225123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988312.76/warc/CC-MAIN-20150728002308-00311-ip-10-236-191-2.ec2.internal.warc.gz"}
http://www.ncatlab.org/nlab/show/classifying+morphism
# Contents

## Idea

A classifying map or classifying morphism for a given object is a morphism into a classifying space that classifies this object.

## Examples

• For subobjects one typically speaks of characteristic maps or characteristic functions. The corresponding classifying space is a subobject classifier. See there for more.
2013-06-18 05:26:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617452621459961, "perplexity": 2896.615906467235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706933615/warc/CC-MAIN-20130516122213-00002-ip-10-60-113-184.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/52515/forcing-a-figure-to-appear-on-the-next-page?answertab=votes
# Forcing a figure to appear on the next page

I have a figure which takes up an entire page of space. When I use \includegraphics to put it in my document, instead of appearing on the next page, it appears at the end of the section. However, I want the full-page figure to appear on the following page. In other words, in the below MWE, I want:

(page 1) all of TEXT A, followed by the amount of TEXT B necessary to fill the rest of the page
(page 2) the full-page figure
(page 3) the remainder of TEXT B

```latex
\documentclass{article}
\usepackage{graphicx} % for \includegraphics
\usepackage{lipsum}   % for filler text
\begin{document}
\section{First Section}
\lipsum[1-4] % TEXT A
\includegraphics{full_page_image}
\lipsum[1-4] % TEXT B
\section{Second Section}
\end{document}
```

You can use the \afterpage{<content>} macro from the afterpage package to place things directly after the current page. I'm not sure if you only want the image or a real figure with maybe a caption.

For an image only:

```latex
\documentclass{article}
\usepackage[demo]{graphicx} % for \includegraphics
\usepackage{lipsum}         % for filler text
\usepackage{afterpage}
\begin{document}
\section{First Section}
\lipsum[1-4] % TEXT A
\afterpage{\noindent\includegraphics[width=\textwidth,height=\textheight]{image}}
\lipsum[1-4] % TEXT B
\section{Second Section}
\end{document}
```

Using a figure environment. The issue here is that earlier figures might still be unplaced and would then be placed before it.

```latex
\documentclass{article}
\usepackage[demo]{graphicx} % for \includegraphics
\usepackage{lipsum}         % for filler text
\usepackage{afterpage}
\begin{document}
\section{First Section}
\lipsum[1-4] % TEXT A
\afterpage{%
\begin{figure}[p]%
\includegraphics[width=.99\textwidth,height=.99\textheight]{image}%
%\caption{..}
\end{figure}%
\clearpage
}
\lipsum[1-4] % TEXT B
\section{Second Section}
\end{document}
```

This is similar to Werner's answer, but the \afterpage{..\clearpage} ensures that the figure is placed on the next page (OK, this might normally be the case anyway). The figure could also be outside \afterpage, and then an \afterpage{\clearpage} would be enough to flush it, but the code above is the safest IMHO.

For this you need to make the image float. So you should use (substituting \rule{\textwidth}{\textheight} for your full-page image):

```latex
\documentclass{article}
%\usepackage{graphicx} % for \includegraphics
\usepackage{lipsum} % for filler text
\begin{document}
\section{First Section}
\lipsum[1-4] % TEXT A
\begin{figure}[p]
\rule{\textwidth}{\textheight}
\end{figure}
\lipsum[1-4] % TEXT B
\section{Second Section}
\end{document}
```

Note that it is perfectly fine to insert a figure environment (that floats) without specifying a \caption.
2016-05-02 19:47:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9482764005661011, "perplexity": 3726.3930478810526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117405.91/warc/CC-MAIN-20160428161517-00049-ip-10-239-7-51.ec2.internal.warc.gz"}
http://git.astri.umk.pl/pl/content/population-tev-pulsar-wind-nebulae-hess-galactic-plane-survey
Centrum Astronomii Wydział Fizyki, Astronomii i Informatyki Stosowanej, Uniwersytet Mikołaja Kopernika

## The population of TeV pulsar wind nebulae in the H.E.S.S. Galactic Plane Survey

H. E. S. S. Collaboration: Abramowski A., Aharonian F., Akhperjanian A. G., Backes M., Balzer A., Becherini Y., Becker Tjus J., Berge D., and 156 co-authors, including, from CA, Katarzyński K. and Rudak B. (2017)

The nine-year H.E.S.S. Galactic Plane Survey (HGPS) yielded the most uniform observation scan of the inner Milky Way in the TeV gamma-ray band to date. The sky maps and source catalogue of the HGPS allow for a systematic study of the population of TeV pulsar wind nebulae found throughout the last decade. To investigate the nature and evolution of pulsar wind nebulae, for the first time we also present several upper limits for regions around pulsars without a detected TeV wind nebula. Our data exhibit a correlation of TeV surface brightness with pulsar spin-down power $\dot{E}$. This seems to be caused both by an increase of extension with decreasing $\dot{E}$, and hence with time, compatible with a power law $R_\mathrm{PWN}(\dot{E}) \sim \dot{E}^{-0.65 \pm 0.20}$, and by a mild decrease of TeV gamma-ray luminosity with decreasing $\dot{E}$, compatible with $L_{1-10\,\mathrm{TeV}} \sim \dot{E}^{0.59 \pm 0.21}$. We also find that the offsets of pulsars with respect to the wind nebula centres with ages around 10 kyr are frequently larger than can be plausibly explained by pulsar proper motion and could be due to an asymmetric environment. In the present data, it seems that a large pulsar offset is correlated with a high apparent TeV efficiency $L_{1-10\,\mathrm{TeV}}/\dot{E}$. In addition to 14 HGPS sources considered as firmly identified pulsar wind nebulae and 5 additional pulsar wind nebulae taken from literature, we find 10 HGPS sources that form likely TeV pulsar wind nebula candidates. Using a model that subsumes the present common understanding of the very high-energy radiative evolution of pulsar wind nebulae, we find that the trends and variations of the TeV observables and limits can be reproduced to a good level, drawing a consistent picture of present-day TeV data and theory.

###### Keywords

Astrophysics - High Energy Astrophysical Phenomena
2017-12-12 16:05:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173728823661804, "perplexity": 5238.309641280308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517350.12/warc/CC-MAIN-20171212153808-20171212173808-00159.warc.gz"}
https://www.competoid.com/quiz_answers/14-0-12911/Question_answers/
• $$\Large \left(9.95\right)^{2} \times \left(2.01\right)^{3} = 2 \times \left(?\right)^{2}$$

A) 12
B) 15
C) 25
D) 20

Correct Answer: D) 20

##### Description for Correct answer

$$\Large \left(9.95\right)^{2} \times \left(2.01\right)^{3} = 2 \times \left(?\right)^{2}$$

Here, 9.95 is approximated to 10 and 2.01 is approximated to 2, because calculating the squares or cubes of whole numbers is much easier than calculating the square or cube of decimal numbers.

$$\Large 2 \times \left(?\right)^{2} = \left(10\right)^{2} \times \left(2\right)^{3} = 100 \times 8 = 800$$

so $$\Large \left(?\right)^{2} = 400$$

Therefore, $$\Large ? = 20$$
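As a quick check of how good the approximation is (an illustrative snippet of my own, not part of the original page):

```python
exact = (9.95 ** 2) * (2.01 ** 3) / 2   # the exact value of (?)^2
print(exact ** 0.5)                      # ≈ 20.05, so ? ≈ 20
```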
2018-05-26 23:15:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8832136392593384, "perplexity": 1759.1490595318562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867949.10/warc/CC-MAIN-20180526225551-20180527005551-00546.warc.gz"}
http://tex.stackexchange.com/questions/15608/latex-combined-hyper-reference-and-footnote-lacks-verbatim-argument-behaviour
# LaTeX Combined hyper reference and footnote lacks verbatim argument behaviour [duplicate]

Possible Duplicate: Getting those %#!^& signs in the footnote!

Here's my first version of a clever command for URL-footnotes. It combines `pdftex`'s hyperlink behaviour with a `\footnote` displaying the URL.

```latex
\usepackage[pdftex]{hyperref}
\newcommand{\hrefn}[2]{\href{#1}{#2}\footnote{See {\tt #1}}} % HyperRef and Footnote in one
```

However, it doesn't treat the input as verbatim as `\href` does. How do I prevent my command `\hrefn` from treating characters such as `#` and `~` as special control characters? I already `\usepackage{underscore}`, so `_` is not a problem.

/Nordlöw

Put a \ in front of the character you want to be preserved. – alexy13 Apr 12 '11 at 10:34

## 1 Answer

The `url` package has a command `\urldef` which allows you to define robust verbatim URLs that can be used in footnotes. Also, since you're using LaTeX, you shouldn't be using `\tt` – use `\ttfamily` or `\texttt` instead.

Great answer! Thanks! – Nordlöw Apr 13 '11 at 6:46
2014-09-22 02:35:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335686326026917, "perplexity": 3268.168986376937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136523.61/warc/CC-MAIN-20140914011216-00318-ip-10-234-18-248.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-sqrt-2x-5-sqrt-x-2-3
# How do you solve: sqrt(2x+5) - sqrt(x-2) = 3?

Apr 25, 2015

$\sqrt{2 x + 5} - \sqrt{x - 2} = 3$

1. First isolate one of the square roots: $\sqrt{2 x + 5} = 3 + \sqrt{x - 2}$
2. Then square each side: ${\left(\sqrt{2 x + 5}\right)}^{2} = \left(3 + \sqrt{x - 2}\right) \left(3 + \sqrt{x - 2}\right)$, so $2 x + 5 = 9 + 6 \sqrt{x - 2} + \left(x - 2\right)$
3. Simplify the equation, leaving the square root on one side: $x - 2 = 6 \sqrt{x - 2}$
4. Square each side: $\left(x - 2\right) \left(x - 2\right) = 36 \left(x - 2\right)$
5. Divide each side by $(x-2)$: $x - 2 = 36$, so $x = 38$. Note that dividing by $(x-2)$ assumes $x \ne 2$; substituting $x = 2$ into the original equation gives $\sqrt{9} - \sqrt{0} = 3$, so $x = 2$ is a second valid solution.

Always check your answer in the original problem: $\sqrt{2 x + 5} - \sqrt{x - 2} = 3$

$\sqrt{2 \left(38\right) + 5} - \sqrt{38 - 2} = 3$

$\sqrt{81} - \sqrt{36} = 3$

$9 - 6 = 3$

$3 = 3$
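A quick numeric check of both candidate roots (an illustrative snippet of my own, not part of the original answer):

```python
import math

for cand in (2, 38):
    lhs = math.sqrt(2 * cand + 5) - math.sqrt(cand - 2)
    print(cand, lhs)   # both print 3.0, so both roots satisfy the equation
```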
2019-12-07 01:14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7962554693222046, "perplexity": 5201.272639362834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540491871.35/warc/CC-MAIN-20191207005439-20191207033439-00518.warc.gz"}
http://www.math.toronto.edu/courses/apm346h1/20181/PDE-textbook/Chapter6/S6.A.html
## 6.A. Linear second order ODEs

### Introduction

This is not required reading, but at some moment you may like to see how the problems we discuss here for PDEs are solved for ODEs (consider it a toy model).

We consider the ODE $$Ly:=y'' + a_1(x)y' + a_2(x)y=f(x). \label{eq-6.A.1}$$ Let $\{y_1(x),y_2(x)\}$ be a fundamental system of solutions of the corresponding homogeneous equation $$Ly:=y'' + a_1(x)y' + a_2(x)y=0. \label{eq-6.A.2}$$ Recall that then the Wronskian $$W(y_1,y_2; x):= \left| \begin{matrix} y_1(x) & y_2(x)\\ y'_1(x) &y'_2(x)\end{matrix}\right| \label{eq-6.A.3}$$ does not vanish.

### Cauchy problem (aka IVP)

Consider equation (\ref{eq-6.A.1}) with the initial conditions $$y(x_0)=b_1, \qquad y'(x_0)=b_2. \label{eq-6.A.4}$$ Without any loss of generality one can assume that \begin{aligned} &y_1(x_0)=1, &&y'_1(x_0)=0,\\ &y_2(x_0)=0, &&y'_2(x_0)=1. \end{aligned} \label{eq-6.A.5} Indeed, replacing $\{y_1(x),y_2(x)\}$ by $\{z_1(x),z_2(x)\}$ with $z_j=\alpha_{j1}y_1+\alpha_{j2}y_2$ we reach (\ref{eq-6.A.5}) by solving the systems \begin{align*} &\alpha_{11}y_1(x_0)+\alpha_{12}y_2(x_0)=1, &&& &\alpha_{21}y_1(x_0)+\alpha_{22}y_2(x_0)=0\\ &\alpha_{11}y'_1(x_0)+\alpha_{12}y'_2(x_0)=0, &&& &\alpha_{21}y'_1(x_0)+\alpha_{22}y'_2(x_0)=1 \end{align*} which have unique solutions because $W(y_1,y_2;x_0)\ne 0$.

Then the general solution to (\ref{eq-6.A.2}) is $y=C_1y_1+C_2y_2$ with constants $C_1,C_2$. To find the general solution to (\ref{eq-6.A.1}) we apply the method of variation of parameters; then \begin{aligned} &C'_1y_1+C'_2y_2=0,\\ &C'_1y'_1+C'_2y'_2=f(x) \end{aligned} \label{eq-6.A.6} and then $$C'_1= -\frac{1}{W} y_2f,\qquad C'_2= \frac{1}{W} y_1f \label{eq-6.A.7}$$ and \begin{aligned} &C_1(x)= -\int _{x_0}^x \frac{1}{W(x')} y_2(x')f(x')\,dx'+c_1,\\ &C_2(x)= \ \ \int _{x_0}^x \frac{1}{W(x')} y_1(x')f(x')\,dx'+c_2 \end{aligned} \label{eq-6.A.8} and $$y(x)=\int _{x_0}^x G(x;x')f(x')\,dx'+b_1y_1(x)+b_2y_2(x) \label{eq-6.A.9}$$ with $$G(x;x')=\frac{1}{W(x')} \bigl(y_2(x)y_1(x')-y_1(x)y_2(x')\bigr) \label{eq-6.A.10}$$ and $c_1=b_1$, $c_2=b_2$ found from the initial data.

Definition 1. $G(x,x')$ is a Green function (in the case of the IVP it is also called the Cauchy function).

This formula (\ref{eq-6.A.9}) could be rewritten as $$y(x)=\int _{x_0}^x G(x;x')f(x')\,dx'+G'_x(x;x_0)b_1 +G(x;x_0)b_2. \label{eq-6.A.11}$$

### BVP

Consider equation (\ref{eq-6.A.1}) with the boundary conditions $$y(x_1)=b_1,\qquad y(x_2)=b_2 \label{eq-6.A.12}$$ where $x_1< x_2$ are the ends of the segment $[x_1,x_2]$. Consider first the homogeneous equation (\ref{eq-6.A.2}); then $y=c_1y_1+c_2y_2$ and (\ref{eq-6.A.12}) becomes \begin{equation*} \begin{aligned} &c_1y_1(x_1)+c_2y_2(x_1)=b_1,\\ &c_1y_1(x_2)+c_2y_2(x_2)=b_2 \end{aligned} \end{equation*} and this system is solvable for any $b_1,b_2$, with a unique solution, if and only if the determinant is nonzero: $$\left|\begin{matrix} y_1(x_1) & y_2(x_1)\\y_1(x_2) & y_2(x_2)\end{matrix}\right|\ne 0. \label{eq-6.A.13}$$ Assume that this condition is fulfilled. Then without any loss of generality one can assume that $$y_1(x_1)=1,\quad y_1(x_2)=0, \quad y_2(x_1)=0,\quad y_2(x_2)=1; \label{eq-6.A.14}$$ otherwise, as before, we can replace them by their linear combinations. Consider the inhomogeneous equation.

Solving it by the method of variation of parameters we have again (\ref{eq-6.A.7}), but we write its solution in a form slightly different from (\ref{eq-6.A.8}): \begin{aligned} &C_1(x)= -\int _{x_1}^x \frac{1}{W(x')} y_2(x')f(x')\,dx'+c_1,\\ &C_2(x)= -\int _x^{x_2} \frac{1}{W(x')} y_1(x')f(x')\,dx'+c_2. \end{aligned} \label{eq-6.A.15} Then $$y(x)=\int _{x_1}^{x_2} G(x;x')f(x')\,dx'+c_1y_1(x)+c_2y_2(x) \label{eq-6.A.16}$$ where G(x;x')=-\frac{1}{W(x')} \left\{\begin{aligned} &y_2(x')y_1(x) && x_1<x'<x,\\ &y_1(x')y_2(x)&& x<x'<x_2. \end{aligned}\right. \label{eq-6.A.17} From the boundary conditions one can check easily that $c_1=b_1$, $c_2=b_2$. One can also check that $y_1(x)= -G'_{x'}(x;x')|_{x'=x_1}$, $y_2(x)=-G'_{x'}(x;x')|_{x'=x_2}$ and therefore \begin{multline} y(x)=\int _{x_1}^{x_2} G(x;x')f(x')\,dx'\\ -G'_{x'}(x;x')|_{x'=x_1} b_1+ G'_{x'}(x;x')|_{x'=x_2} b_2.\qquad\qquad \label{eq-6.A.18} \end{multline}

Definition 2. $G(x,x')$ is a Green function.

### BVP. II

Assume now that (\ref{eq-6.A.13}) is violated. Then we cannot expect the problem to be uniquely solvable, but let us salvage what we can. Without any loss of generality we can assume now that $$y_2(x_1)= y_2(x_2)=0. \label{eq-6.A.19}$$ Using for a solution the same formulae (\ref{eq-6.A.8}), (\ref{eq-6.A.10}) but with $x_0$ replaced by $x_1$, plugging into the boundary conditions and using (\ref{eq-6.A.19}) we have \begin{equation*} c_1y_1(x_1)=b_1, \qquad\bigl(c_1-\int_{x_1}^{x_2}\frac{1}{W(x')}y_2(x')f(x')\,dx'\bigr)y_1(x_2)=b_2 \end{equation*} which can be satisfied if and only if $$\int_{x_1}^{x_2}\frac{1}{W(x')}y_2(x')f(x')\,dx'-\frac{b_1}{y_1(x_1)}+\frac{b_2}{y_1(x_2)}=0 \label{eq-6.A.20}$$ but the solution is not unique: it is defined modulo $c_2 y_2(x)$.

Remark 1. More general boundary conditions $$\alpha_1 y'(x_1)+\beta_1y(x_1)=b_1,\qquad \alpha_2 y'(x_2)+\beta_2y(x_2)=b_2 \label{eq-6.A.21}$$ could be analyzed in a similar way.
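To make the BVP construction concrete, here is a small numerical illustration (my own sketch, not part of the original notes; it assumes NumPy). Take $Ly = y''$ on $[0,1]$ with $y(0)=y(1)=0$; then $y_1(x)=1-x$ and $y_2(x)=x$ satisfy the normalization (\ref{eq-6.A.14}) with $W \equiv 1$, and (\ref{eq-6.A.16})–(\ref{eq-6.A.17}) reproduce the exact solution:

```python
import numpy as np

# Green's function (6.A.17) for y'' = f on [0,1], y(0) = y(1) = 0,
# built from y1(x) = 1 - x, y2(x) = x, Wronskian W = 1.
def G(x, xp):
    return np.where(xp < x, -xp * (1.0 - x), -(1.0 - xp) * x)

f = lambda xp: np.ones_like(xp)            # right-hand side f = 1

x = np.linspace(0.0, 1.0, 11)
xp = np.linspace(0.0, 1.0, 2001)           # quadrature nodes
y = np.array([np.trapz(G(xi, xp) * f(xp), xp) for xi in x])

exact = x * (x - 1.0) / 2.0                # solves y'' = 1 with zero BC
print(np.max(np.abs(y - exact)))           # tiny (quadrature error only)
```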
2021-01-22 23:06:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 20, "x-ck12": 0, "texerror": 0, "math_score": 0.9789131879806519, "perplexity": 1366.0750576650503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00179.warc.gz"}
https://akkompaniator.com/viewtopic/a1fc99-discrete-data-is-from-qualities-that-can-be-measured
# Discrete data is from qualities that can be measured

Quiz question: discrete data is from qualities that can be

A) Both
B) Counted
C) None of the above
D) Measured

Correct answer: Counted. Discrete data is counted; continuous data is measured.

Discrete data represent items that can be counted; they take on possible values that can be listed out. The list of possible values may be fixed (also called finite), or it may go from 0, 1, 2, on to infinity (making it countably infinite). Discrete variables are elements that can be counted, and discrete data may be treated as ordered categorical data in statistical analysis, but some information is lost in doing so. Discrete data key characteristic: you can count the data.

Continuous data are numerical data that can theoretically be measured in infinitely small units. Continuous variables such as time, temperature and distance can theoretically be measured at infinitely small points; in theory, a second could be divided into infinite points in time. Continuous data is measured on a continual scale like distance, time, weight, length etc. Measured data is regarded as being better than counted data: it is more precise and contains more information. So the amount of milk a cow yields over the course of a year would be considered continuous data, since it is measured rather than counted.

A variable is any characteristic, number, or quantity that can be measured or counted. It is called a variable because the value may vary between data units in a population, and may change in value over time. Variable data is continuous data; this means that the data values can be any real number like 2.12, 3.33, -3.3 etc.

A set of data is said to be ordinal if its values can be put in an order. Ordinal represents "order"; ordinal data is known as qualitative or categorical data. It can be grouped, named and also ranked. Note that ordinal data can be counted and set in order, but it cannot be measured.

Qualitative data is information about qualities: information that can't actually be measured (however, try telling Photoshop you can't measure color with numbers). On the other hand, quantitative data is data that contains numerical values and uses range. The nominal level of measurement is characterized by data that consist of names, labels, or categories only.

To understand the slight differences between the types of data in question, one should consider the current definitions thereof. We can also classify a given data set according to various characteristics, depending on the purpose of our study; for example, when we classify data according to different locations, it is termed a geographical classification.

From quality control: a control chart for attributes is used to monitor characteristics that have discrete values and can be counted, while an X-bar chart is used to monitor characteristics that can be measured and have a continuum of values, such as height, weight, or volume. A measurement system is adequate (for continuous data) when gage R&R is within 10 percent of the total study variation (10 to 30 percent allowed if the process is not critical) and the number of distinct categories is greater than four. "Out of control" describes the situation in which a plot of data falls outside preset control limits.

Worked examples from the page:

• There are 40 houses on my street and there are dogs living in 12 of them, so 12/40 = 30% of the houses have dogs.

• Anna compares two gyms to determine which would be the best deal. Fit Fast charges a set fee per class: y = 7.5x. Stepping Up charges a monthly fee plus an additional fee per class: y = 5.5x + 10. Substituting, 7.5x = 5.5x + 10 gives 2x = 10, so Anna could take x = 5 classes for the total monthly cost to be the same for both gyms, namely 7.5 × 5 = $37.50.
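A one-line check of the gym comparison (an illustrative snippet of my own, not from the page):

```python
# Fit Fast: y = 7.5x;  Stepping Up: y = 5.5x + 10
x = 10 / (7.5 - 5.5)        # classes at which both gyms cost the same
print(x, 7.5 * x)            # 5.0 classes, 37.5 dollars per month
```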
2023-01-28 03:17:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4674854874610901, "perplexity": 1023.2698436165954}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00348.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/nhm.2008.3.395
# Spectral plot properties: Towards a qualitative classification of networks

• We introduce a tentative classification scheme for empirical networks based on global qualitative properties detected through the spectrum of the Laplacian of the graph underlying the network. Our method identifies several distinct types of networks across different domains of application, indicates hidden regularity properties and provides evidence for processes like node duplication behind the evolution or construction of a given class of networks.

Mathematics Subject Classification: Primary: 05C75, 47A75; Secondary: 90C35, 68R10.
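The abstract gives no code; as a rough illustration of the object such a scheme compares (a sketch of my own, assuming NetworkX and NumPy, with a random Erdős–Rényi graph standing in for an empirical network and using the normalized Laplacian, whose spectrum lies in [0, 2]):

```python
import networkx as nx
import numpy as np

# Stand-in "empirical" network; real inputs would come from data.
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)

# Spectrum of the normalized graph Laplacian.
L = nx.normalized_laplacian_matrix(G).toarray()
eigs = np.linalg.eigvalsh(L)

# A coarse "spectral plot": an eigenvalue histogram on [0, 2].
hist, edges = np.histogram(eigs, bins=40, range=(0.0, 2.0), density=True)
print(round(eigs.min(), 4), round(eigs.max(), 4))
```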
2022-12-08 17:20:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.22413426637649536, "perplexity": 1225.7835349096363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00011.warc.gz"}
https://www.physicsforums.com/threads/is-it-possible-for-different-particle-antiparticl-pair-to-annihilate-with-each-other.642449/
# Is it possible for different particle-antiparticle pairs to annihilate with each other?

## Main Question or Discussion Point

I've been reading about particle-antiparticle annihilation and I've been wondering whether it is possible for a particle to annihilate with an antiparticle that is not its partner. Can annihilation occur in a collision of a quark with a different flavor of antiquark (e.g. an up antiquark and a strange quark), of a lepton with a different flavor of antilepton (e.g. an electron antineutrino and a muon), or of a quark with an antilepton (e.g. a bottom quark and an antitau)?

mfb Mentor

Usually not, as this would violate some conserved quantum numbers (charge, lepton numbers and so on). In addition, the "classic" annihilation via the electromagnetic force requires a particle plus its corresponding antiparticle. However, the weak interaction can allow some processes which could be considered as annihilation of different particles: neutral kaons (strange + anti-down quark, or anti-strange + down quark) can decay to two photons, for example.

In some cases, yes. It depends on whether the annihilation violates conservation laws. For example, in the quark sector flavor conservation is violated by interaction with the W boson. Therefore, for example, $u\bar{s}\to W^{+}\to e^{+}\nu_{e}$ is possible. In the lepton sector, in the limit of massless neutrinos, flavor is conserved and a muon can't annihilate with an electron neutrino (or anti electron neutrino). Quarks and leptons can't annihilate with each other due to baryon and lepton number conservation (which is simply an outcome of other conservation laws such as charge and color, and doesn't have to be assumed).

So, do you mean that, without the weak interaction, such annihilation is impossible?

mfb Mentor

Some reactions "particle + different antiparticle -> 2 photons" are possible, but they have to include the weak interaction (together with the electromagnetic interaction). One up-type quark (up, charm, top) plus a different up-type antiquark can annihilate to 2 photons. One down-type quark (down, strange, bottom) plus a different down-type antiquark can annihilate to 2 photons. Hmm, I think if we neglect neutrino mixing, that is all.
2020-08-05 08:28:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.819471001625061, "perplexity": 2578.4434362117645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00194.warc.gz"}
https://electronics.stackexchange.com/questions/444832/microstrip-low-pass-filter-design-issues-with-sonnet?r=SearchResults
# Microstrip low pass filter design issues with Sonnet

I am trying to replicate the microstrip low-pass filter in SonnetLite from the book by Jia-Sheng Hong - Microstrip Filters for RF/Microwave Applications, 2nd Edition (Wiley Series in Microwave and Optical Engineering) (2011, Wiley). However, the S12 graph shows that it is not a low-pass filter at all. Could anyone advise why? I have attached the SonnetLite design file here.

box resonance warning

analysis logs:

```
Run 1: Sat Jun 22 00:27:07 2019. SL291418.107194.
Em version 15.53-Lite (32-bit Windows) on DESKTOP-DG59KC2 local.
Project: C:\Users\kevin\Documents\sonnet\15.53-Lite\three-pole microstrip lowpass filter using open-circuited stubs\three-pole microstrip lowpass filter using open-circuited stubs.son.
Frequency: 0.001 GHZ

Sonnet Warning EG2680: Circuit has potential box resonances.
Circuit: primary structure
The estimated box resonance frequencies are listed below. The calculations
assume lossless materials, with a box filled with only the specified
dielectric stackup materials.
4.140 GHZ TE Mode 1,0,1
4.673 GHZ TE Mode 1,0,2
Only lowest order modes considered. See the chapter on package resonances
in the Sonnet User's Guide.
Date: Sat Jun 22 00:27:07 2019

Sonnet Warning EG2680: Circuit has potential box resonances.
Circuit: left box wall SOC magnetic wall standard
The estimated box resonance frequencies are listed below. The calculations
assume lossless materials, with a box filled with only the specified
dielectric stackup materials.
4.167 GHZ TE Mode 0,1,1
4.700 GHZ TE Mode 0,1,2
Only lowest order modes considered. See the chapter on package resonances
in the Sonnet User's Guide.
Date: Sat Jun 22 00:27:07 2019

Post-Analysis: Errors detected: 0 Warnings detected: 2.
Analysis completed Sat Jun 22 00:27:10 2019.
```

• Have you actually got terminals connected to your filter? In other words, are you using SonnetLite correctly? That layout is a three-pole lowpass filter; there's no question it ought to behave at least qualitatively like the book says. – Neil_UK Jun 22 at 8:38
• @Neil_UK "Have you actually got terminals connected to your filter?" <- What do you mean? – kevin Jun 22 at 8:44
• I don't know Sonnet specifically. How does it connect to the circuit under test? Does it have 50 ohm ports, or generalized transmission lines? It says it's found box resonances, so it's doing something. But does it know that it has to push current and voltage into those copper things that you've labelled 1 and 2? Or is there a ground plane under it, or implied at the boundaries of the box? The one thing I do know about 3D solvers in general is that they can be a pig to set up. Get a simple 'hello world' circuit working first, like a length of plain transmission line. – Neil_UK Jun 22 at 8:48
• Someone helped me to solve most of the problems, and I have a working low pass filter now. But I am still very concerned about the correctness of the dielectric layer configuration. Any advice? – kevin Jun 22 at 11:22
• You appear to have only 0.1mm of air above your conductor, but if that's real I would expect a much worse S11 from your filter. Does it ignore the thickness above and you enter the box height somewhere else? Having said that, the S11 is pretty rough; if Sonnet is now set up properly I'd do another iteration on the dimensions. However, do model a simple bit of 50 ohm line from input to output, as a ground zero, and make sure you get a reasonable S11. – Neil_UK Jun 22 at 13:36
2019-08-25 01:26:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5819640755653381, "perplexity": 3693.1691000789388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00409.warc.gz"}
http://superuser.com/questions/766602/how-do-i-persuade-programs-open-an-actual-lnk-file-in-windows-7
# How do I “persuade” programs to open an actual .lnk file in Windows 7?

A .lnk file in Windows is an actual file intended to be a shortcut to another file. However, I really do want to view the contents of the .lnk file itself. I'm finding it literally impossible to do so; no matter what I try, my applications open the contents of the file it points to (drag/drop into text or hex editor, File | Open from text or hex editor, etc.). Is there some way I can tell a program to actually open the .lnk file instead of the file it points to?

You can always rename it to .txt or something. Usually this doesn't cause it to lose any data. – Chipperyman Jun 9 '14 at 23:31

@Chipperyman Except that that doesn't work. You cannot easily rename .lnk files with a new extension. – fredsbend Jun 11 '14 at 6:13

## Opening shortcuts

In order to edit a shortcut you obviously need to open it first, and that proves to be tricky. In some cases you can force programs into loading shortcut files by using a command-line argument:

```
"X:\Path\to\program.exe" "X:\my shortcut.lnk"
```

Whether the link target or the actual shortcut file is loaded depends on the program, though. Here's a list (in no particular order) of some free hex editors which support them out of the box:

• HxD: Open dialog Yes; Drag-and-drop No
• …: Open dialog No; Drag-and-drop Yes

## Workaround

In case you're unable to load the content of a shortcut, you can open a command prompt and rename the .lnk file to a different, non-existent extension such as .lne:

```
cd /d "X:\Folder\containing\shortcuts"
ren "my shortcut.lnk" "my shortcut.lne"
```

If you have multiple files you can also rename all of them at once:

```
ren *.lnk *.lne
```

You will then be able to treat those shortcuts just like regular files. When you're done, make sure to rename them back to restore their usual functionality.

A shortcut, or shell link, contains metadata information used to access a specific link target. It's parsed and interpreted by the Windows shell. From the official documentation:

The shell link structure stores various information that is useful to end users, including:
• A keyboard shortcut that can be used to launch an application.
• A descriptive comment.
• Settings that control application behavior.
• Optional data stored in extra data sections.

Shortcuts are stored as binary files and can't be edited using a standard text editor. A typical .lnk file looks something like this internally:

```
00000000  4C 00 00 00 01 14 02 00 00 00 00 00 C0 00 00 00  L...........À...
00000010  00 00 00 46 DC 03 00 02 20 00 00 00 C6 EF 52 BE  ...FÜ... ...ÆïR¾
00000020  10 04 CA 01 C6 EF 52 BE 10 04 CA 01 60 45 8A 67  ..Ê.ÆïR¾..Ê.EŠg
00000030  20 04 CA 01 00 9A 04 00 00 00 00 00 01 00 00 00   .Ê..š..........
```

The first twenty bytes are always the following ones:

```
4C 00 00 00 01 14 02 00 00 00 00 00 C0 00 00 00 00 00 00 46
```

I've tried this and it works for me on Windows 8.1:

Opening LNK files in Notepad:
• Just drag and drop them into the Notepad window. If you open them from the Open dialog, Notepad will open the EXE file pointed to by the LNK file.

Opening LNK files in the HxD hex editor:
• Open them as you would any file using the Open dialog (File | Open).

Opening LNK files using the command prompt:
• Navigate to the folder containing the LNK files and type the command: TYPE SHORTCUTNAME.LNK

Opening LNK files in just about any program:
• Start the command prompt, navigate to the folder where the program is located, and use the command: PROGRAM_NAME.EXE "path to LNK file"

The whole point of a .lnk file is for Windows to treat it as a link to another file, so it should be hard to edit! Perhaps it would help if you described WHY you want to edit it. You can change the settings of a .lnk file by right-clicking and choosing Properties. If you really want to edit it, you need a special tool. There are a few of these including:

NB: I've not tried any of these, just Googled them.

UPDATE: Don't know why I didn't think of this before, but you can edit the properties via PowerShell. From this previous answer on Stack Overflow:

```powershell
Copy-Item $sourcepath $destination               ## Get the lnk we want to use as a template
$shell = New-Object -COM WScript.Shell
$shortcut = $shell.CreateShortcut($destination)  ## Open the lnk
$shortcut.TargetPath = "C:\path\to\new\exe.exe"  ## Make changes
$shortcut.Description = "Our new link"           ## This is the "Comment" field
$shortcut.Save()                                 ## Save
```

As this uses the Shell COM object, you could also do this with WSH or even VBA in Office!

I want to edit its contents, preferably in a hex editor, because I think it might be corrupt and I don't trust Explorer to properly tell me its contents. – Jez Jun 9 '14 at 20:32

I suppose that recreating it is out then? If so, try one of the editors, though I'm not sure what would have corrupted it. – Julian Knight Jun 9 '14 at 20:34

Well, it has never been hard to edit, at least in Windows XP. It was in fact harder to convince a program to treat it similarly to a symlink. Running any console app, e.g. edit, with the path to the shortcut as argument will open the shortcut file. The programs that treat the shortcut similarly to a symlink parse it themselves (maybe via shell functions). Has Windows resorted to using symlinks looking like shortcuts after XP? – Ruslan Jun 10 '14 at 10:02

The price of progress! Not much point in having a mechanism to define links that most apps then ignore. I don't know of many reasons to need to edit .lnk's directly. – Julian Knight Jun 10 '14 at 12:56

@JulianKnight I had a use once for generating them programmatically for placement in a folder that acted as an index. I had to assume no rights to install software, but we were already using VBA. Modifying a template .lnk proved easier than generating one from scratch. – Chris H Jun 10 '14 at 15:05

.LNK files are interpreted by the shell. If you open up a command prompt and invoke your editing tool (let's just say Notepad for example) using the .LNK file as an argument, that should bypass the shell and open up the contents of the .LNK file itself:

```
notepad.exe shortcut.lnk
```

Nope, that doesn't work. It opens up the file the .lnk points to. – Jez Jun 9 '14 at 21:23

What application are you trying to open the link up in? – Wes Sayeed Jun 9 '14 at 21:41

That is incorrect. I've tried this answer and it does work. – Vinayak Jun 9 '14 at 22:40

@Vinayak That's a pretty bold statement. It may work for you in the situation you are using it in but it might not work for Jez. – Chipperyman Jun 10 '14 at 2:15

If you use CMD to run a program with the link file as a parameter, that parameter is passed verbatim to the program. It is up to the program to decide how to handle the link. I have tested this with FRHED, the freeware (and portable) hex editor: when you run it from the command line, passing a link as parameter, it prompts you whether you want to open the file linked to (Yes), the link itself (No), or Cancel. Oddly, if you use Open within the FRHED File menu, it opens the target file without asking. On XP I have FRHED in my SendTo context menu, and that works the same way as CMD. I imagine Win7 is similar (I use a Win7 system for a dedicated application, and I will do simple tests on it, as above, but I don't mess with its configuration).

That isn't Windows prompting you. That's Frhed asking you what to do. – Vinayak Jun 9 '14 at 22:52

Quite right: silly of me - I'll change my answer. – AFH Jun 9 '14 at 23:06

I find putting Notepad into my SendTo menu to be very useful, letting me open any file (including shortcuts) in Notepad. – Scott Jun 10 '14 at 20:35

Final (?) observation: any DOS-based view or edit program will always open the link, never the target, since the DOS file-open function does not know anything about links, so it makes no special handling for them, unlike the Windows file open. – AFH Jun 10 '14 at 21:19

.lnk files are just files until a higher-level component such as Explorer.EXE assigns a meaning to them. At a lower (NTFS) level, they still have a normal structure including a data stream. In particular, the whole content is in the foo.lnk::$DATA stream. Not all higher-level tools will recognize that syntax. If they just assume it's a weird filename and pass it on, they will get the .lnk contents. E.g. on the command line `MORE < foo.lnk::$DATA > con` would print the data, but it's a bit gibberish (parts are binary).

If you have reason to edit such files often, add a shortcut to notepad.exe to your SendTo folder (in Win 7: C:\Users\USER\AppData\Roaming\Microsoft\Windows\SendTo). This makes "Send to notepad.exe" available from your right-click context menu. The .lnk file will open, and can be edited and saved, in notepad.exe.
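As a small illustration of the fixed header layout described above (a sketch of my own, not from the thread; the file path is a placeholder), one can verify the 20-byte shell-link signature, i.e. HeaderSize 0x4C plus the 16-byte LinkCLSID, in Python:

```python
# The fixed 20-byte shell-link header quoted in the answer above:
# HeaderSize 0x0000004C followed by the LinkCLSID
# {00021401-0000-0000-C000-000000000046}.
MAGIC = bytes([
    0x4C, 0x00, 0x00, 0x00,
    0x01, 0x14, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
    0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x46,
])

with open("my shortcut.lnk", "rb") as fh:   # placeholder path
    header = fh.read(20)

print("looks like a .lnk file:", header == MAGIC)
```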
2015-06-02 07:58:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40370890498161316, "perplexity": 2493.814699166171}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195035525.0/warc/CC-MAIN-20150601214355-00040-ip-10-180-206-219.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/720140/proposition-3-5-1-in-bruns-and-herzog-cohen-macaulay-rings
# Proposition 3.5.1 in Bruns and Herzog, Cohen-Macaulay Rings

Bruns and Herzog, in their book Cohen-Macaulay Rings, page 128, consider a local Noetherian ring $(R,m,k)$ and an $R$-module $M$, and they define the functor $\Gamma_m(\cdot)$ by $\Gamma_m(M) = \varinjlim \operatorname{Hom}_R(R/m^s,M)$. Proposition 3.5.1 says that this functor is left exact.

Question: Since $\operatorname{Hom}_R(R/m^s,\cdot)$ is a left exact functor and $\varinjlim$ (here a filtered colimit) is an exact functor, does it not follow immediately that $\varinjlim \operatorname{Hom}_R(R/m^s,\cdot)$ is a left exact functor? Why do we need a proof like the one given by B&H?

I think you're right. But the proof of B&H is an elementary one that doesn't use the exactness of $\varinjlim$, and it is kind of instructive.
2020-09-23 07:06:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423871636390686, "perplexity": 117.24494359107065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209999.57/warc/CC-MAIN-20200923050545-20200923080545-00449.warc.gz"}
http://ask.cvxr.com/t/sqrt-of-x-2-a-2-in-cvx/6403
# Sqrt of (x^2-a^2) in cvx

(ya) #1

Hi, Sir. I want to write D - sqrt(x^2 - a^2) in CVX. Could you please help me write it in CVX? Thanks.

(Erling D.Andersen) #2

How does that appear in the model? In the objective? In a constraint? And what is constant and what is variable?

(ya) #3

Thanks for the reply. It appears in the objective function; x is the optimization variable. In the domain of my interest its 2nd derivative is positive. My problem is just to make it compatible with CVX, and 'a' is a constant.

(Erling D.Andersen) #4

Something like:

$$\begin{array}{ll} \mbox{min} & D - t \\ \mbox{s.t.} & (x+a)(x-a) \geq t^2 \\ & x - a \geq 0 \end{array}$$

should do it. The two constraints can be expressed using a rotated quadratic cone. Note we are actually saying $|t| \leq (x+a)^{0.5} (x-a)^{0.5}$, so $t$ has to be less than the geometric average of $(x+a)$ and $(x-a)$.

(ya) #5

Thanks for your prompt reply. I am just curious: if this function appears in a constraint, how would we write it?

(Erling D.Andersen) #6

Look at the rotated_lorentz entry at http://cvxr.com/cvx/doc/funcref.html#sets .
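The thread is about CVX (MATLAB); purely for illustration, here is an analogous sketch in Python with CVXPY, where the geometric-mean atom encodes the same rotated-cone idea (D, a, and the upper bound on x are placeholder values of mine so the toy problem is bounded):

```python
import cvxpy as cp

D, a = 10.0, 1.0                 # placeholder constants
x = cp.Variable()
t = cp.Variable(nonneg=True)

constraints = [
    # t <= sqrt((x+a)(x-a)); geo_mean also forces x+a >= 0 and x-a >= 0
    t <= cp.geo_mean(cp.hstack([x + a, x - a])),
    x <= 5.0,                    # toy bound so "min D - t" stays bounded
]

prob = cp.Problem(cp.Minimize(D - t), constraints)
prob.solve()
print(x.value, t.value)          # expect x ≈ 5, t ≈ sqrt(24) ≈ 4.90
```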
2019-07-23 06:05:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998940646648407, "perplexity": 3834.5005385080676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528869.90/warc/CC-MAIN-20190723043719-20190723065719-00239.warc.gz"}
https://physics.stackexchange.com/questions/76396/can-somebody-help-me-find-the-tension-in-this-string
# Can somebody help me find the tension in this string? [closed]

My friends and I are debating over the correct answer for this particular physics problem. There's a pendulum with a $2\,kg$ mass and a $5\,m$ string attached to a ceiling at $53^{\circ}$ to the vertical. It's released. At point B, $37^{\circ}$ from the vertical, we are asked to find the tension in the string.

Both of us agree that $v = 2\sqrt{5}$.

My equation was $T\cos 37^{\circ} = 20 + F_c\cos{37^{\circ}}$, which gives me $T = 33\,N$. I got the upward component of $T$ and set it equal to the weight of the block plus the downward component of the centripetal force.

My friend's equation was $T = 20\cos{37^{\circ}} + F_c$, which makes $T = 24\,N$. She got the component of the weight at $37^{\circ}$.

Conceptually, I think we're both correct, but we get different answers. Could somebody answer who's right and why?

Note: we assume $\cos 37^{\circ} = 0.8$, $\cos 53^{\circ} = 0.6$, $g = 10\,m/s^2$.

Firstly, we need to identify the fact that the motion of the pendulum about the point of suspension is accelerated circular motion. Therefore, there have to be two forces acting on the pendulum's bob:

1) a centripetal force radially inwards, responsible for the centripetal acceleration
2) a tangential force necessary for the tangential acceleration which increases the speed.

In your friend's analysis, the net force in the radial direction is equated to the centripetal force, which is correct, since the radial acceleration caused by the centripetal force is the only acceleration in that direction, and it is accounted for in the equation of the centripetal force. Her calculation gives $24\,N$.

But in your analysis, you equated the net force in the vertical direction to the vertical component of the centripetal force, which is not correct, since the centripetal force is not the entire story. In the vertical direction there is also a component of the tangential acceleration which is not accounted for in your analysis. Therefore, you have to include an additional term for the vertical component of the tangential acceleration in the downward direction. Your equation will now be $$T\cos 37=F_c\cos 37 + 20-2a_t\cos 53$$ where $a_t=g\sin 37$, since the only force which can bring about tangential acceleration is the tangential component of the weight. This also gives $24\,N$.

In your friend's analysis, the approach is correct. Amend your own equation as above, and both methods yield the same result, $24\,N$.

The weight at the moment of interest is travelling in a 5-metre-radius circle at $2\sqrt5 \, m/s$. So the centripetal force needed is $F_C$: $$F_C=\frac{2\times 20}{5}=8$$ The force of gravity on the object is $20 \, newtons$. The radial component of this force is $20 \times \cos(37^\circ)=16 \, Newtons$. The tension must balance this component of gravity and supply an extra 8 Newtons: $T = 24 \, Newtons$.
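A quick numeric cross-check of these figures (a snippet of my own, using the exact 3-4-5 values the problem intends):

```python
import math

m, g, L = 2.0, 10.0, 5.0
cos53, cos37, sin37 = 0.6, 0.8, 0.6

v2 = 2 * g * L * (cos37 - cos53)   # energy conservation: v^2 = 20
Fc = m * v2 / L                     # centripetal force: 8 N
T = m * g * cos37 + Fc              # radial Newton's law at 37 degrees
print(math.sqrt(v2), Fc, T)         # 4.472... (= 2*sqrt(5)), 8.0, 24.0
```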
2019-10-15 10:59:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6568463444709778, "perplexity": 238.88182650653104}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00504.warc.gz"}
https://swimswam.com/fina-rules-lochtes-new-underwater-technique-illegal-in-im-races/
# FINA rules Lochte’s new underwater technique illegal in IM races

##### by Jared Anderson, August 29th, 2015

FINA has clarified its rules after a bit of a legal skirmish at the 2015 World Championships, officially announcing that Ryan Lochte‘s new twist on underwater kicking will be illegal in IM races moving forward.

The issue is with Lochte’s recent decision to do his underwater dolphin kicking on his back, even in freestyle races. It’s a technique Lochte and coach David Marsh came up with and debuted this summer. Lochte is generally faster kicking on his back, and the decision helped him win his fourth consecutive world title in the 200 IM in Kazan.

Though the technique is perfectly legal in freestyle (and of course, backstroke), there was concern that it might not fly under FINA’s IM rules. The difference is that “freestyle” permits athletes to do basically any style of stroke, including any of the three other competitive strokes. But in an individual medley race specifically, the “freestyle” leg doesn’t allow any of the three other competitive strokes to be used – that makes the race a full, four-stroke medley by preventing athletes from repeating one of the other strokes.

During the World Championships, there were rumblings that officials would define Lochte’s underwater kicks on his back as being “backstroke” under the legal definition of the rules, and that he would be disqualified for repeating a stroke in his 200 IM. Lochte took a risk and swam his race with the new style anyway, and wasn’t disqualified. In a post-race interview, he said he hadn’t heard of a rule prohibiting underwater kicking on one’s back, but also predicted that the rule would be changed in the future.

Turns out, Lochte was right. Germany’s SwimSportNews reports today that FINA has clarified its IM rule, noting that Lochte’s technique will be considered illegal and disqualified in any future IM races. FINA’s rationale is that “backstroke” is defined by a swimmer travelling lying on his or her back. So in underwater kicking on his back, Lochte is technically swimming backstroke for the first 15 meters of his freestyle lengths. That’s still perfectly legal in freestyle races, but will no longer be allowed in IM races.

You can watch video of Lochte’s race here.

Comments:

ok (5 years ago): ah fina, preventing evolution of the sport. now what if this caught on at the world champs and everyone in kazan did it? I wonder what they would have done then.

CT Swim Fan (5 years ago): That was fast. Now they should start trying to clean up the cheating in breaststroke.

Bill S (5 years ago): When Kitajima added a dolphin kick they changed the rule to allow innovation. When van der Burgh added multiple dolphin kicks they turned their head. Now with this they are killing it before it takes effect. How can we have great coaches again like Counsilman if the sport kills innovation?

Mark (5 years ago): Using your logic, why have a yardage limit on underwater??

David (4 years ago): They don’t in breaststroke.

Kathy (5 years ago): Or what if Ryan were one of their known drug cheats that they wanted $wept under the rug from one of the federations they want to appea$e?

David Berkoff (5 years ago): Surprise surprise. FINA makes yet another bad rule on a technical matter and sucks all creativity out of the sport, all the while showing absolute incompetence regarding the integrity of the sport.

Swammerjammer (5 years ago): Kudos to Ryan for pushing the envelope.
2020-10-27 04:36:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.173910990357399, "perplexity": 8091.125964811052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00023.warc.gz"}
https://www.cut-the-knot.org/triangle/InequalityCourtesyCeva.shtml
An Inequality in Acute Triangle, Courtesy of Ceva's Theorem

Lemma

Assume that in an acute $\Delta ABC,\;$ $AA_0,\;$ $BB_0,\;$ $CC_0\;$ are concurrent cevians. Then $8\cdot BA_0\cdot CB_0\cdot AC_0\le abc.$

For convenience, denote $BA_0=x_1,\;$ $A_0C=y_1,\;$ $CB_0=x_2,\;$ $B_0A=y_2,\;$ $AC_0=x_3,\;$ $C_0B=y_3.\;$ Then, by Ceva's theorem, $\displaystyle\frac{x_1}{y_1}\cdot \frac{x_2}{y_2}\cdot \frac{x_3}{y_3} = 1.$ We have to prove that $8x_1x_2x_3\le (x_1+y_1)(x_2+y_2)(x_3+y_3).\;$ In other words, we need to show that

$\displaystyle \frac{x_1+y_1}{x_1}\cdot\frac{x_2+y_2}{x_2}\cdot\frac{x_3+y_3}{x_3}\ge 8,$

or,

$\displaystyle \left(1+\frac{y_1}{x_1}\right)\cdot\left(1+\frac{y_2}{x_2}\right)\cdot\left(1+\frac{y_3}{x_3}\right)\ge 8.$

Multiplying out, this reduces to

$\displaystyle 1+\frac{y_1}{x_1}+\frac{y_2}{x_2}+\frac{y_3}{x_3}+\frac{y_1y_2}{x_1x_2}+\frac{y_2y_3}{x_2x_3}+\frac{y_3y_1}{x_3x_1}+\frac{y_1y_2y_3}{x_1x_2x_3}\ge 8.$

Making multiple uses of Ceva's theorem (which gives $y_1y_2y_3=x_1x_2x_3$, and hence, e.g., $\frac{y_1y_2}{x_1x_2}=\frac{x_3}{y_3}$), this is equivalent to

$\displaystyle 1+\frac{y_1}{x_1}+\frac{y_2}{x_2}+\frac{y_3}{x_3}+\frac{x_3}{y_3}+\frac{x_1}{y_1}+\frac{x_2}{y_2}+1\ge 8$

and, in turn, to

$\displaystyle \left(\frac{y_1}{x_1}+\frac{x_1}{y_1}\right)+\left(\frac{y_2}{x_2}+\frac{x_2}{y_2}\right)+\left(\frac{y_3}{x_3}+\frac{x_3}{y_3}\right)\ge 6,$

which is true by the AM-GM inequality applied thrice.

Proof

Since the triples of angle bisectors, altitudes, and symmedians are all concurrent cevians (with feet denoted $A',B',C',$ $A'',B'',C'',$ and $A''',B''',C''',$ respectively), we may apply the lemma to each triple:

$8\cdot AB'\cdot BC'\cdot CA'\le abc\\ 8\cdot AB''\cdot BC''\cdot CA''\le abc\\ 8\cdot AB'''\cdot BC'''\cdot CA'''\le abc.$

Adding up gives the desired result, $AB'\cdot BC'\cdot CA' + AB''\cdot BC''\cdot CA'' + AB'''\cdot BC'''\cdot CA'''\le \frac{3abc}{8}.$

Acknowledgment

The inequality with the solution has been posted by Dan Sitaru at the CutTheKnotMath facebook page.
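The lemma is purely algebraic once Ceva's relation is imposed, so it is easy to sanity-check numerically. Below is a quick Monte Carlo check in Scala (my own sketch, not from the original page): random positive segment lengths are drawn, the last one is chosen to enforce Ceva's relation, and the inequality is tested.

object CevaLemmaCheck extends App {
  val rng = new scala.util.Random(42)
  // Draw x1,x2,x3,y1,y2 > 0 at random, then force y3 so that
  // (x1/y1)*(x2/y2)*(x3/y3) = 1, as Ceva's theorem requires.
  val allHold = (1 to 100000).forall { _ =>
    val Seq(x1, x2, x3, y1, y2) = Seq.fill(5)(0.01 + rng.nextDouble())
    val y3 = x1 * x2 * x3 / (y1 * y2)
    // the lemma: 8*x1*x2*x3 <= (x1+y1)(x2+y2)(x3+y3), up to rounding
    8 * x1 * x2 * x3 <= (x1 + y1) * (x2 + y2) * (x3 + y3) + 1e-12
  }
  println(s"inequality held in every trial: $allHold")
}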
2021-11-30 18:09:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7704417705535889, "perplexity": 1059.4809379550395}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359065.88/warc/CC-MAIN-20211130171559-20211130201559-00512.warc.gz"}
https://darrenjw.wordpress.com/tag/code/
Unbiased MCMC with couplings

Yesterday there was an RSS Read Paper meeting for the paper Unbiased Markov chain Monte Carlo with couplings by Pierre Jacob, John O’Leary and Yves F. Atchadé. The paper addresses the bias in MCMC estimates due to lack of convergence to equilibrium (the “burn-in” problem), and shows how it is possible to modify MCMC algorithms in order to construct estimates which exactly remove this bias. The requirement is to couple a pair of MCMC chains so that they will at some point meet exactly and thereafter remain coupled. This turns out to be easier to do than one might naively expect. There are many reasons why we might want to remove bias from MCMC estimates, but the primary motivation in the paper was the application to parallel MCMC computation. The idea here is that many pairs of chains can be run independently on any available processors, and the unbiased estimates from the different pairs can be safely averaged to get an (improved) unbiased estimate based on all of the chains.

As a discussant of the paper, I’ve spent a bit of time thinking about this idea, and have created a small repository of materials relating to the paper which may be useful for others interested in understanding the method and how to use it in practice. The repo includes a page of links to related papers, blog posts, software and other resources relating to unbiased MCMC that I’ve noticed on-line.

Earlier in the year I gave an internal seminar at Newcastle giving a tutorial introduction to the main ideas from the paper, including runnable R code implementations of the examples. The talk was prepared as an executable R Markdown document. The R Markdown source code is available in the repo, but for the convenience of casual browsers I’ve also included a pre-built set of PDF slides. Code examples include code for maximal coupling of two (univariate) distributions, coupling Metropolis-Hastings chains, and coupling a Gibbs sampler for an AR(1) process.

I haven’t yet finalised my written discussion contribution, but the slides I presented at the Read Paper meeting are also available. Again, there is source code and pre-built PDF slides. My discussion focused on seeing how well the technique works for Gibbs samplers applied to high-dimensional latent process models (an AR(1) process and a Gaussian Markov random field), and reflecting on the extent to which the technique really solves the burn-in/parallel MCMC problem.

The repo also contains a few stand-alone code examples. There are some simple tutorial examples in R (largely derived from my tutorial introduction), including implementation of (univariate) independent and reflection maximal couplings, and a coupled AR(1) process example. The more substantial example concerns a coupled Gibbs sampler for a GMRF. This example is written in the Scala programming language. There are a couple of notable features of this implementation. First, the code illustrates monadic coupling of probability distributions, based on the Rand type in the Breeze scientific library. This provides an elegant way to max couple arbitrary (continuous) random variables, and to create coupled Metropolis(-Hastings) kernels.
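To recall why a meeting time buys unbiasedness, the estimator at the heart of the paper (stated here from memory, so check the paper for the precise conditions) is, for coupled chains $(X_t)$ and $(Y_t)$ with $X$ run one step ahead and meeting time $\tau=\inf\{t\ge 1: X_t=Y_{t-1}\}$,

$$H_k(X,Y) = h(X_k) + \sum_{l=k+1}^{\tau-1}\left(h(X_l)-h(Y_{l-1})\right),$$

which is unbiased for $E_\pi[h(X)]$ for any fixed $k$: the telescoping correction terms exactly cancel the burn-in bias of $h(X_k)$, and they vanish once the chains have met. In code, the key ingredient is a maximal coupling of the relevant distributions.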
For example, a coupling of two distributions can be constructed as

def couple[T](p: ContinuousDistr[T], q: ContinuousDistr[T]): Rand[(T, T)] = {
  def ys: Rand[T] = for {
    y <- q
    w <- Uniform(0, 1)
    ay <- if (math.log(w) > p.logPdf(y) - q.logPdf(y)) Rand.always(y) else ys
  } yield ay
  val pair = for {
    x <- p
    w <- Uniform(0, 1)
  } yield (math.log(w) <= q.logPdf(x) - p.logPdf(x), x)
  pair flatMap {
    case (b, x) => if (b) Rand.always((x, x)) else (ys map (y => (x, y)))
  }
}

and then draws can be sampled from the resulting Rand[(T, T)] polymorphic type as required. Incidentally, this also illustrates how to construct an independent maximal coupling without evaluating any raw likelihoods. The other notable feature of the code is the use of a parallel comonadic image type for parallel Gibbs sampling of the GMRF, producing a (lazy) Stream of coupled MCMC samples.

MCMC as a Stream

Introduction

This weekend I’ve been preparing some material for my upcoming Scala for statistical computing short course. As part of the course, I thought it would be useful to walk through how to think about and structure MCMC codes, and in particular, how to think about MCMC algorithms as infinite streams of state. This material is reasonably stand-alone, so it seems suitable for a blog post. Complete runnable code for the examples in this post is available from my blog repo.

A simple MH sampler

For this post I will just consider a trivial toy Metropolis algorithm using a Uniform random walk proposal to target a standard normal distribution. I’ve considered this problem before on my blog, so if you aren’t very familiar with Metropolis-Hastings algorithms, you might want to quickly review my post on Metropolis-Hastings MCMC algorithms in R before continuing. At the end of that post, I gave the following R code for the Metropolis sampler:

metrop3 <- function(n=1000, eps=0.5) {
  vec = vector("numeric", n)
  x = 0
  oldll = dnorm(x, log=TRUE)
  vec[1] = x
  for (i in 2:n) {
    can = x + runif(1, -eps, eps)
    loglik = dnorm(can, log=TRUE)
    loga = loglik - oldll
    if (log(runif(1)) < loga) {
      x = can
      oldll = loglik
    }
    vec[i] = x
  }
  vec
}

I will begin this post with a fairly direct translation of this algorithm into Scala:

def metrop1(n: Int = 1000, eps: Double = 0.5): DenseVector[Double] = {
  val vec = DenseVector.fill(n)(0.0)
  var x = 0.0
  var oldll = Gaussian(0.0, 1.0).logPdf(x)
  vec(0) = x
  (1 until n).foreach { i =>
    val can = x + Uniform(-eps, eps).draw
    val loglik = Gaussian(0.0, 1.0).logPdf(can)
    val loga = loglik - oldll
    if (math.log(Uniform(0.0, 1.0).draw) < loga) {
      x = can
      oldll = loglik
    }
    vec(i) = x
  }
  vec
}

This code works, and is reasonably fast and efficient, but there are several issues with it from a functional programmer’s perspective. One issue is that we have committed to storing all MCMC output in RAM in a DenseVector. This probably isn’t an issue here, but for some big problems we might prefer not to store the full set of states, but to just print the states to (say) the console, for possible re-direction to a file. It is easy enough to modify the code to do this:

def metrop2(n: Int = 1000, eps: Double = 0.5): Unit = {
  var x = 0.0
  var oldll = Gaussian(0.0, 1.0).logPdf(x)
  (1 to n).foreach { i =>
    val can = x + Uniform(-eps, eps).draw
    val loglik = Gaussian(0.0, 1.0).logPdf(can)
    val loga = loglik - oldll
    if (math.log(Uniform(0.0, 1.0).draw) < loga) {
      x = can
      oldll = loglik
    }
    println(x)
  }
}

But now we have two versions of the algorithm: one for storing results locally, and one for streaming results to the console.
This is clearly unsatisfactory, but we shall return to this issue shortly. Another issue that will jump out at functional programmers is the reliance on mutable variables for storing the state and old likelihood. Let’s fix that now by re-writing the algorithm as a tail-recursion.

@tailrec
def metrop3(n: Int = 1000, eps: Double = 0.5, x: Double = 0.0, oldll: Double = Double.MinValue): Unit = {
  if (n > 0) {
    println(x)
    val can = x + Uniform(-eps, eps).draw
    val loglik = Gaussian(0.0, 1.0).logPdf(can)
    val loga = loglik - oldll
    if (math.log(Uniform(0.0, 1.0).draw) < loga)
      metrop3(n - 1, eps, can, loglik)
    else
      metrop3(n - 1, eps, x, oldll)
  }
}

This has eliminated the vars, and is just as fast and efficient as the previous version of the code. Note that the @tailrec annotation is optional – it just signals to the compiler that we want it to throw an error if for some reason it cannot eliminate the tail call. However, this is for the print-to-console version of the code. What if we actually want to keep the iterations in RAM for subsequent analysis? We can keep the values in an accumulator, as follows.

@tailrec
def metrop4(n: Int = 1000, eps: Double = 0.5, x: Double = 0.0, oldll: Double = Double.MinValue, acc: List[Double] = Nil): DenseVector[Double] = {
  if (n == 0) DenseVector(acc.reverse.toArray)
  else {
    val can = x + Uniform(-eps, eps).draw
    val loglik = Gaussian(0.0, 1.0).logPdf(can)
    val loga = loglik - oldll
    if (math.log(Uniform(0.0, 1.0).draw) < loga)
      metrop4(n - 1, eps, can, loglik, can :: acc)
    else
      metrop4(n - 1, eps, x, oldll, x :: acc)
  }
}

Factoring out the updating logic

This is all fine, but we haven’t yet addressed the issue of having different versions of the code depending on what we want to do with the output. The problem is that we have tied up the logic of advancing the Markov chain with what to do with the output. What we need to do is separate out the code for advancing the state. We can do this by defining a new function.

def newState(x: Double, oldll: Double, eps: Double): (Double, Double) = {
  val can = x + Uniform(-eps, eps).draw
  val loglik = Gaussian(0.0, 1.0).logPdf(can)
  val loga = loglik - oldll
  if (math.log(Uniform(0.0, 1.0).draw) < loga) (can, loglik) else (x, oldll)
}

This function takes as input a current state and associated log likelihood and returns a new state and log likelihood following the execution of one step of a MH algorithm. This separates the concern of state updating from the rest of the code. So now if we want to write code that prints the state, we can write it as

@tailrec
def metrop5(n: Int = 1000, eps: Double = 0.5, x: Double = 0.0, oldll: Double = Double.MinValue): Unit = {
  if (n > 0) {
    println(x)
    val ns = newState(x, oldll, eps)
    metrop5(n - 1, eps, ns._1, ns._2)
  }
}

and if we want to accumulate the set of states visited, we can write that as

@tailrec
def metrop6(n: Int = 1000, eps: Double = 0.5, x: Double = 0.0, oldll: Double = Double.MinValue, acc: List[Double] = Nil): DenseVector[Double] = {
  if (n == 0) DenseVector(acc.reverse.toArray)
  else {
    val ns = newState(x, oldll, eps)
    metrop6(n - 1, eps, ns._1, ns._2, ns._1 :: acc)
  }
}

Both of these functions call newState to do the real work, and concentrate on what to do with the sequence of states. However, both of these functions repeat the logic of how to iterate over the sequence of states.

MCMC as a stream

Ideally we would like to abstract out the details of how to do state iteration from the code as well.
Most functional languages have some concept of a Stream, which represents a (potentially infinite) sequence of states. The Stream can embody the logic of how to perform state iteration, allowing us to abstract that away from our code, as well. To do this, we will restructure our code slightly so that it more clearly maps old state to new state.

def nextState(eps: Double)(state: (Double, Double)): (Double, Double) = {
  val x = state._1
  val oldll = state._2
  val can = x + Uniform(-eps, eps).draw
  val loglik = Gaussian(0.0, 1.0).logPdf(can)
  val loga = loglik - oldll
  if (math.log(Uniform(0.0, 1.0).draw) < loga) (can, loglik) else (x, oldll)
}

The "real" state of the chain is just x, but if we want to avoid recalculation of the old likelihood, then we need to make this part of the chain’s state. We can use this nextState function in order to construct a Stream.

def metrop7(eps: Double = 0.5, x: Double = 0.0, oldll: Double = Double.MinValue): Stream[Double] =
  Stream.iterate((x, oldll))(nextState(eps)) map (_._1)

The result of calling this is an infinite stream of states. Obviously it isn’t computed – that would require infinite computation – but it captures the logic of iteration and computation in a Stream, which can be thought of as a lazy List. We can get values out by converting the Stream to a regular collection, being careful to truncate the Stream to one of finite length beforehand! E.g.,

metrop7().drop(1000).take(10000).toArray

will do a burn-in of 1,000 iterations followed by a main monitoring run of length 10,000, capturing the results in an Array. Note that metrop7().drop(1000).take(10000) is a Stream, and so nothing is actually computed until the toArray is encountered. Conversely, if printing to console is required, just replace the .toArray with .foreach(println).

The above stream-based approach to MCMC iteration is clean and elegant, and deals nicely with issues like burn-in and thinning (which can be handled similarly). This is how I typically write MCMC codes these days.
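For instance, thinning composes in exactly the same way. One possible idiom (my own sketch, using only standard library methods on the Stream) is:

// burn 1000 iterations, then thin by keeping every 10th state, retaining 1000 draws
metrop7().
  drop(1000).    // discard burn-in lazily
  grouped(10).   // Iterator over blocks of 10 consecutive states
  map(_.head).   // keep the first state of each block
  take(1000).
  toArray

Again, nothing is computed until the toArray forces the iterator.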
However, functional programming purists would still have issues with this approach, as it isn’t quite pure functional. The problem is that the code isn’t pure – it has a side-effect, which is to mutate the state of the under-pinning pseudo-random number generator. If the code were pure, calling nextState with the same inputs would always give the same result. Clearly this isn’t the case here, as we have specifically designed the function to be stochastic, returning a randomly sampled value from the desired probability distribution. So nextState represents a function for randomly sampling from a conditional probability distribution.

A pure functional approach

Now, ultimately all code has side-effects, or there would be no point in running it! But in functional programming the desire is to make as much of the code as possible pure, and to push side-effects to the very edges of the code. So it’s fine to have side-effects in your main method, but not buried deep in your code. Here the side-effect is at the very heart of the code, which is why it is potentially an issue.

To keep things as simple as possible, at this point we will stop worrying about carrying forward the old likelihood, and hard-code a value of eps. Generalisation is straightforward. We can make our code pure by instead defining a function which represents the conditional probability distribution itself. For this we use a probability monad, which in Breeze is called Rand. We can couple together such functions using monadic binds (flatMap in Scala), expressed most neatly using for-comprehensions. So we can write our transition kernel as

def kernel(x: Double): Rand[Double] = for {
  innov <- Uniform(-0.5, 0.5)
  can = x + innov
  oldll = Gaussian(0.0, 1.0).logPdf(x)
  loglik = Gaussian(0.0, 1.0).logPdf(can)
  loga = loglik - oldll
  u <- Uniform(0.0, 1.0)
} yield if (math.log(u) < loga) can else x

This is now pure – the same input x will always return the same probability distribution – the conditional distribution of the next state given the current state. We can draw random samples from this distribution if we must, but it’s probably better to work as long as possible with pure functions. So next we need to encapsulate the iteration logic. Breeze has a MarkovChain object which can take kernels of this form and return a stochastic Process object representing the iteration logic, as follows.

MarkovChain(0.0)(kernel).
  steps.
  drop(1000).
  take(10000).
  foreach(println)

The steps method contains the logic of how to advance the state of the chain. But again note that no computation actually takes place until the foreach method is encountered – this is when the sampling occurs and the side-effects happen.

Metropolis-Hastings is a common use-case for Markov chains, so Breeze actually has a helper method built in that will construct a MH sampler directly from an initial state, a proposal kernel, and a (log) target.

MarkovChain.
  metropolisHastings(0.0, (x: Double) => Uniform(x - 0.5, x + 0.5))(
    x => Gaussian(0.0, 1.0).logPdf(x)).
  steps.
  drop(1000).
  take(10000).
  toArray

Note that if you are using the MH functionality in Breeze, it is important to make sure that you are using version 0.13 (or later), as I fixed a few issues with the MH code shortly prior to the 0.13 release.

Summary

Viewing MCMC algorithms as infinite streams of state is useful for writing elegant, generic, flexible code. Streams occur everywhere in programming, and so there are lots of libraries for working with them. In this post I used the simple Stream from the Scala standard library, but there are much more powerful and flexible stream libraries for Scala, including fs2 and Akka-streams. But whatever libraries you are using, the fundamental concepts are the same. The most straightforward approach to implementation is to define impure stochastic streams to consume. However, a pure functional approach is also possible, and the Breeze library defines some useful functions to facilitate this approach. I’m still a little bit ambivalent about whether the pure approach is worth the additional cognitive overhead, but it’s certainly very interesting and worth playing with and thinking about the pros and cons. Complete runnable code for the examples in this post is available from my blog repo.

Summary stats for ABC

Introduction

In the previous post I gave a very brief introduction to ABC, including a simple example for inferring the parameters of a Markov process given some time series observations. Towards the end of the post I observed that there were (at least!) two potential problems with scaling up the simple approach described, one relating to the dimension of the data and the other relating to the dimension of the parameter space. Before moving on to the (to me, more interesting) problem of the dimension of the parameter space, I will briefly discuss the data dimension problem in this post, and provide a couple of references for further reading.
Summary stats

Recall that the simple rejection sampling approach to ABC involves first sampling a candidate parameter $\theta^\star$ from the prior and then sampling a corresponding data set $x^\star$ from the model. This simulated data set is compared with the true data $x$ using some (pseudo-)norm, $\Vert\cdot\Vert$, and accepting $\theta^\star$ if the simulated data set is sufficiently close to the true data, $\Vert x^\star - x\Vert <\epsilon$. It should be clear that if we are using a proper norm then as $\epsilon$ tends to zero the distribution of the accepted values tends to the desired posterior distribution of the parameters given the data. However, smaller choices of $\epsilon$ will lead to higher rejection rates. This will be a particular problem in the context of high-dimensional $x$, where it is often unrealistic to expect a close match between all components of $x$ and the simulated data $x^\star$, even for a good choice of $\theta^\star$. In this case, it makes more sense to look for good agreement between particular aspects of $x$, such as the mean, or variance, or auto-correlation, depending on the exact problem and context.

If we can find a finite set of sufficient statistics, $s(x)$ for $\theta$, then it should be clear that replacing the acceptance criterion with $\Vert s(x^\star) - s(x)\Vert <\epsilon$ will also lead to a scheme tending to the true posterior as $\epsilon$ tends to zero (assuming a proper norm on the space of sufficient statistics), and will typically be better than the naive method, since the sufficient statistics will be of lower dimension and less “noisy” than the raw data, leading to higher acceptance rates with no loss of information. Unfortunately, for most problems of practical interest it is not possible to find low-dimensional sufficient statistics, and so people in practice use domain knowledge and heuristics to come up with a set of summary statistics, $s(x)$, which they hope will closely approximate sufficient statistics. There is still a question as to how these statistics should be weighted or transformed to give a particular norm. This can be done using theory or heuristics, and some relevant references for this problem are given at the end of the post.

Implementation in R

Let’s now look at the problem from the previous post. Here, instead of directly computing the Euclidean distance between the real and simulated data, we will look at the Euclidean distance between some (normalised) summary statistics. First we will load some packages and set some parameters.

require(smfsb)
require(parallel)
options(mc.cores=4)
data(LVdata)

N=1e7
bs=1e5
batches=N/bs
message(paste("N =",N," | bs =",bs," | batches =",batches))

Next we will define some summary stats for a univariate time series – the mean, the (log) variance, and the first two auto-correlations.

ssinit <- function(vec) {
  ac23=as.vector(acf(vec,lag.max=2,plot=FALSE)$acf)[2:3]
  c(mean(vec),log(var(vec)+1),ac23)
}

Once we have this, we can define some stats for a bivariate time series by combining the stats for the two component series, along with the cross-correlation between them.

ssi <- function(ts) {
  c(ssinit(ts[,1]),ssinit(ts[,2]),cor(ts[,1],ts[,2]))
}

This gives a set of summary stats, but these individual statistics are potentially on very different scales. They can be transformed and re-weighted in a variety of ways, usually on the basis of a pilot run which gives some information about the distribution of the summary stats.
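In symbols, the simple pilot-standardised squared distance used below is

$$d(x^\star,x) = \sum_j \left(\frac{s_j(x^\star)-s_j(x)}{\hat\sigma_j}\right)^2,$$

where $\hat\sigma_j$ is the standard deviation of the $j$-th summary statistic as estimated from the pilot run.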
Here we will do the simplest possible thing, which is to normalise the variance of the stats on the basis of a pilot run. This is not at all optimal – see the references at the end of the post for a description of better methods.

message("Batch 0: Pilot run batch")
prior=cbind(th1=exp(runif(bs,-6,2)),th2=exp(runif(bs,-6,2)),th3=exp(runif(bs,-6,2)))
rows=lapply(1:bs,function(i){prior[i,]})
samples=mclapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
sumstats=mclapply(samples,ssi)
sds=apply(sapply(sumstats,c),1,sd)
print(sds)

# now define a standardised distance
ss<-function(ts) {
  ssi(ts)/sds
}
ss0=ss(LVperfect)

distance <- function(ts) {
  diff=ss(ts)-ss0
  sum(diff*diff)
}

Now we have a normalised distance function defined, we can proceed exactly as before to obtain an ABC posterior via rejection sampling.

post=NULL
for (i in 1:batches) {
  message(paste("batch",i,"of",batches))
  prior=cbind(th1=exp(runif(bs,-6,2)),th2=exp(runif(bs,-6,2)),th3=exp(runif(bs,-6,2)))
  rows=lapply(1:bs,function(i){prior[i,]})
  samples=mclapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
  dist=mclapply(samples,distance)
  dist=sapply(dist,c)
  cutoff=quantile(dist,1000/N,na.rm=TRUE)
  post=rbind(post,prior[dist<cutoff,])
}
message(paste("Finished. Kept",dim(post)[1],"simulations"))

Having obtained the posterior, we can use the following code to plot the results.

th=c(th1 = 1, th2 = 0.005, th3 = 0.6)
op=par(mfrow=c(2,3))
for (i in 1:3) {
  hist(post[,i],30,col=5,main=paste("Posterior for theta[",i,"]",sep=""))
  abline(v=th[i],lwd=2,col=2)
}
for (i in 1:3) {
  hist(log(post[,i]),30,col=5,main=paste("Posterior for log(theta[",i,"])",sep=""))
  abline(v=log(th[i]),lwd=2,col=2)
}
par(op)

This gives the plot shown below. From this we can see that the ABC posterior obtained here is very similar to that obtained in the previous post using the full data. Here the dimension reduction is not that great – reducing from 32 data points to 9 summary statistics – and so the improvement in performance is not that noticeable. But in higher dimensional problems reducing the dimension of the data is practically essential.

Summary and References

As before, I recommend the wikipedia article on approximate Bayesian computation for further information and a comprehensive set of references for further reading; the references there include papers specifically on the choice and transformation of summary statistics. It is quite difficult to give much practical advice on how to construct good summary statistics, but how to transform a set of summary stats in a “good” way is a problem that is reasonably well understood, and covered in that literature. In this post I did something rather naive (normalising the variance), but much better approaches exist. I still haven’t addressed the issue of a high-dimensional parameter space – that will be the topic of a subsequent post.
The complete R script

require(smfsb)
require(parallel)
options(mc.cores=4)
data(LVdata)

N=1e6
bs=1e5
batches=N/bs
message(paste("N =",N," | bs =",bs," | batches =",batches))

ssinit <- function(vec) {
  ac23=as.vector(acf(vec,lag.max=2,plot=FALSE)$acf)[2:3]
  c(mean(vec),log(var(vec)+1),ac23)
}

ssi <- function(ts) {
  c(ssinit(ts[,1]),ssinit(ts[,2]),cor(ts[,1],ts[,2]))
}

message("Batch 0: Pilot run batch")
prior=cbind(th1=exp(runif(bs,-6,2)),th2=exp(runif(bs,-6,2)),th3=exp(runif(bs,-6,2)))
rows=lapply(1:bs,function(i){prior[i,]})
samples=mclapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
sumstats=mclapply(samples,ssi)
sds=apply(sapply(sumstats,c),1,sd)
print(sds)

# now define a standardised distance
ss<-function(ts) {
  ssi(ts)/sds
}
ss0=ss(LVperfect)

distance <- function(ts) {
  diff=ss(ts)-ss0
  sum(diff*diff)
}

post=NULL
for (i in 1:batches) {
  message(paste("batch",i,"of",batches))
  prior=cbind(th1=exp(runif(bs,-6,2)),th2=exp(runif(bs,-6,2)),th3=exp(runif(bs,-6,2)))
  rows=lapply(1:bs,function(i){prior[i,]})
  samples=mclapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
  dist=mclapply(samples,distance)
  dist=sapply(dist,c)
  cutoff=quantile(dist,1000/N,na.rm=TRUE)
  post=rbind(post,prior[dist<cutoff,])
}
message(paste("Finished. Kept",dim(post)[1],"simulations"))

# plot the results
th=c(th1 = 1, th2 = 0.005, th3 = 0.6)
op=par(mfrow=c(2,3))
for (i in 1:3) {
  hist(post[,i],30,col=5,main=paste("Posterior for theta[",i,"]",sep=""))
  abline(v=th[i],lwd=2,col=2)
}
for (i in 1:3) {
  hist(log(post[,i]),30,col=5,main=paste("Posterior for log(theta[",i,"])",sep=""))
  abline(v=log(th[i]),lwd=2,col=2)
}
par(op)

Introduction to Approximate Bayesian Computation (ABC)

Many of the posts in this blog have been concerned with using MCMC based methods for Bayesian inference. These methods are typically “exact” in the sense that they have the exact posterior distribution of interest as their target equilibrium distribution, but are obviously “approximate”, in that for any finite amount of computing time, we can only generate a finite sample of correlated realisations from a Markov chain that we hope is close to equilibrium.

Approximate Bayesian Computation (ABC) methods go a step further, and generate samples from a distribution which is not the true posterior distribution of interest, but a distribution which is hoped to be close to the real posterior distribution of interest. There are many variants on ABC, and I won’t get around to explaining all of them in this blog. The wikipedia page on ABC is a good starting point for further reading. In this post I’ll explain the most basic rejection sampling version of ABC, and in a subsequent post, I’ll explain a sequential refinement, often referred to as ABC-SMC. As usual, I’ll use R code to illustrate the ideas.

Basic idea

There is a close connection between “likelihood free” MCMC methods and those of approximate Bayesian computation (ABC). To keep things simple, consider the case of a perfectly observed system, so that there is no latent variable layer. Then there are model parameters $\theta$ described by a prior $\pi(\theta)$, and a forwards-simulation model for the data $x$, defined by $\pi(x|\theta)$. It is clear that a simple algorithm for simulating from the desired posterior $\pi(\theta|x)$ can be obtained as follows. First simulate from the joint distribution $\pi(\theta,x)$ by simulating $\theta^\star\sim\pi(\theta)$ and then $x^\star\sim \pi(x|\theta^\star)$. This gives a sample $(\theta^\star,x^\star)$ from the joint distribution.
A simple rejection algorithm which rejects the proposed pair unless $x^\star$ matches the true data $x$ clearly gives a sample from the required posterior distribution.

Exact rejection sampling

1. Sample $\theta^\star \sim \pi(\theta^\star)$
2. Sample $x^\star\sim \pi(x^\star|\theta^\star)$
3. If $x^\star=x$, keep $\theta^\star$ as a sample from $\pi(\theta|x)$, otherwise reject.

This algorithm is exact, and for discrete $x$ will have a non-zero acceptance rate. However, in most interesting problems the rejection rate will be intolerably high. In particular, the acceptance rate will typically be zero for continuous valued $x$.

ABC rejection sampling

The ABC “approximation” is to accept values provided that $x^\star$ is “sufficiently close” to $x$. In the first instance, we can formulate this as follows.

1. Sample $\theta^\star \sim \pi(\theta^\star)$
2. Sample $x^\star\sim \pi(x^\star|\theta^\star)$
3. If $\Vert x^\star-x\Vert< \epsilon$, keep $\theta^\star$ as a sample from $\pi(\theta|x)$, otherwise reject.

Euclidean distance is usually chosen as the norm, though any norm can be used. This procedure is “honest”, in the sense that it produces exact realisations from $\theta^\star\big|\Vert x^\star-x\Vert < \epsilon.$ For suitably small choice of $\epsilon$, this will closely approximate the true posterior. However, smaller choices of $\epsilon$ will lead to higher rejection rates. This will be a particular problem in the context of high-dimensional $x$, where it is often unrealistic to expect a close match between all components of $x$ and the simulated data $x^\star$, even for a good choice of $\theta^\star$. In this case, it makes more sense to look for good agreement between particular aspects of $x$, such as the mean, or variance, or auto-correlation, depending on the exact problem and context. In the simplest case, this is done by forming a (vector of) summary statistic(s), $s(x^\star)$ (ideally a sufficient statistic), and accepting provided that $\Vert s(x^\star)-s(x)\Vert<\epsilon$ for some suitable choice of metric and $\epsilon$. We will return to this issue in a subsequent post.

Inference for an intractable Markov process

I’ll illustrate ABC in the context of parameter inference for a Markov process with an intractable transition kernel: the discrete stochastic Lotka-Volterra model. A function for simulating exact realisations from the intractable kernel is included in the smfsb CRAN package discussed in a previous post. Using pMCMC to solve the parameter inference problem is discussed in another post. It may be helpful to skim through those posts quickly to become familiar with this problem before proceeding.

So, for a given proposed set of parameters, realisations from the process can be sampled using the functions simTs and stepLV (from the smfsb package). We will use the sample data set LVperfect (from the LVdata dataset) as our “true”, or “target” data, and try to find parameters for the process which are consistent with this data. A fairly minimal R script for this problem is given below.
require(smfsb)
data(LVdata)

N=1e5
message(paste("N =",N))
prior=cbind(th1=exp(runif(N,-6,2)),th2=exp(runif(N,-6,2)),th3=exp(runif(N,-6,2)))
rows=lapply(1:N,function(i){prior[i,]})
message("starting simulation")
samples=lapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
message("finished simulation")

distance<-function(ts) {
  diff=ts-LVperfect
  sum(diff*diff)
}

message("computing distances")
dist=lapply(samples,distance)
message("distances computed")
dist=sapply(dist,c)
cutoff=quantile(dist,1000/N)
post=prior[dist<cutoff,]

op=par(mfrow=c(2,3))
apply(post,2,hist,30)
apply(log(post),2,hist,30)
par(op)

This script should take 5-10 minutes to run on a decent laptop, and will result in histograms of the posterior marginals for the components of $\theta$ and $\log(\theta)$. Note that I have deliberately adopted a functional programming style, making use of the lapply function for the most computationally intensive steps. The reason for this will soon become apparent.

Note that rather than pre-specifying a cutoff $\epsilon$, I’ve instead picked a quantile of the distance distribution. This is common practice in scenarios where the distance is difficult to have good intuition about. In fact here I’ve gone a step further and chosen a quantile to give a final sample of size 1000. Obviously then in this case I could have just selected out the top 1000 directly, but I wanted to illustrate the quantile-based approach.

One problem with the above script is that all proposed samples are stored in memory at once. This is problematic for problems involving large numbers of samples. However, it is convenient to do simulations in large batches, both for computation of quantiles, and also for efficient parallelisation. The script below illustrates how to implement a batch parallelisation strategy for this problem. Samples are generated in batches of size 10^4, and only the best fitting samples are stored before the next batch is processed. This strategy can be used to get a good sized sample based on a more stringent acceptance criterion at the cost of additional simulation time. Note that the parallelisation code will only work with recent versions of R, and works by replacing calls to lapply with the parallel version, mclapply. You should notice an appreciable speed-up on a multicore machine.

require(smfsb)
require(parallel)
options(mc.cores=4)
data(LVdata)

N=1e5
bs=1e4
batches=N/bs
message(paste("N =",N," | bs =",bs," | batches =",batches))

distance<-function(ts) {
  diff=ts[,1]-LVprey
  sum(diff*diff)
}

post=NULL
for (i in 1:batches) {
  message(paste("batch",i,"of",batches))
  prior=cbind(th1=exp(runif(bs,-6,2)),th2=exp(runif(bs,-6,2)),th3=exp(runif(bs,-6,2)))
  rows=lapply(1:bs,function(i){prior[i,]})
  samples=mclapply(rows,function(th){simTs(c(50,100),0,30,2,stepLVc,th)})
  dist=mclapply(samples,distance)
  dist=sapply(dist,c)
  cutoff=quantile(dist,1000/N)
  post=rbind(post,prior[dist<cutoff,])
}
message(paste("Finished. Kept",dim(post)[1],"simulations"))

op=par(mfrow=c(2,3))
apply(post,2,hist,30)
apply(log(post),2,hist,30)
par(op)

Note that there is an additional approximation here, since the top 100 samples from each of 10 batches of simulations won’t correspond exactly to the top 1000 samples overall, but given all of the other approximations going on in ABC, this one is likely to be the least of your worries. Now, if you compare the approximate posteriors obtained here with the “true” posteriors obtained in an earlier post using pMCMC, you will see that these posteriors are really quite poor.
However, this isn’t a very fair comparison, since we’ve only done 10^5 simulations. Jacking N up to 10^7 gives the ABC posterior below. This is a bit better, but really not great. There are two basic problems with the simplistic ABC strategy adopted here, one related to the dimensionality of the data and the other the dimensionality of the parameter space. The most basic problem that we have here is the dimensionality of the data. We have 16 (bivariate) observations, so we want our stochastic simulation to shoot at a point in a 16- or 32-dimensional space. That’s a tough call. The standard way to address this problem is to reduce the dimension of the data by introducing a few carefully chosen summary statistics and then just attempting to hit those. I’ll illustrate this in a subsequent post. The other problem is that often the prior and posterior over the parameters are quite different, and this problem too is exacerbated as the dimension of the parameter space increases. The standard way to deal with this is to sequentially adapt from the prior through a sequence of ABC posteriors. I’ll examine this in a future post as well, once I’ve also posted an introduction to the use of sequential Monte Carlo (SMC) samplers for static problems. For further reading, I suggest browsing the reference list of the Wikipedia page for ABC. Also look through the list of software on that page. In particular, note that there is a CRAN package, abc, providing R support for ABC. There is a vignette for this package which should be sufficient to get started. Java math libraries and Monte Carlo simulation codes Java libraries for (non-uniform) random number simulation Anyone writing serious Monte Carlo (and MCMC) codes relies on having a very good and fast (uniform) random number generator and associated functions for generation of non-uniform random quantities, such as Gaussian, Poisson, Gamma, etc. In a previous post I showed how to write a simple Gibbs sampler in four different languages. In C (and C++) random number generation is easy for most scientists, as the (excellent) GNU Scientific Library (GSL) provides exactly what most people need. But it wasn’t always that way… I remember the days before the GSL, when it was necessary to hunt around on the net for bits of C code to implement different algorithms. Worse, it was often necessary to hunt around for a bit of free FORTRAN code, and compile that with an F77 compiler and figure out how to call it from C. Even in the early Alpha days of the GSL, coverage was patchy, and the API changed often. Bad old days… But those days are long gone, and C programmers no longer have to worry about the problem of random variate generation – they can safely concentrate on developing their interesting new algorithm, and leave the rest to the GSL. Unfortunately for Java programmers, there isn’t yet anything quite comparable to the GSL in Java world. I pretty much ignored Java until Java 5. Before then, the language was too limited, and the compilers and JVMs were too primitive to really take seriously for numerical work. But since the launch of Java 5 I’ve been starting to pay more interest. The language is now a perfectly reasonable O-O language, and the compilers and JVMs are pretty good. On a lot of benchmarks, Java is really quite comparable to C/C++, and Java is nicer to code, and has a lot of impressive associated technology. 
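As a cross-language footnote for readers following the Scala strand of this blog, the rejection sampler also has a natural generic expression there. The sketch below is my own illustration (the function and parameter names are hypothetical, and it is not part of the smfsb package):

object RejectionABC {
  // Generic rejection ABC: keep prior draws whose simulated data set
  // lies within eps of the observed data, as measured by distance.
  def abcReject[P, D](n: Int, eps: Double)(
      rprior: () => P,        // sample a parameter from the prior
      rmodel: P => D,         // forwards-simulate a data set
      distance: D => Double   // distance from the observed data
  ): IndexedSeq[P] =
    (1 to n).
      map(_ => rprior()).
      map(p => (p, distance(rmodel(p)))).
      collect { case (p, d) if d < eps => p }
}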
So if there was a math library comparable to the GSL, I’d be quite tempted to jump ship to the Java world and start writing all of my Monte Carlo codes in Java. But there isn’t. At least not yet. When I first started to take Java seriously, the only good math library with good support for non-uniform random number generation was COLT. COLT was, and still is, pretty good. The code is generally well-written, and fast, and the documentation for it is reasonable. However, the structure of the library is very idiosyncratic, the coverage is a bit patchy, and there doesn’t ever seem to have been a proper development community behind it. It seems very much to have been a one-man project, which has long since stagnated. Unsurprisingly then, COLT has been forked. There is now a Parallel COLT project. This project is continuing the development of COLT, adding new features that were missing from COLT, and, as the name suggests, adding concurrency support. Parallel COLT is also good, and is the main library I currently use for random number generation in Java. However, it has obviously inherited all of the idiosyncrasies that COLT had, and still doesn’t seem to have a large and active development community associated with it. There is no doubt that it is an incredibly useful software library, but it still doesn’t really compare to the GSL. I have watched the emergence of the Apache Commons Math project with great interest (not to be confused with Uncommons Math – another one-man project). I think this project probably has the greatest potential for providing the Java community with their own GSL equivalent. The Commons project has a lot of momentum, the Commons Math project seems to have an active development community, and the structure of the library is more intuitive than that of (Parallel) COLT. However, it is early days, and the library still has patchy coverage and is a bit rough around the edges. It reminds me a lot of the GSL back in its Alpha days. I’d not bothered to even download it until recently, as the random number generation component didn’t include the generation of gamma random quantities – an absolutely essential requirement for me. However, I noticed recently that the latest release (2.2) did include gamma generation, so I decided to download it and try it out. It works, but the generation of gamma random quantities is very slow (around 50 times slower than Parallel COLT). This isn’t a fundamental design flaw of the whole library – generating Gaussian random quantities is quite comparable with other libraries. It’s just that an inversion method has been used for gamma generation. All efficient gamma generators use a neat rejection scheme. 
In case anyone would like to investigate for themselves, here is a complete program for gamma generation designed to be linked against Parallel COLT:

import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;

class GammaPC {
    public static void main(String[] arg) {
        DoubleRandomEngine rngEngine = new DoubleMersenneTwister();
        Gamma rngG = new Gamma(1.0, 1.0, rngEngine);
        long N = 10000;
        double x = 0.0;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < 1000; j++) {
                x = rngG.nextDouble(3.0, 25.0);
            }
            System.out.println(x);
        }
    }
}

and here is a complete program designed to be linked against Commons Math:

import java.util.*;
import org.apache.commons.math.*;
import org.apache.commons.math.random.*;

class GammaACM {
    public static void main(String[] arg) throws MathException {
        RandomDataImpl rng = new RandomDataImpl();
        long N = 10000;
        double x = 0.0;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < 1000; j++) {
                x = rng.nextGamma(3.0, 1.0 / 25.0);
            }
            System.out.println(x);
        }
    }
}

The two codes do the same thing (note that they parameterise the gamma distribution differently).
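On the parameterisation point: judging by the arguments passed in the two programs, COLT appears to use the rate parameterisation and Commons Math the scale parameterisation, i.e.

$$f(x;\alpha,\lambda)=\frac{\lambda^\alpha x^{\alpha-1}e^{-\lambda x}}{\Gamma(\alpha)} \quad\text{versus}\quad f(x;k,\theta)=\frac{x^{k-1}e^{-x/\theta}}{\Gamma(k)\,\theta^k},\qquad \theta=1/\lambda,$$

which would explain why one call uses 25.0 and the other 1.0/25.0.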
Both programs work (they generate variates from the same, correct, distribution), and the Commons Math interface is slightly nicer, but the code is much slower to execute. I’m still optimistic that Commons Math will one day be Java’s GSL, but I’m not giving up on Parallel COLT (or C, for that matter!) just yet…

Getting started with parallel MCMC

Introduction to parallel MCMC for Bayesian inference, using C, MPI, the GSL and SPRNG

Introduction

This post is aimed at people who already know how to code up Markov Chain Monte Carlo (MCMC) algorithms in C, but are interested in how to parallelise their code to run on multi-core machines and HPC clusters. I discussed different languages for coding MCMC algorithms in a previous post. The advantage of C is that it is fast, standard and has excellent scientific library support. Ultimately, people pursuing this route will be interested in running their code on large clusters of fast servers, but for the purposes of development and testing, this really isn’t necessary. A single dual-core laptop, or similar, is absolutely fine. I develop and test on a dual-core laptop running Ubuntu linux, so that is what I will assume for the rest of this post.

There are several possible environments for parallel computing, but I will focus on the Message-Passing Interface (MPI). This is a well-established standard for parallel computing, there are many implementations, and it is by far the most commonly used high performance computing (HPC) framework today. Even if you are ultimately interested in writing code for novel architectures such as GPUs, learning the basics of parallel computation using MPI will be time well spent.

MPI

The whole point of MPI is that it is a standard, so code written for one implementation should run fine with any other. There are many implementations. On Linux platforms, the most popular are OpenMPI, LAM, and MPICH. There are various pros and cons associated with each implementation, and if installing on a powerful HPC cluster, serious consideration should be given to which will be the most beneficial. For basic development and testing, however, it really doesn’t matter which is used. I use OpenMPI on my Ubuntu laptop, which can be installed with a simple:

sudo apt-get install openmpi-bin libopenmpi-dev

That’s it! You’re ready to go… You can test your installation with a simple “Hello world” program such as:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

You should be able to compile this with

mpicc -o helloworld helloworld.c

and run (on 2 processors) with

mpirun -np 2 helloworld

GSL

If you are writing non-trivial MCMC codes, you are almost certainly going to need to use a sophisticated math library and associated random number generation (RNG) routines. I typically use the GSL. On Ubuntu, the GSL can be installed with:

sudo apt-get install gsl-bin libgsl0-dev

A simple program to generate some non-uniform random numbers is given below (note that stdio.h and stdlib.h are needed for printf and exit, in addition to the GSL headers).

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int main(void)
{
    int i;
    double z;
    gsl_rng *r;
    r = gsl_rng_alloc(gsl_rng_mt19937);
    gsl_rng_set(r, 0);
    for (i = 0; i < 10; i++) {
        z = gsl_ran_gaussian(r, 1.0);
        printf("z(%d) = %f\n", i, z);
    }
    exit(EXIT_SUCCESS);
}

This can be compiled with a command like:

gcc gsl_ran_demo.c -o gsl_ran_demo -lgsl -lgslcblas

and run with

./gsl_ran_demo

SPRNG

When writing parallel Monte Carlo codes, it is important to be able to use independent streams of random numbers on each processor. Although it is tempting to “fudge” things by using a random number generator with a different seed on each processor, this does not guarantee independence of the streams, and an unfortunate choice of seeds could potentially lead to bad behaviour of your algorithm. The solution to this problem is to use a parallel random number generator (PRNG), designed specifically to give independent streams on different processors. Unfortunately the GSL does not have native support for such parallel random number generators, so an external library should be used. SPRNG 2.0 is a popular choice. SPRNG is designed so that it can be used with MPI, but also that it does not have to be. This is an issue, as the standard binary packages distributed with Ubuntu (libsprng2, libsprng2-dev) are compiled without MPI support. If you are going to be using SPRNG with MPI, things are simpler with MPI support, so it makes sense to download sprng2.0b.tar.gz from the SPRNG web site, and build it from source, carefully following the instructions for including MPI support. If you are not familiar with building libraries from source, you may need help from someone who is.

Once you have compiled SPRNG with MPI support, you can test it with the following code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#define SIMPLE_SPRNG
#define USE_MPI
#include "sprng.h"

int main(int argc, char *argv[])
{
    double rn;
    int i, k;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &k);
    init_sprng(DEFAULT_RNG_TYPE, 0, SPRNG_DEFAULT);
    for (i = 0; i < 10; i++) {
        rn = sprng();
        printf("Process %d, random number %d: %f\n", k, i + 1, rn);
    }
    MPI_Finalize();
    exit(EXIT_SUCCESS);
}

which can be compiled with a command like:

mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o sprng_demo sprng_demo.c -lsprng -lgmp

You will need to edit paths here to match your installation. If it builds, it can be run on 2 processors with a command like:

mpirun -np 2 sprng_demo

If it doesn’t build, there are many possible reasons. Check the error messages carefully.
However, if the compilation fails at the linking stage with obscure messages about not being able to find certain SPRNG MPI functions, one possibility is that the SPRNG library has not been compiled with MPI support.

The problem with SPRNG is that it only provides a uniform random number generator. Of course we would really like to be able to use the SPRNG generator in conjunction with all of the sophisticated GSL routines for non-uniform random number generation. Many years ago I wrote a small piece of code to accomplish this, gsl-sprng.h. Download this and put it in your include path for the following example:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <gsl/gsl_rng.h>
#include "gsl-sprng.h"
#include <gsl/gsl_randist.h>

int main(int argc, char *argv[])
{
    int i, k, po;
    gsl_rng *r;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &k);
    r = gsl_rng_alloc(gsl_rng_sprng20);
    for (i = 0; i < 10; i++) {
        po = gsl_ran_poisson(r, 2.0);
        printf("Process %d, random number %d: %d\n", k, i + 1, po);
    }
    MPI_Finalize();
    exit(EXIT_SUCCESS);
}

A new GSL RNG, gsl_rng_sprng20, is created by including gsl-sprng.h immediately after gsl_rng.h. If you want to set a seed, do so in the usual GSL way, but make sure to set it to be the same on each processor. I have had several emails recently from people who claim that gsl-sprng.h “doesn’t work”. All I can say is that it still works for me! I suspect the problem is that people are attempting to use it with a version of SPRNG without MPI support. That won’t work… Check that the previous SPRNG example works, first. I can compile and run the above code with

mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o gsl-sprng_demo gsl-sprng_demo.c -lsprng -lgmp -lgsl -lgslcblas
mpirun -np 2 gsl-sprng_demo

Parallel Monte Carlo

Now we have parallel random number streams, we can think about how to do parallel Monte Carlo simulations. Here is a simple example:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
#include <gsl/gsl_rng.h>
#include "gsl-sprng.h"

int main(int argc, char *argv[])
{
    int i, k, N;
    double u, ksum = 0.0, Nsum;  /* ksum must start at zero before accumulation */
    gsl_rng *r;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &N);
    MPI_Comm_rank(MPI_COMM_WORLD, &k);
    r = gsl_rng_alloc(gsl_rng_sprng20);
    for (i = 0; i < 10000; i++) {
        u = gsl_rng_uniform(r);
        ksum += exp(-u * u);
    }
    MPI_Reduce(&ksum, &Nsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (k == 0) {
        printf("Monte carlo estimate is %f\n", (Nsum / 10000) / N);
    }
    MPI_Finalize();
    exit(EXIT_SUCCESS);
}

which calculates a Monte Carlo estimate of the integral

$\displaystyle I=\int_0^1 \exp(-u^2)\,du$

using 10k variates on each available processor. The MPI command MPI_Reduce is used to summarise the values obtained independently in each process. I compile and run with

mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o monte-carlo monte-carlo.c -lsprng -lgmp -lgsl -lgslcblas
mpirun -np 2 monte-carlo
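As a quick sanity check on the output, this integral has a closed form in terms of the error function:

$$I=\int_0^1 e^{-u^2}\,du=\frac{\sqrt{\pi}}{2}\,\mathrm{erf}(1)\approx 0.7468,$$

so the printed estimate should be close to 0.7468.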
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <gsl/gsl_rng.h>
#include "gsl-sprng.h"
#include <gsl/gsl_randist.h>

int main(int argc, char *argv[])
{
  int k, i, iters;
  double x, can, a, alpha;
  gsl_rng *r;
  FILE *s;
  char filename[15];
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &k);
  if (argc != 3) {
    if (k == 0)
      fprintf(stderr, "Usage: %s <iters> <alpha>\n", argv[0]);
    MPI_Finalize();
    return EXIT_FAILURE;
  }
  iters = atoi(argv[1]);
  alpha = atof(argv[2]);
  r = gsl_rng_alloc(gsl_rng_sprng20);
  sprintf(filename, "chain-%03d.tab", k);
  s = fopen(filename, "w");
  if (s == NULL) {
    perror("Failed open");
    MPI_Finalize();
    return EXIT_FAILURE;
  }
  x = gsl_ran_flat(r, -20, 20);
  fprintf(s, "Iter X\n");
  for (i = 0; i < iters; i++) {
    can = x + gsl_ran_flat(r, -alpha, alpha);
    a = gsl_ran_ugaussian_pdf(can) / gsl_ran_ugaussian_pdf(x);
    if (gsl_rng_uniform(r) < a)
      x = can;
    fprintf(s, "%d %f\n", i, x);
  }
  fclose(s);
  MPI_Finalize();
  return EXIT_SUCCESS;
}

I can compile and run this with the following commands

mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o mcmc mcmc.c -lsprng -lgmp -lgsl -lgslcblas
mpirun -np 2 mcmc 100000 1

Parallelising a single MCMC chain

The parallel chains approach turns out to be surprisingly effective in practice. Obviously the disadvantage of that approach is that “burn in” has to be repeated on every processor, which limits how much efficiency gain can be achieved by running across many processors. Consequently it is often desirable to try and parallelise a single MCMC chain. As MCMC algorithms are inherently sequential, parallelisation is not completely trivial, and most (but not all) approaches to parallelising a single MCMC chain focus on the parallelisation of each iteration. In order for this to be worthwhile, it is necessary that the problem being considered is non-trivial, having a large state space. The strategy is then to divide the state space into “chunks” which can be updated in parallel. I don’t have time to go through a real example in detail in this blog post, but fortunately I wrote a book chapter on this topic almost 10 years ago which is still valid and relevant today. The citation details are:

Wilkinson, D. J. (2005) Parallel Bayesian Computation, Chapter 16 in E. J. Kontoghiorghes (ed.) Handbook of Parallel Computing and Statistics, Marcel Dekker/CRC Press, 481-512.

The book was eventually published in 2005 after a long delay. The publisher which originally commissioned the handbook (Marcel Dekker) was taken over by CRC Press before publication, and the project lay dormant for a couple of years until the new publisher picked it up again and decided to proceed with publication. I have a draft of my original submission in PDF which I recommend reading for further information. The code examples used are also available for download, including several of the examples used in this post, as well as an extended case study on parallelisation of a single chain for Bayesian inference in a stochastic volatility model. Although the chapter is nearly 10 years old, the issues discussed are all still remarkably up-to-date, and the code examples all still work. I think that is a testament to the stability of the technology adopted (C, MPI, GSL). Some of the other handbook chapters have not stood the test of time so well. For basic information on getting started with MPI and key MPI commands for implementing parallel MCMC algorithms, the above-mentioned book chapter is a reasonable place to start.
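To give a rough flavour of the iteration-level idea before turning to the chapter, here is a minimal sketch (deliberately toy, and not the chapter’s example). It assumes a trivial model — a normal mean with known unit variance and a flat prior — and fakes the data in place; the only point is the communication pattern. Each process owns a chunk of the data and computes a partial log-likelihood, MPI_Allreduce sums the chunks, and because every process drives the chain with an identically seeded proposal stream (plain drand48 here for brevity, where real code would use gsl-sprng streams), all processes make the same accept/reject decisions and the single chain stays in sync.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>

/* Sketch: parallelising ONE Metropolis chain by parallelising each
   iteration. The data are split across processes; each process computes
   a partial log-likelihood and MPI_Allreduce sums the chunks. */

double partial_loglik(double mu, double *y, int nloc)
{
  int i;
  double s = 0.0;
  for (i = 0; i < nloc; i++)
    s += -0.5 * (y[i] - mu) * (y[i] - mu);  /* N(mu,1) log-density, up to a constant */
  return s;
}

int main(int argc, char *argv[])
{
  int k, i, iters = 10000, nloc = 100000;
  double mu = 0.0, can, ll, llcan, lp;
  double *y = malloc(nloc * sizeof(double));
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &k);
  srand48(1000 + k);                 /* fake local data: roughly N(2,1) via Box-Muller */
  for (i = 0; i < nloc; i++)
    y[i] = 2.0 + sqrt(-2.0*log(1.0 - drand48())) * cos(2.0*M_PI*drand48());
  srand48(1234);   /* common seed: identical proposals and accept decisions everywhere */
  lp = partial_loglik(mu, y, nloc);
  MPI_Allreduce(&lp, &ll, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  for (i = 0; i < iters; i++) {
    can = mu + 0.01 * (drand48() - 0.5);           /* random walk proposal */
    lp = partial_loglik(can, y, nloc);
    MPI_Allreduce(&lp, &llcan, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (log(drand48()) < llcan - ll) { mu = can; ll = llcan; }
  }
  if (k == 0) printf("Final state of the chain: mu = %f\n", mu);
  free(y);
  MPI_Finalize();
  return EXIT_SUCCESS;
}

The chapter develops this kind of decomposition properly, for a far more interesting model.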
Read it all through carefully, run the examples, and carefully study the code for the parallel stochastic volatility example. Once that is understood, you should find it possible to start writing your own parallel MCMC algorithms. For further information about more sophisticated MPI usage and additional commands, I find the annotated specification: MPI – The complete reference to be as good a source as any.
2020-01-28 10:04:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 58, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5438796877861023, "perplexity": 1239.4568246210729}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778168.77/warc/CC-MAIN-20200128091916-20200128121916-00319.warc.gz"}
http://ndl.iitkgp.ac.in/document/MDl5cHdNUUlnd0lnZHNoQXlvOG5lUDRMN1pYeWZlNWZKMURlN2N6NVArcz0
### On stochastic integrals on the extended Fock space and its riggings

Access Restriction: Open
Author: Kachanovsky, N. A.
Source: CiteSeerX
Content type: Text
File Format: PDF
Subject Domain (in DDC): Computer science, information & general works ♦ Data processing & computer science
Subject Keyword: Isometrical Isomorphism ♦ Equivalence Class ♦ Extended Fock Space ♦ So-called Extended Fock Space Ext ♦ Symmetric Fock Space ♦ So-called Generalized Meixner Measure ♦ Symmetric Tensor Power ♦ Classical Itô ♦ Wiener-Itô-Sigal ♦ Exact Definition ♦ Function Ext L2 ♦ Meixner Random Process ♦ Itô Integral ♦ Meixner Process ♦ Meixner Measure ♦ Complex-valued Symmetric Function ♦ Orthogonal Independent Increment ♦ Ext L2 ♦ Square Integrable Normal Martingale ♦ Hilbert Space ♦ Complex-valued Square ♦ Probability Space ♦ Flow Ft ♦ Schwartz Distribution Space

Abstract: Let $\mu = \mu_{\alpha,\beta}$ be the so-called generalized Meixner measure ([1, 2]) on the Schwartz distribution space $D' = D'(R_+)$ (depending on the parameters $\alpha$ and $\beta$, $\mu$ can be, in particular, the Gaussian, Poissonian, Gamma, Pascal or Meixner measure). Denote by $L^2(D',\mu)$ the space of complex-valued functions on $D'$ that are square integrable with respect to $\mu$. One can show ([1]) that $L^2(D',\mu)$ can be identified with the so-called extended Fock space $\Gamma_{ext} = \bigoplus_{n=0}^{\infty} H^{(n)}_{ext}\, n!$, where the Hilbert spaces $H^{(n)}_{ext}$ depend on the product $\alpha\beta$ and consist of (equivalence classes generated by) complex-valued symmetric functions (see the exact definition in [1, 2]). Denote by $I: \Gamma_{ext} \to L^2(D',\mu)$ the corresponding (generalized Wiener-Itô-Sigal) isometrical isomorphism. Note that $\Gamma_{ext}$ can be considered as an extension of the symmetric Fock space $\bigoplus_{n=0}^{\infty} L^2(R_+)^{\hat\otimes n}\, n!$, where $\hat\otimes$ denotes a symmetric tensor power. On the probability space $(D', C(D'), \mu)$, where $C(D')$ is the σ-algebra on $D'$ generated by cylinder sets, we consider a Meixner random process $M_\cdot = I(0, 1_{[0,\cdot)}, 0, \ldots)$. It follows from results of [1] that $M$ is a locally square integrable normal martingale with orthogonal independent increments with respect to the flow of (full and right-continuous by definition) σ-algebras $F_t := \sigma\{M_u : u \le t\}$. We define the Itô integral on the extended Fock space as the $I^{-1}$-image of the classical Itô stochastic integral with respect to the Meixner process. More exactly, we say that a function $f \in \Gamma_{ext} \otimes L^2(R_+)$ is Itô integrable if $(I \otimes 1)f \in L^2(D',\mu) \otimes L^2(R_+)$ is adapted with respect to the flow $\{F_t\}_{t \ge 0}$ of σ-algebras generated by $M$. In this case we define the Itô integral $I(f) \in \Gamma_{ext}$ by the formula $I(f) := I^{-1}\ldots$

Educational Role: Student ♦ Teacher
Age Range: above 22 year
Educational Use: Research
Education Level: UG and PG ♦ Career/Technical Study
2020-09-28 02:48:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8643026947975159, "perplexity": 7000.301521257639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401583556.73/warc/CC-MAIN-20200928010415-20200928040415-00639.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-20-problem-7p-intermediate-accounting-reporting-and-analysis-3rd-edition/9781337788281/sales-type-lease-with-receipts-at-end-of-year-lamplighter-company-the-lessor-agrees-to-lease/64fa8169-8c55-11e9-8385-02ee952b546e
# Sales-Type Lease with Receipts at End of Year

Lamplighter Company, the lessor, agrees to lease equipment to Tilson Company, the lessee, beginning January 1, 2019. The lease terms, provisions, and related events are as follows:

1. The lease is noncancelable and has a term of 8 years.
2. The annual rentals are $32,000, payable at the end of each year.
3. Tilson agrees to pay all executory costs directly to a third party.
4. The interest rate implicit in the lease is 14%.
5. The cost of the equipment to Lamplighter is $110,000.
6. The fair value of the equipment is $155,000.
7. Lamplighter incurs no material initial direct costs.
8. Lamplighter expects to collect all lease payments.
9. Lamplighter estimates that the fair value at the end of the lease term will be $20,000 and that the economic life of the equipment is 9 years. This value is not guaranteed by Tilson.

Required:

1. Calculate the selling price implied by the lease.
2. Next Level: State why this is a sales-type lease.
3. Prepare a table summarizing the lease receipts and interest income earned by the lessor for this sales-type lease.
4. Prepare an accretion schedule for the unguaranteed residual asset.
5. Prepare journal entries for Lamplighter for the years 2019, 2020, and 2021.
6. Next Level: Prepare partial balance sheets for Lamplighter for December 31, 2019, and December 31, 2020, showing how the accounts should be disclosed. Use the change in present value approach to classify the lease receivable as current and noncurrent.

### Intermediate Accounting: Reporting and Analysis, 3rd Edition
James M. Wahlen + 2 others
Publisher: Cengage Learning
ISBN: 9781337788281

Chapter 20, Problem 7P
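Requirement 1 lends itself to a quick numerical check: the selling price implied by the lease is the present value of the eight year-end rentals plus the present value of the unguaranteed residual, both discounted at the 14% implicit rate. A minimal sketch (my own illustration, using the closed-form annuity factor rather than the textbook's PV tables):

#include <stdio.h>
#include <math.h>

/* Selling price implied by the lease: PV of an 8-payment ordinary
   annuity of 32,000 at 14%, plus the PV of the 20,000 residual. */
int main(void)
{
  double rate = 0.14, rental = 32000.0, residual = 20000.0;
  int n = 8;
  double pv_rentals  = rental * (1.0 - pow(1.0 + rate, -n)) / rate;  /* ~148,443 */
  double pv_residual = residual * pow(1.0 + rate, -n);               /* ~  7,011 */
  printf("PV of rentals:  %12.2f\n", pv_rentals);
  printf("PV of residual: %12.2f\n", pv_residual);
  printf("Selling price:  %12.2f\n", pv_rentals + pv_residual);      /* ~155,455 */
  return 0;
}

Compile with something like gcc lease.c -lm. Small differences from a table-based answer are just rounding in the PV factors.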
2021-04-17 15:42:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17635105550289154, "perplexity": 6099.703456686928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038460648.48/warc/CC-MAIN-20210417132441-20210417162441-00412.warc.gz"}
https://studydaddy.com/question/acct-553-week-1-homework
QUESTION

# ACCT 553 Week 1 Homework

This paperwork of ACCT 553 Week 1 Homework includes:

1. Briefly discuss the purpose of the Sixteenth Amendment.
2. Explain the two "safe harbors" available to an individual taxpayer to avoid a penalty for underpayment of estimated tax.
3. Explain the distinction between an "above the line" deduction (i.e., FOR AGI) and a "below the line" deduction (i.e., FROM AGI). Which one is more valuable?
4. What is an installment sale? Is it a form of income "deferral"? When can't you elect this form of reporting?
2018-06-24 01:35:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6686437726020813, "perplexity": 11389.882211794438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865995.86/warc/CC-MAIN-20180624005242-20180624025242-00353.warc.gz"}
http://www.naukowiec.org/tablice/Matematyka/wartosci-funkcji-trygonometrycznych-wybranych-katow_387.html
## Values of the trigonometric functions of selected angles

The table below gives the values of the trigonometric functions of selected angles, expressed in radians and degrees.

| $$\alpha$$ (radians) | $$\alpha$$ (degrees) | $$\sin \alpha$$ | $$\cos \alpha$$ | $$\tan \alpha$$ | $$\cot \alpha$$ |
|---|---|---|---|---|---|
| $$0$$ | $$0$$ | $$0$$ | $$1$$ | $$0$$ | $$-$$ |
| $$\dfrac{\pi}{12}$$ | $$15$$ | $$\dfrac{\sqrt{6} - \sqrt{2}}{4}$$ | $$\dfrac{\sqrt{6} + \sqrt{2}}{4}$$ | $$2 - \sqrt{3}$$ | $$2 + \sqrt{3}$$ |
| $$\dfrac{\pi}{10}$$ | $$18$$ | $$\dfrac{\sqrt{5} - 1}{4}$$ | $$\dfrac{\sqrt{10 + 2\sqrt{5}}}{4}$$ | $$\dfrac{\sqrt{25 - 10\sqrt{5}}}{5}$$ | $$\sqrt{5 + 2\sqrt{5}}$$ |
| $$\dfrac{\pi}{8}$$ | $$22\tfrac{1}{2}$$ | $$\dfrac{\sqrt{2 - \sqrt{2}}}{2}$$ | $$\dfrac{\sqrt{2 + \sqrt{2}}}{2}$$ | $$\sqrt{2} - 1$$ | $$\sqrt{2} + 1$$ |
| $$\dfrac{\pi}{6}$$ | $$30$$ | $$\dfrac{1}{2}$$ | $$\dfrac{\sqrt{3}}{2}$$ | $$\dfrac{\sqrt{3}}{3}$$ | $$\sqrt{3}$$ |
| $$\dfrac{\pi}{4}$$ | $$45$$ | $$\dfrac{\sqrt{2}}{2}$$ | $$\dfrac{\sqrt{2}}{2}$$ | $$1$$ | $$1$$ |
| $$\dfrac{\pi}{3}$$ | $$60$$ | $$\dfrac{\sqrt{3}}{2}$$ | $$\dfrac{1}{2}$$ | $$\sqrt{3}$$ | $$\dfrac{\sqrt{3}}{3}$$ |
| $$\dfrac{5\pi}{12}$$ | $$75$$ | $$\dfrac{\sqrt{6} + \sqrt{2}}{4}$$ | $$\dfrac{\sqrt{6} - \sqrt{2}}{4}$$ | $$2 + \sqrt{3}$$ | $$2 - \sqrt{3}$$ |
| $$\dfrac{\pi}{2}$$ | $$90$$ | $$1$$ | $$0$$ | $$-$$ | $$0$$ |
| $$\pi$$ | $$180$$ | $$0$$ | $$-1$$ | $$0$$ | $$-$$ |
| $$\dfrac{3\pi}{2}$$ | $$270$$ | $$-1$$ | $$0$$ | $$-$$ | $$0$$ |
| $$2\pi$$ | $$360$$ | $$0$$ | $$1$$ | $$0$$ | $$-$$ |
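As a quick sanity check on the closed forms (my addition, not part of the original table), a few of the entries can be verified numerically:

#include <stdio.h>
#include <math.h>

/* Numerically verify some closed-form values from the table above. */
int main(void)
{
  printf("sin(pi/12) = %.6f  vs  (sqrt(6)-sqrt(2))/4   = %.6f\n",
         sin(M_PI/12.0), (sqrt(6.0) - sqrt(2.0))/4.0);
  printf("cos(pi/8)  = %.6f  vs  sqrt(2+sqrt(2))/2     = %.6f\n",
         cos(M_PI/8.0), sqrt(2.0 + sqrt(2.0))/2.0);
  printf("tan(pi/10) = %.6f  vs  sqrt(25-10*sqrt(5))/5 = %.6f\n",
         tan(M_PI/10.0), sqrt(25.0 - 10.0*sqrt(5.0))/5.0);
  return 0;  /* each pair agrees to the printed precision */
}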
2019-06-19 11:41:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4262438118457794, "perplexity": 184.52321332481668}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998959.46/warc/CC-MAIN-20190619103826-20190619125826-00380.warc.gz"}
https://www.spss-tutorials.com/one-sample-t-test/
SPSS TUTORIALS

# One-Sample T-Test – Quick Tutorial & Example

A one-sample t-test evaluates whether a population mean is likely to equal some hypothesized value x.

## One-Sample T-Test Example

A school director thinks his students perform poorly due to low IQ scores. Now, most IQ tests have been calibrated to have a mean of 100 points in the general population. So the question is: does the student population have a mean IQ score of 100? Now, our school has 1,114 students and the IQ tests are somewhat costly to administer. Our director therefore draws a simple random sample of N = 38 students and tests them on 4 IQ components:

• verb (Verbal Intelligence)
• math (Mathematical Ability)
• clas (Classification Skills)
• logi (Logical Reasoning Skills)

The raw data thus collected are in this Googlesheet, partly shown below. Note that a couple of scores are missing due to illness and unknown reasons.

## Null Hypothesis

We'll try to demonstrate that our students have low IQ scores by rejecting the null hypothesis that the mean IQ score for the entire student population is 100 for each of the 4 IQ components measured. Our main challenge is that we only have data on a sample of 38 students from a population of N = 1,114. But let's first just look at some descriptive statistics for each component:

## Descriptive Statistics

Our first basic conclusion is that our 38 students score lower than 100 points on all 4 IQ components. The differences for verb (99.29) and math (97.97) are small. Those for clas (93.91) and logi (94.74) seem somewhat more serious. Now, our sample of 38 students may obviously come up with slightly different means than our population of N = 1,114. So what can we (not) conclude regarding our population? We'll try to generalize these sample results to our population with 2 different approaches: a significance test and a confidence interval. Both approaches require some assumptions, so let's first look into those.

## Assumptions

The assumptions required for our one-sample t-tests are

1. independent observations and
2. normality: the IQ scores must be normally distributed in the entire population.

Do our data meet these assumptions? First off, our students didn't interact during their tests, so our observations are likely to be independent. Second, normality is only needed for small sample sizes, say N < 25 or so. For the data at hand, normality is no issue; for smaller sample sizes, you could evaluate the normality assumption before testing. The data at hand meet all assumptions, so let's now look into the actual tests.

## Formulas

If we'd draw many samples of students, such samples would come up with different means. We can compute the standard deviation of those means across hypothetical samples: the standard error of the mean, or $$SE_{mean}$$:

$$SE_{mean} = \frac{SD}{\sqrt{N}}$$

For our first IQ component, this results in

$$SE_{mean} = \frac{12.45}{\sqrt{38}} = 2.02$$

Our null hypothesis is that the population mean $$\mu_0 = 100$$. If this is true, then the average sample mean should also be 100. We now basically compute the z-score for our sample mean: the test statistic $$t$$

$$t = \frac{M - \mu_0}{SE_{mean}}$$

For our first IQ component, this results in

$$t = \frac{99.29 - 100}{2.02} = -0.35$$

If the assumptions are met, $$t$$ follows a t distribution with the degrees of freedom, or $$df$$, given by

$$df = N - 1$$

For a sample of 38 respondents, this results in

$$df = 38 - 1 = 37$$

Given $$t$$ and $$df$$, we can simply look up the 2-tailed significance level, $$p$$ = 0.73, in this Googlesheet, partly shown below.
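If you would rather compute these quantities than look them up, the calculation is easy to reproduce with the GSL's CDF routines. A minimal sketch (my addition, not part of the tutorial; compile with something like gcc ttest.c -lgsl -lgslcblas -lm):

#include <stdio.h>
#include <math.h>
#include <gsl/gsl_cdf.h>

/* One-sample t-test for the first IQ component:
   M = 99.29, SD = 12.45, N = 38, H0: mu = 100. */
int main(void)
{
  double M = 99.29, SD = 12.45, mu0 = 100.0;
  int N = 38;
  double se = SD / sqrt((double)N);           /* standard error, ~2.02  */
  double t  = (M - mu0) / se;                 /* test statistic, ~-0.35 */
  int df    = N - 1;                          /* degrees of freedom, 37 */
  double p  = 2.0 * gsl_cdf_tdist_Q(fabs(t), (double)df);  /* 2-tailed p, ~0.73 */
  printf("SE = %.2f, t = %.2f, df = %d, p = %.2f\n", se, t, df, p);
  return 0;
}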
## Interpretation

As a rule of thumb, we reject the null hypothesis if p < 0.05. We just found that p = 0.73, so we don't reject our null hypothesis: given our sample data, the population mean being 100 is a credible statement. So precisely what does p = 0.73 mean? Well, it means there's a 0.73 (or 73%) probability that t < -0.35 or t > 0.35. The figure below illustrates how this probability results from the sampling distribution, t(37). Next, remember that t is just a standardized mean difference. For our data, t = -0.35 corresponds to a difference of -0.71 IQ points. Therefore, p = 0.73 means that there's a 0.73 probability of finding an absolute mean difference of at least 0.71 points. Roughly speaking, the sample mean we found is likely to occur if the null hypothesis is true.

## Effect Size

The only effect size measure for a one-sample t-test is Cohen’s D, defined as

$$Cohen's\;D = \frac{M - \mu_0}{SD}$$

For our first IQ test component, this results in

$$Cohen's\;D = \frac{99.29 - 100}{12.45} = -0.06$$

Some general conventions are that

• | Cohen’s D | = 0.20 indicates a small effect size;
• | Cohen’s D | = 0.50 indicates a medium effect size;
• | Cohen’s D | = 0.80 indicates a large effect size.

This means that Cohen’s D = -0.06 indicates a negligible effect size for our first test component. Cohen’s D is completely absent from SPSS except for SPSS 27. However, we can easily obtain it from JASP. The JASP output below shows the effect sizes for all 4 IQ test components. Note that the last 2 IQ components (clas and logi) almost have medium effect sizes. These are also the 2 components whose means differ significantly from 100: p < 0.05 for both means (third table column).

## Confidence Intervals for Means

Our data came up with sample means for our 4 IQ test components. Now, we know that sample means typically differ somewhat from their population counterparts. So what are likely ranges for the population means we're after? This is often answered by computing 95% confidence intervals. We'll demonstrate the procedure for our last IQ component, logical reasoning. Since we have 34 observations, t follows a t-distribution with df = 33. We'll first look up which t-values enclose the most likely 95%, using the inverse t-distribution. We'll do so by typing =T.INV(0.025,33) into any cell of a Googlesheet, which returns -2.03. Note that 0.025 is 2.5%. This is because the 5% most unlikely values are divided over both tails of the distribution, as shown below. This t-value of -2.03 tells us that 95% of our sample means are expected to fluctuate within ± 2.03 standard errors, $$SE_{mean}$$. For our last IQ component,

$$SE_{mean} = \frac{12.57}{\sqrt{34}} = 2.16$$

We now know that 95% of our sample means are estimated to fluctuate within ± 2.03 · 2.16 = 4.39 IQ test points. Last, we combine this fluctuation with our observed sample mean of 94.74:

$$CI_{95\%} = [94.74 - 4.39,\, 94.74 + 4.39] = [90.35,\, 99.12]$$

Note that our 95% confidence interval does not enclose our hypothesized population mean of 100. This implies that we'll reject this null hypothesis at α = 0.05. We don't even need to run the actual t-test to draw this conclusion.

## APA Style Reporting

A single t-test is usually reported in text as in “The mean for verbal skills did not differ from 100, t(37) = -0.35, p = 0.73, Cohen’s D = -0.06.” For multiple tests, a simple overview table as shown below is recommended. We feel that confidence intervals for means (not mean differences) should also be included.
Since the APA does not mention these, we left them out for now.

[Table: APA Style Reporting Table Example for One-Sample T-Tests]

Right. Well, I can't think of anything else that is relevant regarding the one-sample t-test. If you do, don't be shy. Just write us a comment below. We're always happy to hear from you!
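As a final footnote to the confidence interval section above: the same GSL routines reproduce that calculation too. Again a sketch of my own, not part of the tutorial:

#include <stdio.h>
#include <math.h>
#include <gsl/gsl_cdf.h>

/* 95% confidence interval for the logical reasoning mean:
   M = 94.74, SD = 12.57, N = 34. */
int main(void)
{
  double M = 94.74, SD = 12.57;
  int N = 34;
  double se = SD / sqrt((double)N);                 /* ~2.16 */
  double tc = gsl_cdf_tdist_Pinv(0.975, N - 1.0);   /* critical t, ~2.03 */
  printf("95%% CI: [%.2f, %.2f]\n", M - tc*se, M + tc*se);
  return 0;  /* prints roughly [90.35, 99.13] */
}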
2020-08-11 21:05:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7747768759727478, "perplexity": 1233.228835033626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00344.warc.gz"}
https://cstheory.stackexchange.com/questions/16391/how-to-approximate-minimum-clique-edge-cover
# How to approximate minimum clique edge cover

I'd like to take an undirected graph and express it (meaning all of its edges) using only cliques (ideally minimizing their sum cardinality). It's clear that actually finding the minimum solution is at least as hard as the set-cover problem, but set-cover has good approximation algorithms. Does anyone know a good approximation algorithm for this problem? In particular, if I start with each edge as its own clique, then arbitrarily choose two cliques from my list (whose union also forms a clique), merge them, and repeat until I can't merge any more, what's the worst case (as a ratio to the minimal solution)? Is there some heuristic I should use to determine what to merge next?

• As an example, in the graph where A B C D form a square and there's a diagonal edge between A and C, the solution with minimal sum cardinality is {ABC, CDA} for a total of 6. Other solutions would be {ABC, CD, AD} with sum cardinality 7 and {AB, BC, CD, AC, AD} with sum cardinality 10 – Feb 7 '13 at 0:59
• Maybe I misunderstood something, but wouldn't simply taking the $m$ edges minimize the sum cardinality? It cannot be smaller than $m$, right? – Jun 9 '20 at 1:56
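For concreteness, here is a small sketch of the greedy merge heuristic described in the question (my own illustration; it says nothing about the worst-case ratio). Cliques are kept as vertex bitsets, and two cliques are merged whenever their union still induces a clique:

#include <stdio.h>

/* Greedy clique merging for graphs with at most 64 vertices.
   Start with one 2-clique per edge; repeatedly merge any two cliques
   whose union is still a clique, until no merge is possible. */

static int adj[64][64], nv;

static int is_clique(unsigned long long s)
{
  for (int u = 0; u < nv; u++)
    if (s >> u & 1)
      for (int v = u + 1; v < nv; v++)
        if ((s >> v & 1) && !adj[u][v]) return 0;
  return 1;
}

int main(void)
{
  /* The example from the comments: square A-B-C-D plus diagonal A-C
     (A=0, B=1, C=2, D=3). */
  int edges[][2] = {{0,1},{1,2},{2,3},{3,0},{0,2}};
  int ne = 5, nc = 0, merged = 1, total = 0;
  unsigned long long cl[64];
  nv = 4;
  for (int e = 0; e < ne; e++) {
    adj[edges[e][0]][edges[e][1]] = adj[edges[e][1]][edges[e][0]] = 1;
    cl[nc++] = (1ULL << edges[e][0]) | (1ULL << edges[e][1]);
  }
  while (merged) {                      /* repeat until a fixed point */
    merged = 0;
    for (int i = 0; i < nc && !merged; i++)
      for (int j = i + 1; j < nc && !merged; j++)
        if (is_clique(cl[i] | cl[j])) {
          cl[i] |= cl[j];
          cl[j] = cl[--nc];             /* delete clique j */
          merged = 1;
        }
  }
  for (int i = 0; i < nc; i++)
    total += __builtin_popcountll(cl[i]);  /* gcc/clang builtin */
  printf("%d cliques, sum cardinality %d\n", nc, total);
  return 0;  /* prints: 2 cliques, sum cardinality 6 */
}

On this example the arbitrary merge order happens to reach the optimal {ABC, ACD}; in general the merge order matters, which is exactly what the question asks about.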
2021-10-16 06:28:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40087321400642395, "perplexity": 575.7034508236055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00571.warc.gz"}
https://codereview.stackexchange.com/questions/142269/multiplying-two-or-three-dimensional-arrays-with-broadcasting
Multiplying two- or three-dimensional arrays with broadcasting

Background

Disclaimer: Skip if you are not interested in the background. The code is far far simpler than this. The problem is more about programming than math. Here is the definition of multiplication with broadcasting in human-readable language.

I am trying to optimize the multiplication of large n-dimensional arrays $A$ and $B$ with shapes $(I_0,I_1,I_2, ..., I_{n-1})$ and $(J_0,J_1,J_2,..., J_{n-1})$ with broadcasting, as in matlab and numpy. $A$ and $B$ are stored in row-major order in memory. The two arrays can be multiplied with broadcasting if for each $i$, at least one of the following three statements is true:

Case S1: $I_i == J_i$,
Case S2: $I_i == 1$,
Case S3: $J_i == 1$.

From the shape, we can define the stride. The stride is the number of linear-index steps required to take one step along each dimension of the n-dimensional array. The linear index is the index into the 1D array in memory that stores the n-dimensional array. The strides for $A$, $(P_0, P_1, P_2,...,P_{n-1})$, satisfy $P_i = I_{i+1}I_{i+2}I_{i+3}\cdots I_{n-1}$, with $P_{n-1} = 1$. Similarly, the strides for $B$ are $(Q_0, Q_1, Q_2, ..., Q_{n-1})$.

The general form of the nested loop is:

int cal_size(int* shape, int n){
    int size = 1;
    for(int i = 0; i < n; ++i) size *= shape[i];
    return size;
}

int* cal_stride(int* shape, int n){
    int size = cal_size(shape, n);
    int* stride = new int[n];
    stride[0] = size/shape[0];
    for(int i = 1; i < n; ++i){
        stride[i] = stride[i-1]/shape[i];
    }
    return stride;
}

int n; //the number of dimensions (given)
int I[] = {I0,I1,I2,...,I_last}; //shape of n-dimensional array A (given)
int J[] = {J0,J1,J2,...,J_last}; //shape of n-dimensional array B (given)
int A[cal_size(I, n)]; //1D array in memory that stores
                       //the n-dimensional array A in row-major order
int B[cal_size(J, n)]; //1D array in memory that stores
                       //the n-dimensional array B in row-major order
int* P = cal_stride(I, n);
int* Q = cal_stride(J, n);
int i[n];

//The nested loop (pseudocode)
int Ai, Bi;
for(i[0] = 0; i[0] < I[0]; ++i[0]){    //Case S1
for(i[1] = 0; i[1] < J[1]; ++i[1]){    //Case S2
for(i[2] = 0; i[2] < I[2]; ++i[2]){    //Case S3
...
    Ai = i[0]*P[0]       //for case S1 or S3
        +i[1]*0          //for case S2
        +i[2]*P[2]       //for case S1 or S3
        ...
        +i[n-1]*P[n-1];  //for case S1 or S3
    //The above line converts the multi-index
    //(i[0],i[1],i[2],...,i[n-1]) over the n-dimensional
    //array to the index of the 1D array in memory.
    Bi = i[0]*Q[0]       //for case S1 or S2
        +i[1]*Q[1]       //for case S1 or S2
        +i[2]*0          //for case S3
        ...
        +i[n-1]*Q[n-1];  //for case S1 or S2
    A[Ai] *= B[Bi];
}}}...}

(I think I can deal with this general part with meta-programming. For now, I am just concerned with optimization.)

Question

Is it possible to rewrite the following loop to give better performance? Can memoization help? I am not interested in unrolling because the actual array has 10,000+ elements.

#include <iostream>
using namespace std;

int main(void){
    int A[24];
    int B[12];
    int iA,iB;
    //version 1.
    //(only one version is used, but I show all versions I can think of.)
    for(int i = 0; i < 2; ++i){
        for(int j = 0; j < 3; ++j){
            for(int k = 0; k < 4; ++k){
                iA = i*12 + j*4 + k*1;
                iB = 4*j + k*1;
                A[iA] *= B[iB];
                cout << iA << "," << iB << endl;
            }
        }
    }
    //version 2.
    for(int i = 0; i < 2; ++i){
        for(int j = 0; j < 12; ++j){
            iA = i*12 + j;
            iB = j;
            A[iA] *= B[iB];
            cout << iA << "," << iB << endl;
        }
    }
    //version 3
    for(int iA = 0; iA < 2*3*4; iA+=12){
        for(int iB = 0; iB < 3*4; ++iA, ++iB){
            A[iA] *= B[iB];
            cout << iA << "," << iB << endl;
        }
    }
}

Assembly instructions generated by the gcc compiler:

gcc godbolt of version 1
gcc godbolt of version 3

(version 1 and 3 give the same assembly code after fixing the bug)

Wait a minute... in the actual case, the shapes are not known at compilation time. Are versions 1 and 3 still the same in this case?

• Can you please reformulate your question in a mathematically exact way. Why can A and B have different dimensionality with different numbers of elements. Do you only multiply the elements or do you want a full matrix multiplication? – miscco Sep 23 '16 at 20:02
• It is simpler than full matrix multiplication. I seriously doubt most programmers would want the formal mathematical definition. But it is here. Hopefully this won't make everyone facepalm and start to throw errors. – rxu Sep 23 '16 at 20:48
• Hopefully this can work with openmp without too much trouble. – rxu Sep 24 '16 at 0:32

Bug

Version 3 does half as many multiplications as the other versions. The problem is in the loops:

for(int iA = 0; iA < 2*3*4; iA+=12){
    for(int iB = 0; iB < 3*4; ++iA, ++iB){

As you can see, you have both iA+=12 in the outer loop and ++iA in the inner loop. For this version, if you are incrementing iA in the inner loop, you shouldn't also increment it in the outer loop. The correct way would be:

for(int iA = 0; iA < 2*3*4; ){
    for(int iB = 0; iB < 3*4; ++iA, ++iB){
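Beyond that indexing fix, a common way to avoid the per-case reasoning altogether (my own addition, sketching the trick used by numpy-style implementations, not code from the question) is to precompute the strides and then zero out the stride of every broadcast dimension. One generic nested loop then covers cases S1-S3 with no branches in the inner loop. A 3-D sketch, under the assumption that A already has the full output shape:

#include <stdio.h>

/* Broadcast multiply A *= B for 3-D row-major arrays. A dimension of
   extent 1 in B gets stride 0, so the single B element is reused
   across that axis without any index tests inside the loop. */
static void bcast_mul(double *A, const int *I, const double *B, const int *J)
{
  int P[3], Q[3], d, i, j, k;
  P[2] = 1; P[1] = I[2]; P[0] = I[1]*I[2];      /* strides of A */
  Q[2] = 1; Q[1] = J[2]; Q[0] = J[1]*J[2];      /* strides of B */
  for (d = 0; d < 3; d++)
    if (J[d] == 1) Q[d] = 0;                    /* broadcast dimension */
  for (i = 0; i < I[0]; i++)
    for (j = 0; j < I[1]; j++)
      for (k = 0; k < I[2]; k++)
        A[i*P[0] + j*P[1] + k*P[2]] *= B[i*Q[0] + j*Q[1] + k*Q[2]];
}

int main(void)
{
  double A[24], B[12];
  int I[3] = {2, 3, 4}, J[3] = {1, 3, 4};   /* B is broadcast along dim 0 */
  int i;
  for (i = 0; i < 24; i++) A[i] = 1.0;
  for (i = 0; i < 12; i++) B[i] = 2.0;
  bcast_mul(A, I, B, J);
  printf("A[0] = %g, A[23] = %g\n", A[0], A[23]);  /* both 2 */
  return 0;
}

Because the strides are ordinary run-time data, this works when the shapes are only known at run time, and the inner loop is still a simple strided multiply that the compiler can vectorise.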
2019-10-15 10:42:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30406448245048523, "perplexity": 3918.296927684581}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657949.34/warc/CC-MAIN-20191015082202-20191015105702-00317.warc.gz"}
https://www.bartleby.com/questions-and-answers/find-the-average-value-of-the-function-over-the-interval-f-x-9-x2-0-3-hint-use-geometry-to-evaluate-/fd1ca861-86b0-4dca-9415-6f722f2f0c9d
# Find the average value of the function over the interval

Question

Find the average value of the function over the interval: f(x) = 9 − x², [0, 3]. Hint: Use geometry to evaluate the integral.
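Taking the function exactly as printed, the average value follows directly from the definition. (The geometry hint suggests the intended integrand may actually have been $\sqrt{9-x^2}$, a quarter circle, so both computations are shown; this second reading is an assumption on my part.)

$\displaystyle \bar f = \frac{1}{3-0}\int_0^3 (9 - x^2)\,dx = \frac{1}{3}\Big[9x - \frac{x^3}{3}\Big]_0^3 = \frac{27 - 9}{3} = 6.$

If instead $f(x) = \sqrt{9 - x^2}$, the integral is the area of a quarter circle of radius 3, so $\int_0^3 \sqrt{9-x^2}\,dx = \frac{9\pi}{4}$ and the average value is $\frac{1}{3}\cdot\frac{9\pi}{4} = \frac{3\pi}{4}$.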
2021-07-25 13:59:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.895421028137207, "perplexity": 173.00693295454442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00424.warc.gz"}
https://www.mathway.com/examples/calculus/operations-on-functions/finding-roots-zeros?id=30
# Calculus Examples

To find the roots of the equation, replace y with 0 and solve.

1. Rewrite the equation as … .
2. Take the root of both sides of the equation to eliminate the exponent on the left side.
3. The complete solution is the result of both the positive and negative portions of the solution.
4. Simplify the right side of the equation.
5. Rewrite … as … .
6. Pull terms out from under the radical, assuming positive real numbers.
7. … is equal to … .
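The specific expressions were lost when this page was extracted, but the steps match the standard pattern for an equation of the form $y = x^2 - c$. As a purely illustrative stand-in (not the equation from the original example), take $y = x^2 - 9$:

$\displaystyle x^2 - 9 = 0 \;\Rightarrow\; x^2 = 9 \;\Rightarrow\; x = \pm\sqrt{9} = \pm\sqrt{3^2} = \pm 3,$

where "pull terms out from under the radical" is the step $\sqrt{3^2} = 3$, assuming positive real numbers.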
2019-03-19 04:44:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330558180809021, "perplexity": 407.3584476968155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201885.28/warc/CC-MAIN-20190319032352-20190319054352-00207.warc.gz"}
https://www.kamakuraco.com/2014/04/16/comcast-corporation-a-bond-market-view/
# Comcast Corporation: A Bond Market View

04/16/2014 08:14 AM

Comcast Corporation (CMCSA) has a tender offer outstanding for Time Warner Cable (TWC) that is pending regulatory approval. The outcome of that transaction will have a significant impact on the risk and return to both common shareholders and bond holders. Still, in today's analysis, we make no assumptions about the future and let the bond market facts lead us to the appropriate conclusions. Those facts presumably reflect a trade-weighted guess as to how the Time Warner Cable bid will end up. Today's study incorporates Comcast Corporation bond price data as of April 14, 2014 to get an institutional, bond market view of the company. We analyze the potential risk and return to bondholders of Comcast Corporation using 93 trades on 19 bond issues and a trading volume of $40 million in today's analysis.

Conclusion: We believe a majority of analysts would rank Comcast Corporation as "investment grade." The outcome of the Time Warner Cable transaction is a major potential event and one that will trigger a re-examination of this conclusion. The bond market shows a credit spread to default probability ratio that is below average. While the ratios for Comcast Corporation were nowhere near as low as we saw yesterday for the bonds of Vodafone Group PLC (VOD), 204 bond issues traded on April 14 at a better return to risk ratio than the best bond issued by Comcast Corporation.

The Analysis

Institutional investors around the world are required to prove to their audit committees, senior management, and regulators that their investments are in fact "investment grade." For many investors, "investment grade" is an internal definition; for many banks and insurance companies, "investment grade" is also defined by regulators. We consider whether or not a reasonable U.S. bank investor would judge Comcast Corporation to be "investment grade" under the June 13, 2012 rules mandated by the Dodd-Frank Act of 2010. The default probabilities used are described in detail in the daily default probability analysis posted by Kamakura Corporation. The full text of the Dodd-Frank legislation as it concerns the definition of "investment grade" is summarized at the end of our analysis of Citigroup (C) bonds published December 9, 2013.

Assuming the recovery rate in the event of default would be the same on all bond issues of the same issuer, a sophisticated investor who has moved beyond legacy ratings seeks to maximize revenue per basis point of default risk from each incremental investment, subject to risk limits on macro-factor exposure on a fully default-adjusted basis. In this note, we also analyze the maturities where the credit spread/default probability ratio is highest for Comcast Corporation.

Term Structure of Default Probabilities

Maximizing the ratio of credit spread to matched-maturity default probabilities requires that default probabilities be available at a wide range of maturities.
The graph below shows the current default probabilities for Comcast Corporation ranging from one month to 10 years on an annualized basis. For maturities longer than ten years, we assume that the ten year default probability is a good estimate of default risk. The current annualized default probabilities range from 0.04% at one month to 0.02% at 1 year and 0.27% at ten years. Note that these default probabilities are a small fraction of the default risk we found in yesterday's post on Vodafone Group PLC.

Credit-Adjusted Dividend Yield

We explained in a recent post on General Electric Company (GE) how default probabilities and the associated credit spreads for a bond issuer can be used to calculate the credit-adjusted dividend yield on a stock. That analysis makes use of a comparison between the yield on the issuer's promise to pay $1 in the future versus the yield on a similar promise by the U.S. government to pay $1 at the same time. Using the maximum smoothness approach to both the U.S. Treasury curve and to Comcast Corporation credit spreads, we can generate the zero coupon bond yields on their promise to pay $1 in the future. In tomorrow's note, we present the credit-adjusted dividend yield on Comcast Corporation using this data.

Summary of Recent Bond Trading Activity

The National Association of Securities Dealers launched TRACE (Trade Reporting and Compliance Engine) in July 2002 in order to increase price transparency in the U.S. corporate debt market. The system captures information on secondary market transactions in publicly traded securities (investment grade, high yield and convertible corporate debt) representing all over-the-counter market activity in these bonds. We used all of the 19 bond issues mentioned above in this analysis. The graph below shows 6 different yield "curves" that are relevant to a risk and return analysis of Comcast Corporation bonds. These curves reflect the noise in the TRACE data, as some of the trades are small odd-lot trades. The lowest curve, in dark blue, is the yield to maturity on U.S. Treasury bonds (TLT)(TBT), interpolated from the Federal Reserve H15 statistical release for that day, which exactly matches the maturity of the traded bonds of Comcast Corporation. The next curve, in the lighter blue, shows the yields that would prevail if investors shared the default probability views outlined above, assumed that recovery in the event of default would be zero, and demanded no liquidity premium above and beyond the default-adjusted risk-free yield. The orange dots graph the lowest yields reported by TRACE on that day on Comcast Corporation bonds. The green dots display the trade-weighted average yield reported by TRACE on the same day. The red dots show the maximum yield in each of the Comcast Corporation issues recorded by TRACE. The black dots and connecting black line show the yield consistent with the best fitting trade-weighted credit spread explained below. The data shows that the credit spreads widen fairly steadily as maturity lengthens, the typical pattern for a high quality bond issuer. The high, low and average credit spreads at each maturity are graphed below for Comcast Corporation. We have done nothing to smooth the data reported by TRACE, which includes both large lot and small lot bond trades. For the reader's convenience, we fitted a cubic polynomial (in black) that explains the trade-weighted average spread as a trade-weighted function of years to maturity.
Using default probabilities in addition to credit spreads, we can analyze the number of basis points of credit spread per basis point of default risk at each maturity. For Comcast Corporation, the credit spread to default probability ratio generally ranges from about 2.4 to 15 for maturities of 5 years and under. For the longer maturities, the credit spread to default probability ratio falls in a range from 2.4 to 5.1 times. The credit spread to default probability ratios are shown in graphic form below for Comcast Corporation. How do these reward to risk ratios compare with "normal" levels? The best way to answer that question precisely is to compare them to the credit spread to default probability ratios for all fixed rate non-call senior debt issues with trading volume of more than $5 million and a maturity of at least one year on April 14. The distribution of the credit spreads on the 288 traded bonds that met these criteria on April 14 is first plotted in this histogram. The median credit spread for all 288 trades was 0.826%. The average credit spread was 1.043%. The next graph shows the wide dispersion of the credit spread to default probability ratios on those 288 April 14 trades (only ratios of 40 or less are graphed). The median credit spread to default probability ratio on those 288 trades was 8.6 and the average credit spread to default probability ratio was 12.3. Comcast Corporation's credit spread to default probability ratios ranked 205th, 237th, and 250th out of all 288 trades on April 14, 2014. Here are the 20 "best trades" done April 14, 2014 that had the highest ratios of credit spread to default probability, along with the same data for the most heavily traded bonds of Comcast Corporation.

Credit Default Swap Analysis

For the week ended April 4, 2014 (the most recent week for which data is available), the Depository Trust & Clearing Corporation reported there were 16 trades involving $221 million in notional principal on Comcast Corporation, ranking the firm the 155th most actively traded reference name. The weekly number of credit default swap trades on Comcast Corporation since the DTCC began publicizing weekly trade volume is shown here, together with the notional principal of credit default swap trading on Comcast Corporation over the same period. Note that other legal entities in the Comcast Corporation organization have traded in the credit default swap market. Those entities are not reflected in the two previous graphs. On a cumulative basis, the default probabilities for Comcast Corporation range from 0.02% at 1 year to 2.69% at 10 years. We give the relative rankings of the company's default probabilities below. Over the last decade, the 1 year and 5 year annualized default probabilities for Comcast Corporation have been moderately volatile, with the one year default probability exceeding 2.00% in 2008-2009. The annualized 5 year default probabilities exceeded 0.80% at the same time. As explained earlier in this note, the firm's default probabilities are estimated based on a rich combination of financial ratios, equity market inputs, and macro-economic factors. Over a long period of time, macro-economic factors drive the financial ratios and equity market inputs as well. If we link macro factors to the fitted default probabilities over time, we can derive the net impact of macro factors on the firm, including both their direct impact through the default probability formula and their indirect impact via changes in financial ratios and equity market inputs.
The net impact of the macro-economic factors driving the historical movements in the default probabilities of Comcast Corporation has been derived using historical data beginning in January 1990. A key assumption of such analysis, like any econometric time series study, is that the business risks of the firm being studied are relatively unchanged during this period. With that caveat, the historical analysis shows that Comcast Corporation default risk responds to changes in 11 risk factors among the 28 world-wide macro factors used by the Federal Reserve in its 2014 Comprehensive Capital Assessment and Review stress testing program, for which the results were announced in March. These macro factors explain 58.4% of the variation in the default probability of Comcast Corporation. The remaining variation is the estimated idiosyncratic credit risk of the firm. Comcast Corporation can be compared with its peers in the same industry sector, as defined by Morgan Stanley (MS) and reported by Compustat. For the U.S. "consumer discretionary, media" sector, Comcast Corporation has the following percentile rankings for its default probabilities among its 124 peers at these maturities:

1 month: 44th percentile
1 year: 21st percentile
3 years: 17th percentile
5 years: 15th percentile
10 years: 14th percentile

For the longer time horizons, Comcast Corporation default probabilities are in the safest quartile of all sector peer group firms.

Comparison with Legacy Ratings

Taking still another view, the actual and statistically predicted credit ratings for Comcast Corporation both show a rating in the middle of "investment grade" territory. The statistically predicted rating is one notch below the legacy ratings of Moody's (MCO) and Standard & Poor's (MHFI). The legacy credit ratings of Comcast Corporation have changed just twice in the last decade.

Conclusions

Before reaching a final conclusion about the "investment grade" status of Comcast Corporation, we look at more market data. First, we look at Comcast Corporation credit spreads versus credit spreads on every bond in the "technology, media, and telecommunications" sector that traded on April 14: Comcast Corporation credit spreads were near the lower end of the range for the peer group. We now look at the matched maturity default probabilities on those traded bonds for both Comcast Corporation and the peer group. Somewhat inconsistent with the percentile rankings above, the default probabilities for Comcast Corporation are on the higher end of the industry peer group. This is because the bonds trading most heavily are generally a much better group of credits than the industry in aggregate. We now turn to the legacy "investment grade" peers. First we compare traded credit spreads on April 14, 2014: again, Comcast Corporation credit spreads are solidly in the safest half of the investment grade peer group range. Investment grade default probabilities on a matched maturity basis for the bonds traded on April 14 are shown in this graph: again the default probabilities for Comcast Corporation rank on the high end of the traded investment grade peer group. We believe a majority of analysts would rank Comcast Corporation as "investment grade." The outcome of the Time Warner Cable transaction is a major potential event and one that will trigger a re-examination of this conclusion. The bond market shows a credit spread to default probability ratio that is below average.
While the ratios for Comcast Corporation were nowhere near as low as we saw yesterday for the bonds of Vodafone Group PLC, 204 bond issues traded on April 14 at a better return to risk ratio than the best bond issued by Comcast Corporation.

Author's Note

Regular readers of these notes are aware that we generally do not list the major news headlines relevant to the firm in question. We believe that other authors on SeekingAlpha, Yahoo, at The New York Times, The Financial Times, and the Wall Street Journal do a fine job of this. Our omission of those headlines is intentional. Similarly, to argue that a specific news event is more important than all other news events in the outlook for the firm is something we again believe is inappropriate for this author. Our focus is on current bond prices, credit spreads, and default probabilities, key statistics that we feel are critical for both fixed income and equity investors.

Donald R. Van Deventer, Ph.D.

Don founded Kamakura Corporation in April 1990 and currently serves as its chairman and chief executive officer where he focuses on enterprise wide risk management and modern credit risk technology. His primary financial consulting and research interests involve the practical application of leading edge financial theory to solve critical financial risk management problems. Don was elected to the 50 member RISK Magazine Hall of Fame in 2002 for his work at Kamakura.
2021-05-16 15:17:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18485085666179657, "perplexity": 3617.1796916267363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991224.58/warc/CC-MAIN-20210516140441-20210516170441-00574.warc.gz"}
https://www.scielo.br/j/rbs/a/rLfcNscLtnL8wsYSPrKV9jM/?lang=en
# Abstracts

The use of film coating on the performance of treated corn seed
Uso de film coating e desempenho de sementes de milho tratadas

Suemar Alexandre Gonçalves Avelar (corresponding author <suemaralexandre@yahoo.com.br>); Fabianne Valéria de Sousa; Guilherme Fiss; Leopoldo Baudet; Silmar Teichert Peske

Departamento de Fitotecnia, Universidade Federal de Pelotas (UFPEL), Caixa Postal 354, 96001-970 - Capão do Leão, RS, Brasil

ABSTRACT

The main objective of seed coating technology using polymers is to improve the physical, physiological and sanitary characteristics of seed performance. The objectives of the present study were to determine: the plantability of corn seeds treated with insecticide, fungicide and graphite, covered with a film coating; the dust retention on treated corn seeds; and the leaching of applied products on corn seeds covered by a film coating. Seed plantability was determined by counting the skips and double seeds; dust was determined by using fiberglass paper, in mg.100 g-1 of seeds; and the leaching was determined by collecting the material leached in a 10 cm layer of sand after irrigation. The following conclusions were made: seeds covered with film coating effectively reduce skips and double seeds; film coating effectively reduces the formation of dust from the seeds; film coated seeds minimize the leaching of the insecticide applied in seed treatment; and there are differences in effectiveness related to film coating type and dosage.

Index terms: polymer, plantability, dust, leaching.

Introduction

The use of high physiological quality seeds is among the best practices for obtaining maximum crop yields. These seeds are more likely to achieve a high performance when exposed to different environmental conditions, expressed in a high percentage and speed of emergence, good stand establishment, good initial seedling development (Tillmann and Miranda, 2006) and increased final production. During seed production several strategies are adopted to ensure a high seed quality.
After sowing, seeds are still exposed to biotic and abiotic environmental factors (Delouche, 2005). Agricultural soils harbor many pathogenic microorganisms that may interact with seeds and seedlings (Munkvold and O'Mara, 2002) and reduce their performance, causing seed rot, seedling death or root rot (Pinto, 2000). Phytophagous insects in the soil can also damage seedlings (Girolami et al., 2009) and significantly reduce the plant population. In this context, seed treatment is an alternative for improving seed and seedling performance. Treatment of corn seeds with the fungicides difenoconazole, fludioxonil and captan has proved efficient in controlling Fusarium, although efficacy differs among pathogen species, with the former two being generally more effective than captan (Munkvold and O'Mara, 2002). Seed treatment with insecticides of the neonicotinoid group (systemic action), for example, protects seedlings from the attack of several phytophagous insects (Elbert et al., 2008). To improve the efficiency of seed treatment, the use of an adhesive polymer is recommended, creating a "film coating" on the seed surface; this technology allows the application of several products and multiple coatings (Taylor and Harman, 1990). This sophisticated application process, besides allowing a precise distribution of active ingredients on the seed surface without changing the seed's shape and with a weight gain of at most 2%, may also provide better adhesion and protection for fungicides and insecticides (Kunkur et al., 2007). Rivas et al. (1998) and Pereira et al. (2005) verified that the use of polymers neither affects the physiological quality of corn seeds nor interferes with the effect of chemical treatment on high quality seeds. Karam et al. (2007) found that polymers did not affect the viability, vigor or longevity of corn seeds. Several studies discuss the influence of polymers in seed treatment on the physiological quality and performance of seeds of many species, both for crop establishment in the field and during storage. However, there is a lack of information on how different polymers interact with the physical properties of seeds of different species and with the different products used for seed treatment. Therefore, the objectives of the present study were to determine: the plantability of corn seeds treated with insecticide, fungicide and graphite and covered with a film coating; the dust retention on treated corn seeds; and the leaching of products applied to corn seeds covered by a film coating.

Materials and Methods

Experiment 1 - Plantability

Manually graded corn seeds (using screens with oblong perforations of 6 x 19 mm and with round perforations of 7.75 mm) were treated as described in Table 1. Coating and seed treatment were done manually, using 700 g of seeds per experimental unit, with the product mix and water applied so that the total volume (product + water) reached 1500 mL.100 kg-1 of seeds. The seeds were treated in plastic bags by adding the products to the seeds and shaking for a minute until complete distribution of the products over the seed surface was achieved. The seed plantability test was done on a bench-mounted seed dispenser mechanism, composed of an electric motor with a speed regulator, a perforated-disc seed distribution system, and a V-shaped conveyor belt, set to run at 3.0 km.h-1 with a seeding rate of 5 seeds.m-1 (nominal spacing = distance between seeds of 0.2 m).
The assessment of the skip percentage was performed with the belt in motion over a distance of 137 linear meters: a distance between seeds greater than 1.5 times the nominal spacing was counted as a skip and, concomitantly, the percentage of doubles was evaluated, with a distance smaller than 0.5 times the nominal spacing counted as a double. The angle of repose was also determined, using a sample of 700 g placed in a box with dimensions of 0.15 x 0.30 x 0.30 m. The box had a division in the middle, with one side filled with seeds. After the division was removed, allowing the seeds to flow, the angle of repose was calculated as arctan(a/b), where "a" is the height of the seed pile in the corner of the box and "b" is the horizontal distance over which the seeds flow, following the methodology described by Mantovani et al. (1999).

Experiment 2 - Dust retention

The dust retention test was performed in specific equipment, the Heubach dustmeter, to measure the dust lost from the seeds. Previously weighed fiberglass papers were placed in the machine, where the dust was impregnated in the paper; the papers were then weighed again to determine the dust loss. Two replications of 100 g of seeds were used for each experimental unit. A completely randomized design was used, with a total of 18 treatments and three replications. Mean values were submitted to analysis of variance and the Scott-Knott test at a 5% probability level.

Experiment 3 - Leaching

A quantity of 100 g of corn seeds received the treatments described in Table 2. The coating and seed treatments were made in plastic bags, according to the methodology described in Experiment 1, except for the total volume (product plus water), which was 2400 mL. After treatment, 50 seeds per experimental unit were sown on the surface of polypropylene cups with a perforated bottom and a capacity of 500 cm3, containing a 7 cm layer of sterilized sand. After sowing, the seeds were covered with a 3 cm layer of sand, simulating a 10 cm soil column with seeds sown at 3 cm depth. The seeds were then subjected to a simulated rainfall of 50 mm, with the water added to the cups using a 1000 mL graduated beaker. The leachate was collected in 100 cm3 polypropylene cups and sent to a soil analysis laboratory for chemical analysis of zinc in solution. Zinc was measured by atomic absorption spectrophotometry and was the element chosen because the Furazin 310 TS used for seed treatment contains 210 g of zinc per liter. A completely randomized design with eight treatments and four replications was used, with a total of 32 experimental units. The data were subjected to analysis of variance and, when the mean values presented significant differences, submitted to the Scott-Knott test for mean separation at the 5% probability level.

Results and Discussion

Experiment 1 - Plantability

The seed treatments showed statistical differences in the number of seeds.m-1, skips, doubles and angle of repose (Table 3). All treatments showed a distribution of seeds per meter equal to or higher than the planned 5 seeds.m-1 (Table 3). However, treatments containing graphite had significantly more seeds per meter than the others, showing the difficulty of achieving the proper seeding rate due to seed flowability. Seeds coated with the polymer PolySeed CF showed a statistically lower percentage of skips than seeds treated with graphite, with no polymer, or with other polymers at a higher dosage (Table 3), with differences being almost twofold in some cases.
Regardless of the treatment, seeds coated with polymers showed a lower percentage of doubles than those treated with graphite, with or without insecticide and fungicide (Table 3). In this case, some treatments with polymers also showed values 50% lower than for seeds coated with graphite. Seed treatment with or without polymers increased the angle of repose when compared to the control and to seeds coated with graphite associated with an insecticide and fungicide. The lower angle of repose of the seeds treated with graphite means that these seeds stick less to each other and flow faster during the sowing procedure. With the machine distribution system adjusted to sow five seeds per meter in an area of 100 m x 100 m (1 hectare) with a row spacing of 0.8 m, the planned population density is 62,500 plants.ha-1. In the Graphite + Thiamethoxam + Fungicide treatment, on average six seeds per meter were obtained, for a density of 75,000 plants.ha-1, an increase of 12,500 plants.ha-1. Considering a seed cost of around US$0.01 per seed, this represents an extra US$125.00.ha-1 in the total cost of crop establishment. On the other hand, the percentages of skips and doubles pointed to a more uniform distribution with the film coating and a non-uniform distribution with graphite; a uniform distribution can result in a suitable plant population and a higher yield, as pointed out by Schuch et al. (2000). The Graphite + Thiamethoxam + Fungicide treatment can show a reduction of 4% in plant density in some areas while, in others, the same treatment can increase the plant population by 14.6%. The grain yield potential of corn has been found to depend on plant density. Doubles and skips are two attributes that affect plant distribution in the field, causing uneven spacing between plants, which reduces the yield potential of a given crop. Plants do compensate for skips by producing more canopy, but this compensation falls short of the yield a normally spaced plant would provide. With double plants, on the other hand, the competition between them also reduces the yield of the population. Therefore, both lower and higher densities result in a yield reduction (Ipsilandis and Vafias, 2005). This fact emphasizes the importance of seeding precision to achieve the best performance and assure high economic returns. The results for graphite agree with Mantovani et al. (1999), who verified that graphite plays an important role in improving the flowability of treated corn seeds, but disagree with the higher efficiency those authors obtained with graphite. The ease of flow in the seed reservoir depends, among other factors, on the coefficient of internal friction of the seeds and between them and the seed reservoir walls; the coefficient of friction and the angle of repose can be assumed to be of the same magnitude (Mantovani et al., 1999). Graphite use increased the percentage of doubles, in some treatments by close to 75% compared to the control, which probably also led to the highest number of skips, increasing the lack of uniformity in the distribution. The number of seeds per meter and the doubles seem to be related to the angle of repose.
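As an aside, the plantability bookkeeping above is easy to script. Here is a minimal Python sketch using the thresholds (1.5x and 0.5x the 0.2 m nominal spacing), the 0.8 m row spacing, and the US$0.01 seed cost from the text; the individual spacing measurements are invented for illustration.

```python
import math

NOMINAL = 0.2        # m between seeds (5 seeds per metre, from the text)
ROW_SPACING = 0.8    # m between rows (from the text)

def classify_spacings(spacings, nominal=NOMINAL):
    """Count skips (> 1.5x nominal) and doubles (< 0.5x nominal)."""
    skips = sum(s > 1.5 * nominal for s in spacings)
    doubles = sum(s < 0.5 * nominal for s in spacings)
    return skips, doubles

def angle_of_repose(a, b):
    """arctan(a/b): a = seed pile height at the box corner (m),
    b = horizontal distance over which the seeds flow (m)."""
    return math.degrees(math.atan(a / b))

# Hypothetical spacing measurements along the belt (m):
print(classify_spacings([0.21, 0.05, 0.19, 0.35, 0.20]))  # -> (1, 1)
print(round(angle_of_repose(0.12, 0.22), 1))              # -> 28.6 degrees

# Density and cost arithmetic from the discussion:
metres_of_row_per_ha = 10_000 / ROW_SPACING   # 12,500 m of row per hectare
target = 5 * metres_of_row_per_ha             # 62,500 plants/ha as planned
actual = 6 * metres_of_row_per_ha             # 75,000 plants/ha with graphite
print((actual - target) * 0.01)               # -> 125.0 extra US$/ha in seed
```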
Experiment 2 - Dust retention

The polymer treatments reduced dust formation from the seeds to a greater or lesser extent, depending on the dosage and the combination of treatments, when compared with seeds treated without polymers and with those treated with graphite. However, seeds with graphite, when treated with thiamethoxam, showed behavior similar to some combinations of polymers and products (Table 4). Some seed treatment combinations with film coating showed a very low dust level (0.1 mg.100 g-1 of seeds) when compared with seeds treated without polymer or with graphite, while others showed a reduction of 20% to 35% in dust, and of almost 70% when compared with seeds treated only with graphite. Dust formation in treated seeds is related to the adherence of the products to the seeds, indicating the compatibility between the formulations; dust represents product loss after treatment and, consequently, inefficiency, because the seeds will not have the expected protection (Avelar et al., 2009). The use of a film coating improves seed treatment standards, stability and adhesion, reduces the environmental impact of toxic dust and assures seed protection, besides minimizing the exposure of natural enemies to active ingredients during seeding.

Experiment 3 - Leaching

In the leaching test, the polymer PolySeed CF was the most effective treatment for fungicide retention, on a zinc basis, in coated corn seeds, independent of the dosage (Table 5). Product retention with seed treatment also depends on the adherence of the applied products, the compatibility between the different formulations used and the characteristics of the seed coat. The polymer PolySeed CF showed high efficiency, with leaching similar to that of untreated seeds, while seeds treated without polymer showed leaching more than 15 times that of untreated seeds and five times that of seeds treated and coated with PolySeed CF. This result confirms the compatibility of the products used, reflected in the strong adherence of the products to the seeds even after the simulated rainfall. This assures adequate protection of the seeds, besides reducing environmental impacts such as soil contamination and the leaching of pesticides into the groundwater. Some polymers did not show the same efficiency and, in some cases, leached more than seeds treated without polymers; the polymer ColorSeed C3, for example, presented leaching twice as high as the control. This may be attributed to an incompatibility between these polymers and the product used, resulting in reduced adherence: when water washes through the soil in which such seeds are sown, the weak adherence of the products allows their solubilization, causing higher losses.

Conclusions

Seed coating with polymers improves plantability, reducing the percentage of skips and doubles; polymers reduce the formation of dust from seeds; polymers minimize insecticide leaching from treated corn seed; and treatment efficiency differs with the polymer and the dosage used.

Submitted on 11/11/2010. Accepted for publication on 02/04/2012.

References

• AVELAR, S.A.G.; BAUDET, L.; VILLELA, F.A. The improvement of the seed treatment process. Seed News, v.13, n.5, p.8-11, 2009.
• DELOUCHE, J.C. Seed quality and performance. Seed News, v.9, n.5, p.34-35, 2005.
• ELBERT, A.; HAAS, M.; SPRINGER, B.; THIELERT, W.; NAUEN, R. Applied aspects of neonicotinoid uses in crop protection. Pest Management Science, v.64, n.11, p.1099-1105, 2008. http://onlinelibrary.wiley.com/doi/10.1002/ps.1616/abstract
• GIROLAMI, V.; MAZZON, L.; SQUARTINI, A.; MORI, N.; MARZARO, M.; DI BERNARDO, A.; GREATTI, M.; GIORIO, C.; TAPPARO, A. Translocation of neonicotinoid insecticides from coated seeds to seedling guttation drops: a novel way of intoxication for bees. Journal of Economic Entomology, v.102, n.5, p.1808-1815, 2009.
• IPSILANDIS, C.G.; VAFIAS, B.N. Plant density effects on grain yield per plant in maize: breeding implications. Asian Journal of Plant Science, v.4, n.1, p.31-39, 2005. http://scialert.net/qredirect.php?doi=ajps.2005.31.39&linkid=pdf
• KARAM, D.; MAGALHÃES, P.C.; PADILHA, L. Efeito da adição de polímeros na viabilidade, no vigor e na longevidade de sementes de milho. Sete Lagoas: Embrapa Milho e Sorgo, 2007. 6p. (Embrapa Milho e Sorgo, Circular Técnica 94). http://www.cnpms.embrapa.br/publicacoes/publica/2007/circular/Circ_94.pdf
• KUNKUR, V.; HUNJE, R.; PATIL, N.K.B.; VYAKARNHAL, B.S. Effect of seed coating with polymer, fungicide and insecticide on seed quality in cotton during storage. Karnataka Journal of Agricultural Sciences, v.20, n.1, p.137-139, 2007. http://203.129.218.157/ojs/index.php/kjas/article/viewFile/42/42
• MANTOVANI, E.C.; MANTOVANI, B.H.M.; CRUZ, I.; MEWES, W.L.C.; OLIVEIRA, A.C. Desempenho de dois sistemas distribuidores de sementes utilizados em semeadoras de milho. Pesquisa Agropecuária Brasileira, v.34, n.1, p.93-98, 1999. http://www.scielo.br/pdf/pab/v34n1/8714.pdf
• MUNKVOLD, G.P.; O'MARA, J.K. Laboratory and growth chamber evaluation of fungicidal seed treatments for maize seedling blight caused by Fusarium species. Plant Disease, v.86, n.2, p.143-150, 2002. http://apsjournals.apsnet.org/doi/pdf/10.1094/PDIS.2002.86.2.143
• PEREIRA, C.E.; OLIVEIRA, J.A.; EVANGELISTA, J.R.E. Qualidade fisiológica de sementes de milho tratadas associadas a polímeros durante o armazenamento. Ciência e Agrotecnologia, v.29, n.6, p.1201-1208, 2005. http://www.scielo.br/pdf/cagro/v29n6/v29n6a14.pdf
• PINTO, N.F.J.A. Tratamento fungicida de sementes de milho contra fungos do solo e o controle de Fusarium associado às sementes. Scientia Agricola, v.57, n.3, p.483-486, 2000. http://www.scielo.br/pdf/sa/v57n3/2679.pdf
• RIVAS, B.A.; McGEE, D.C.; BURRIS, J.S. Evaluación del potencial de polímeros como agentes envolventes de fungicidas en el tratamiento de semillas de maíz para el control de Pythium spp. Fitopatologia Venezuelana, v.11, n.4, p.471-488, 1998. http://sian.inia.gob.ve/repositorio/revistas_ci/Agronomia%20Tropical/at4804/arti/arias_b.htm
• SCHUCH, L.O.B.; NEDEL, J.L.; ASSIS, F.N.; MAIA, M.S. Vigor de sementes e análise de crescimento de aveia preta. Scientia Agricola, v.57, n.2, p.305-312, 2000. http://www.scielo.br/pdf/sa/v57n2/v57n2a18.pdf
• TAYLOR, A.G.; HARMAN, G.E. Concepts and technologies of selected seed treatments. Annual Review of Phytopathology, v.28, p.321-339, 1990. http://www.annualreviews.org/doi/pdf/10.1146/annurev.py.28.090190.001541
• TILLMANN, M.A.A.; MIRANDA, D.M. Análise de sementes. In: PESKE, S.T.; LUCCA FILHO, O.A.; BARROS, A.C.S.A. (Ed.). Sementes: fundamentos científicos e tecnológicos. Pelotas: UFPel, 2006, p.159-255.

# Publication Dates

• Publication in this collection: 27 June 2012
• Date of issue: June 2012
2021-07-30 15:28:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5711008906364441, "perplexity": 7737.714460133701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00704.warc.gz"}
https://tex.stackexchange.com/questions/427244/need-help-resizing-floating-table
# need help resizing floating table

Hello all! Any suggestions for this issue? Thanks! Here is the code I am currently working with: \section{Empirical Results} \renewcommand{\thetable}{\arabic{table}} \begingroup \begin{table}[h] \caption {\label{tab:table1} Estimates of Pay-Performance Sensitivity: Coefficients of Ordinary Least Squares Regressions of CEO Salary on Shareholder Wealth (Standard Error in Parentheses} \begin{tabular}{llllll} Independent Variable & CEO Salary & CEO Total Compensation & \\ \hline Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\% Change) & 0.0408163 (.05) & .025 (.05) \\ R-square & .005 & .005 \\ F-statistic for $\beta$ & 0.0246913\footnotemark[1] & 0.024691\footnotemark[1] & \\ Sample Size & 0.0199999 & 0.020000 \\ \end{tabular} \begin{tabbing} $\footnotemark[1]$Significant at the 1\% Level. \end{tabbing} \end{table} \endgroup When I try to implement @bernard's code, I run into some issues. Adding these packages: \newcommand{\rvec}{\mathrm {\mathbf {r}}} \usepackage{graphicx} \usepackage{subfigure} \usepackage{amsmath} \usepackage{xcolor} \usepackage{color, soul} \usepackage[symbol]{footmisc} \usepackage[utf8]{inputenc} \usepackage{makecell, threeparttable, booktabs} \usepackage{lipsum} results in an error, as does the following code: \documentclass[letterpaper, 10 pt, conference]{ieeeconf} \documentclass[twocolumn]{article} Any suggestions? Thanks! Edit for @samcarter: I will go with the column-width table. Can I improve it from here, or is this as good as it's going to get? It looks awkward in a few places. Here's the code that was provided to me to create it: \section{Empirical Results} %TABLE: \begin{table}[htbp] \caption{\label{tab:table1} Estimates of Pay-Performance Sensitivity: Coefficients of Ordinary Least Squares Regressions of CEO Salary on Shareholder Wealth (Standard Error in Parentheses} \begin{tabularx}{\columnwidth}{@{}XXX@{}} \toprule Independent Variable & CEO Salary & CEO Total Compensation\\ \midrule Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\% Change) & 0.0408163 (.05) & .025 (.05) \\ R-square & .005 & .005 \\ F-statistic for $\beta$ & 0.0246913\footnotemark[1] & 0.024691\footnotemark[1] \\ Sample Size & 0.0199999 & 0.020000\\ \bottomrule \end{tabularx} \begin{tabbing} $\footnotemark[1]$Significant at the 1\% Level. \end{tabbing} \end{table} Thanks again! Edit 4/23: Here is where I am now. (Getting an error at \end{tabularx}): \documentclass[twocolumn]{article} \usepackage{graphicx} %for table \newcommand{\rvec}{\mathrm {\mathbf {r}}} \usepackage{graphicx} \usepackage{subfigure} \usepackage{amsmath} \usepackage{xcolor} \usepackage{color, soul} \usepackage[symbol]{footmisc} \usepackage{booktabs} \usepackage{caption} \usepackage{tabularx} %TABLE: \begin{table}[htbp] \caption{\label{tab:table1} Estimates of Pay-Performance Sensitivity: Coefficients of Ordinary Least Squares Regressions of CEO Salary on Shareholder Wealth (Standard Error in Parentheses} \begin{tabularx}{\columnwidth}{@{}LLL@{}} \toprule Independent Variable & CEO Salary & CEO Total Compensation\\ \midrule Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\% Change) & 0.0408163 (.05) & .025 (.05) \\ R-square & .005 & .005 \\ F-statistic for $\beta$ & 0.0246913\footnotemark[1] & 0.024691\footnotemark[1] \\ Sample Size & 0.0199999 & 0.020000\\ \bottomrule \end{tabularx} \begin{tabbing} $\footnotemark[1]$Significant at the 1\% Level.
\end{tabbing} \end{table} Never use \resizebox with a table: it leads to inconsistent font sizes and can make your tables unreadable. I suggest loading makecell, which allows for line breaks in standard cells, and threeparttable to manage table notes, rather than the tabbing environment. Here is possible code, with some improvements, using the rules from booktabs: \documentclass[twocolumn]{article} \usepackage[utf8]{inputenc} \usepackage{makecell, threeparttable, booktabs} \usepackage{lipsum} \begin{document} \section{Empirical Results} \lipsum[1] \renewcommand{\thetable}{\arabic{table}} \begin{table}[h] \begin{threeparttable} \caption {\label{tab:table1} Estimates of Pay-Performance Sensitivity: Coefficients of Ordinary Least Squares Regressions of CEO Salary on Shareholder Wealth (Standard Error in Parentheses)} \begin{tabular}{lll} \toprule & CEO Salary & \thead[l]{CEO Total\\ Compensation} \\ \midrule Intercept & 0.0555555 & 0.055556 \\ \thead[l]{Change in \\Shareholder Wealth\\ (\% Change)} &\makecell[l]{0.0408163\\ (.05)} & \makecell[l]{0.025\\ (.05)} \\ R-square & 0.005 & 0.005 \\ F-statistic for $\beta$ & 0.0246913\tnote{1} & 0.024691\tnote{1} \\ Sample Size & 0.0199999 & 0.020000 \\ \bottomrule \end{tabular} \smallskip\footnotesize \begin{tablenotes}[flushleft] \item[1]Significant at the 1\,\% Level. \end{tablenotes} \end{threeparttable} \end{table} \end{document} • Thanks for this! When I swap out this document class for the one I currently have, there is an error, and when I add the packages you posted, I also get an error. How should I implement these changes? – texmex Apr 18 '18 at 16:45 • Could you post (or link to) a small document showing the problem? – Bernard Apr 18 '18 at 16:50 • I will show my problem in the original post – texmex Apr 18 '18 at 16:57 • @user159324: The reason is quite simple; you loaded the ieeeconf class first, then the article class. You can't declare two classes for the same document. Drop the latter. Some other remarks: for the definition of \rvec, \mathbf is enough (it is an upright font). Also, the package subfigure is obsolete (not maintained for more than 12 years); use the subfigure environment from subcaption. Also, there is no need to load color if you load xcolor (with option [table] if you use colour in tables, as it loads colortbl and extends it). Last, if your input encoding is utf8, load soulutf8. – Bernard Apr 18 '18 at 18:16 Your table is wider than your column. To squeeze it into the column width, you could use a tabularx environment, which will automatically determine the available width for the columns and add line breaks as needed. Furthermore, I suggest using the booktabs package for nicer tables.
\documentclass[twocolumn]{article} \usepackage{tabularx} \usepackage{booktabs} \usepackage{caption} \usepackage{array} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \begin{document} \section{Empirical Results} \begin{table}[htbp] \caption{\label{tab:table1} Estimates of Pay-Performance Sensitivity: Coefficients of Ordinary Least Squares Regressions of CEO Salary on Shareholder Wealth (Standard Error in Parentheses)} \begin{tabularx}{\columnwidth}{@{}LLL@{}} \toprule Independent Variable & CEO Salary & CEO Total Compensation\\ \midrule Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\% Change) & 0.0408163 (.05) & .025 (.05) \\ R-square & .005 & .005 \\ F-statistic for $\beta$ & 0.0246913\footnotemark[1] & 0.024691\footnotemark[1] \\ Sample Size & 0.0199999 & 0.020000\\ \bottomrule \end{tabularx} \begin{tabbing} $\footnotemark[1]$Significant at the 1\% Level. \end{tabbing} \end{table} \end{document} Another possibility: with table*, the table can span both columns. \documentclass[twocolumn]{article} \usepackage{booktabs} \usepackage{caption} \begin{document} \section{Empirical Results} \begin{table*}[htbp] \caption{\label{tab:table1} Estimates of Pay-Performance Sensitivity: Coefficients of Ordinary Least Squares Regressions of CEO Salary on Shareholder Wealth (Standard Error in Parentheses)} \centering \begin{tabular}{lll} \toprule Independent Variable & CEO Salary & CEO Total Compensation\\ \midrule Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\% Change) & 0.0408163 (.05) & .025 (.05) \\ R-square & .005 & .005 \\ F-statistic for $\beta$ & 0.0246913\footnotemark[1] & 0.024691\footnotemark[1] \\ Sample Size & 0.0199999 & 0.020000\\ \bottomrule \end{tabular} \begin{tabbing} $\footnotemark[1]$Significant at the 1\% Level. \end{tabbing} \end{table*} \end{document} • Hello! Thanks for the response. I'm new to Overleaf and LaTeX. How do I implement those first lines of code? Do I go back to the beginning of the document and just copy and paste them in? Thanks again! – texmex Apr 18 '18 at 16:20 • @user159324 The \usepackage stuff has to be somewhere before \begin{document}, the table obviously where you want it to be. – user36296 Apr 18 '18 at 16:27 • Thank you for these example tables. Very helpful. I like the look of the latter one, but can this not go into the body of the paper? It won't copy and paste into the empirical results section. I also like the look of the one that is column width, but I get an error in this case: "LaTeX Error: Environment tabularx undefined." Any tips? Thanks! – texmex Apr 18 '18 at 23:02 • @user159324 For using tabularx you need to load the package of the same name. As for your problems with the two-column solution: please make a minimal working example (MWE) that reproduces your problem. – user36296 Apr 18 '18 at 23:51 • Please see the original post for a follow-up question. Thanks! – texmex Apr 19 '18 at 12:30 Note to OP: If you have a substantially new question, you should post a new query. Otherwise, the answers that were posted previously (and which were hopefully helpful) become meaningless to future readers of your posting. The reason your latest code doesn't compile -- other than the fact that it's missing \begin{document} and \end{document} statements -- is that you didn't define the L column type. I can't help but remark that your table is confusing and hence not easy to understand.
For instance, the legend refers to one dependent variable (CEO salary), but the table clearly reports the results of two regressions, with two separate dependent variables. Worse, assuming the F-statistic numbers you posted are real, it looks like you've misinterpreted the meaning of statistical significance. If the F-statistic is really 0.0247, then there's not a 1% but actually a 99% chance that any (linear) association between the dependent variables and the regressor is due solely to chance. Put differently, it looks like there's an exceedingly low likelihood that you've uncovered a meaningful linear relationship between change in shareholder wealth and CEO compensation: Is this what you meant to report? By the way, one usually doesn't need to highlight insignificant statistical results with asterisks. Incidentally, the F-statistic tests the significance of the entire regression, and not just the significance of an individual regressor and its coefficient $\beta$. By the way, how does one get fractional sample sizes of roughly 0.2? Sample sizes are generally expressed as integers... Finally, the subfigure package has been deprecated for a decade or more. Don't use it! Use either the subfig or the subcaption package. Here's my attempt to clean up some of the challenges pointed out above. \documentclass[twocolumn]{article} \usepackage{graphicx} %for table \newcommand{\rvec}{\mathrm {\mathbf {r}}} %% \usepackage{graphicx} % no need to load a package twice %%%%%%\usepackage{subfigure} % don't load this deprecated package! \usepackage{amsmath} \usepackage{xcolor} \usepackage[symbol]{footmisc} \usepackage{booktabs,caption,tabularx} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \begin{document} %TABLE: \begin{table}[htbp] \caption{Estimates of pay-performance sensitivity} \label{tab:table1} \raggedright OLS regressions of CEO salary and CEO total compensation on change in shareholder wealth. Standard errors in parentheses. \medskip \begin{tabularx}{\columnwidth}{@{}LLL @{}} \toprule Independent variable & CEO salary & CEO total compensation\\ \midrule Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\%~Change) & 0.0408163 (.05) & .025 (.05) \\ R-squared & .005 & .005 \\ F-statistic & 0.0246913$^{*}$ & 0.024691$^{*}$ \\ Sample Size & 0.0199999 & 0.020000\\ \bottomrule \end{tabularx} \smallskip $^{*}$ Significant at the 1\% level. \end{table} \end{document} • These numbers were just filler. You are correct... that would have been nonsense. Thanks for the heads up though! – texmex Apr 27 '18 at 20:59 As a supplement to the excellent @mico answer (with its careful review of the statistics), and limited only to table design, the changes are: • \small font size is used in the table • the last two columns are of type l • vertical space (2pt) is added above and below cell contents (with \makegapedcells from the makecell package) • the column headers use the \thead macro from makecell \documentclass[twocolumn]{article} \usepackage[symbol]{footmisc} \usepackage{booktabs, makecell, tabularx} \setcellgapes{2pt} \newcolumntype{L}{>{\raggedright\arraybackslash}X} \usepackage{caption} \begin{document} %TABLE: \begin{table}[htbp] \caption{Estimates of pay-performance sensitivity} \label{tab:table1} \raggedright OLS regressions of CEO salary and CEO total compensation on change in shareholder wealth. Standard errors in parentheses.
\medskip \setlength\tabcolsep{4pt} \small \makegapedcells \begin{tabularx}{\columnwidth}{@{}Lll @{}} \toprule Independent variable & CEO salary & \thead[l]{CEO total\\ compensation}\\ \midrule Intercept & 0.0555555 & 0.055556 \\ Change in Shareholder Wealth (\%~Change) & 0.0408163 (.05) & 0.025 (.05) \\ R-squared & .005 & .005 \\ F-statistic & 0.0246913$^{*}$ & 0.024691$^{*}$ \\ Sample Size & 0.0199999 & 0.020000\\ \bottomrule \end{tabularx} \smallskip $^{*}$ Significant at the 1\% level. \end{table} \end{document}
2019-10-14 13:50:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4687858819961548, "perplexity": 4895.8991923172125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653247.25/warc/CC-MAIN-20191014124230-20191014151730-00370.warc.gz"}
https://rosettacommons.org/node/11050
# Using Rmsd RosettaScripts filter with alignment files

I have a homology model that contains a residue insertion, so a 1:1 residue RMSD calculation cannot be performed. I would like to try the Rmsd RosettaScripts filter with an alignment file to get the correct C-alpha RMSD calculation. However, I'm not sure how to create this alignment file. I tried the Grishin alignment file I used for homology modelling but receive this error:

File: src/protocols/protein_interface_design/filters/RmsdFilter.cc:217 [ ERROR ] UtilityExitException ERROR: Assertion template_segm.size() > 10 failed. MSG:there must be more that 10 residues to calculate RMSD over

My first guess is that I'm not using the correct alignment format. The output file is also attached in case I'm overlooking anything. This is new territory for me, so any advice on performing these RMSD calculations is appreciated!

Sun, 2020-11-08 15:51 bjharris

You're correct that the Rmsd filter doesn't take Grishin-formatted alignment files. It takes FASTA-formatted ones.

Sun, 2020-11-08 18:56 rmoretti
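For readers landing here later: FASTA alignments are plain text, with one `>` header line per sequence and `-` characters marking gaps. The sketch below is purely illustrative; the sequence content, header names, and gap placement are invented and are not Rosetta-specific requirements (consult the filter's documentation for those):

```
>model
MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAGPILSRVGDGTQDNLS
>reference
MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQA-PILSRVGDGTQDNLS
```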
2021-09-23 02:57:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5803076028823853, "perplexity": 7327.6493000552655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00017.warc.gz"}
https://pos.sissa.it/414/903/
Volume 414 - 41st International Conference on High Energy Physics (ICHEP2022) - Poster Session

A phenomenological note on the missing $\rho_2$ meson

Pre-published on: October 22, 2022

Abstract: The $\rho_2$ meson is the missing isovector member of the meson nonet with the quantum numbers $J^{PC}=2^{--}$. It belongs to the class of $\rho$-mesons such as the vector meson $\rho(770)$, the excited vector $\rho(1700)$ and the tensor $\rho_3(1690)$. Yet, despite the rich experimental and theoretical studies for other $\rho$-meson states, no resonance that could be assigned to the $\rho_2$ meson has been measured. In this note, we present the results for the mass and dominant decay channels of the $\rho_2$ meson within the extended Linear Sigma Model.

DOI: https://doi.org/10.22323/1.414.0903
2023-01-28 00:17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6468106508255005, "perplexity": 1694.1452797484617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00738.warc.gz"}
http://math.stackexchange.com/questions/107692/prove-that-a-finite-union-of-closed-sets-is-also-closed/107711
# Prove that a finite union of closed sets is also closed

Let $X$ be a metric space. If $F_i \subset X$ is closed for $1 \leq i \leq n$, prove that $\bigcup_{i=1}^n F_i$ is also closed. I'm looking for a direct proof of this theorem. (I already know a proof which first shows that a finite intersection of open sets is also open, and then applies De Morgan's law and the theorem "the complement of an open set is closed.") Note that the theorem is not necessarily true for an infinite collection of closed sets $\{F_\alpha\}$. Here are the definitions I'm using: Let $X$ be a metric space with distance function $d(p, q)$. For any $p \in X$, the neighborhood $N_r(p)$ is the set $\{x \in X \,|\, d(p, x) < r\}$. A point $p \in X$ is a limit point of $E$ if for every $r > 0$ the neighborhood $N_r(p)$ contains a point of $E$ other than $p$, i.e., $(N_r(p) \setminus \{p\}) \cap E \neq \emptyset$. A subset $E$ of $X$ is closed if it contains all of its limit points. - +1 for giving the definitions you're using. I would try proving the contrapositive: suppose that the union fails to be closed, so it doesn't contain one of its limit points, and try to show that this limit point is a limit point of at least one of the $F_i$. –  Qiaochu Yuan Feb 10 '12 at 1:41 @jamaicanworm The key point is that we need the finiteness to take a minimum of radii of balls. –  user38268 Feb 10 '12 at 2:05 Let $F$ and $G$ be two closed sets and let $x$ be a limit point of $F\cup G$. If $x$ is a limit point of $F$ or of $G$, then, since $F$ and $G$ are closed, it is clearly contained in $F\cup G$. So suppose that $x$ is a limit point of neither $F$ nor $G$. Then there are radii $\alpha$ and $\beta$ such that $N_\alpha(x)$ and $N_\beta(x)$ don't intersect $F$ and $G$ respectively, except possibly at $x$. But then, taking $r=\min(\alpha,\beta)$, the neighborhood $N_r(x)$ doesn't intersect $F\cup G$ except possibly at $x$, which contradicts $x$ being a limit point. This contradiction establishes the result. The proof extends easily to finitely many closed sets. Extending it to infinitely many is not possible, since the "min" would then be replaced by an "inf", which is not necessarily positive; for instance, $\bigcup_{n\geq 1}[1/n,1]=(0,1]$ fails to contain its limit point $0$. - Thanks! This is exactly the kind of precise answer I was looking for. –  jamaicanworm Feb 11 '12 at 20:55 It is sufficient to prove this for a pair of closed sets $F_1$ and $F_2$. Suppose $F_1 \cup F_2$ is not closed, even though $F_1$ and $F_2$ are closed. This means that some limit point $p$ of $F_1 \cup F_2$ does not belong to $F_1 \cup F_2$. So there is a sequence $\{ p_i\} \subset F_1 \cup F_2$ converging to $p$. By the pigeonhole principle, at least one of $F_1$ or $F_2$, say $F_1$, contains infinitely many points of $\{p_i\}$, hence contains a subsequence of $\{p_i\}$. But this subsequence must converge to the same limit, so $p \in F_1$, because $F_1$ is closed. Thus, $p \in F_1 \subset F_1 \cup F_2$. Alternatively, if you do not wish to use sequences, then something like this should work. Again, it is sufficient to prove it for a pair of closed sets $F_1$ and $F_2$. Suppose $F_1 \cup F_2$ is not closed. That means that there is some point $p \notin F_1 \cup F_2$ every neighbourhood of which contains infinitely many points of $F_1 \cup F_2$. By the pigeonhole principle again, every such neighbourhood contains infinitely many points of at least one of $F_1$ or $F_2$, say $F_1$. Then $p$ must be a limit point of $F_1$; so $p \in F_1 \subset F_1 \cup F_2$. - Thanks--but please see my comment on @Michael's answer. –  jamaicanworm Feb 10 '12 at 1:58 Made the correction. –  Rick Feb 10 '12 at 2:03 How do we know that the metric space contains infinitely many points?
–  Shahab Feb 10 '12 at 2:40 @Shahab: We don't. All we need for this proof is to show that $F_1\cup F_2$ contains all of its limit points. If the set is finite, it will have no limit points, which makes the condition vacuously true. –  Michael Greinecker Feb 10 '12 at 15:45 One problem: All we can say is that every neighborhood of $p \notin F_1 \cup F_2$ contains infinitely many points of $F_1 \cup F_2$. BUT one neighborhood might contain infinitely many points of $F_1$, while another might contain infinitely many points of $F_2$, so we cannot say that the limit-point property of $p$ holds in every neighborhood for any one particular $F_i$... –  jamaicanworm Feb 10 '12 at 16:03 Here is one method that I think is very direct: Check first that a set contains all of its limit points if and only if every convergent sequence in the set has its limit in the set. Now take a convergent sequence in the finite union. Since the union is finite, one of the sets in the union must contain infinitely many terms of the sequence and therefore a subsequence. A subsequence of a convergent sequence converges to the same point. So there is a convergent subsequence lying wholly in one of the sets of the finite union, and this set contains the limit, since it is closed. So the limit lies in the finite union, and we are done. Edit: Here is a sequence-free version. Suppose $F_1$ and $F_2$ are closed. Let $x$ be a limit point of $F_1\cup F_2$. We are done if we can show that $x$ is a limit point of $F_1$ or $F_2$. If $x$ is not a limit point of $F_1$, then there is an $\epsilon>0$ such that the $\epsilon$-ball around $x$ contains no element of $F_1$ other than possibly $x$. By the definition of a limit point, for every positive $\epsilon'<\epsilon$ the $\epsilon'$-ball around $x$ contains a point of $F_1\cup F_2$ different from $x$; since no such point can lie in $F_1$, it must lie in $F_2$. Hence, $x$ is a limit point of $F_2$. - Sorry for not specifying this in my original post, but I would prefer to not use sequences in the proof. (I'm trying to teach this topic before sequences.) I can, however, use the theorem that says every neighborhood of a limit point $p$ of $E$ contains an infinite number of points in $E$... –  jamaicanworm Feb 10 '12 at 1:54 @MichaelGreinecker: In your second proof, don't you need to show that $x$ is actually contained in $F_1 \cup F_2$? Not just a limit point of $F_2$? –  user66360 Nov 24 '13 at 23:21 @kbball Yes, but if $x$ is a limit point of $F_2$ and $F_2$ is closed, then $x\in F_2$ and a fortiori $x\in F_1\cup F_2$. –  Michael Greinecker Nov 25 '13 at 6:43
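As an aside, the two-set statement (from which the finite case follows by induction) is also available machine-checked. Here is a sketch assuming Lean 4 with Mathlib, where `IsClosed.union` is the relevant library lemma:

```lean
import Mathlib.Topology.Basic

-- The union of two closed sets is closed. This holds in any topological
-- space, in particular in metric spaces; finite unions follow by induction.
example {X : Type*} [TopologicalSpace X] {F G : Set X}
    (hF : IsClosed F) (hG : IsClosed G) : IsClosed (F ∪ G) :=
  hF.union hG
```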
2015-05-30 02:57:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9574712514877319, "perplexity": 64.69670291924815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00176-ip-10-180-206-219.ec2.internal.warc.gz"}
https://ora.ox.ac.uk/objects/uuid:6e408464-8a47-4335-9199-1cb849899fbf
Report

### Sparse finite element approximation of high-dimensional transport-dominated diffusion problems

Abstract: Partial differential equations with nonnegative characteristic form arise in numerous mathematical models in science. In problems of this kind, the exponential growth of computational complexity as a function of the dimension d of the problem domain, the so-called "curse of dimension", is exacerbated by the fact that the problem may be transport-dominated. We develop the numerical analysis of stabilized sparse tensor-product finite element methods for such high-dimensional, non-self-adjoint...

Publisher: Unspecified. Publication date: 2007-02-01. UUID: uuid:6e408464-8a47-4335-9199-1cb849899fbf. Local pid: oai:eprints.maths.ox.ac.uk:1098. Deposit date: 2011-05-20.
2022-06-26 12:37:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8309856653213501, "perplexity": 1503.1139322962952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00627.warc.gz"}
http://mathhelpforum.com/algebra/196633-x-60-x-96-a.html
# Math Help - x(60-x)-96

1. ## x(60-x)-96

I have x(60-x)-96 = x^2+60x-96 to get the answer. I know I have to expand out from the brackets; however, I make it -60x, not 60x. I'm stuck on what I am doing wrong. Dave

2. ## Re: x(60-x)-96

Originally Posted by davellew69: I have x(60-x)-96 = x^2+60x-96 to get the answer. I know I have to expand out from the brackets; however, I make it -60x, not 60x. I'm stuck on what I am doing wrong. Dave

$x(60-x)-96 = (60x-x^2)-96 = -x^2+60x-96$

3. ## Re: x(60-x)-96

Thanks, as soon as I saw your reply it all clicked. Cheers.
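For anyone wanting a quick machine check of the expansion, here is a sketch using Python's SymPy; only the expression from the thread is used:

```python
import sympy as sp

x = sp.symbols('x')
# Expand x(60 - x) - 96 symbolically:
print(sp.expand(x * (60 - x) - 96))  # -> -x**2 + 60*x - 96
```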
2015-06-03 05:03:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7899059653282166, "perplexity": 3633.8660370453263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036618.23/warc/CC-MAIN-20150601214356-00046-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.freemathhelp.com/forum/threads/solve-inequation-x-2-4ax-0.116981/
# solve inequality x^2 + 4ax < 0

#### acemi123 ##### New member

Solve an inequality with two unknowns! What is the way to solve this type of inequality? x^2+4ax<0 a) 0<x<4a for every real number a b) no solution if a>0 c) -4a<x<0 if a>0 d) -4a<x<0 if a<0 Thanks.

#### Harry_the_cat ##### Senior Member

"a" is considered a constant; x is the variable. Note that x^2 + 4ax = x(x + 4a). I like solving quadratic inequalities by considering the graph. There are also more algebraic ways. What would the graph of y = x^2 + 4ax look like? When will this graph be below the x-axis?

#### JeffM ##### Elite Member

Be systematic. Solve the corresponding equality first. $$\displaystyle x^2 + 4ax = 0 \implies x(x + 4a) = 0 \implies x = 0 \text { or } x = - \ 4a.$$ So you know that at $x = 0$ and at $x = -4a$ the expression $x^2 + 4ax$ equals $0$ and hence is not $< 0$. So you have four cases to consider, namely: (1) x > 0 and x > -4a, (2) x > 0 and x < -4a, (3) x < 0 and x > -4a, and (4) x < 0 and x < -4a. Can you figure it out now?

#### HallsofIvy ##### Elite Member

The answer, of course, depends on a, in particular on whether a is positive or negative. x^2 + 4ax = x(x + 4a). In order for the product of two numbers to be negative, one must be positive and the other negative, so either (i) x < 0 and x + 4a > 0, or (ii) x > 0 and x + 4a < 0. In case (i), x > -4a, so x < 0 can only hold if -4a < 0, i.e., a > 0; then -4a < x < 0. In case (ii), x < -4a, so x > 0 can only hold if -4a > 0, i.e., a < 0; then 0 < x < -4a.
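To double-check the case analysis numerically, here is a sketch using Python's SymPy; the sample values a = 2 and a = -3 are arbitrary choices, one positive and one negative:

```python
import sympy as sp

x = sp.symbols('x', real=True)
for a in (2, -3):  # one positive and one negative sample value of a
    sol = sp.solve_univariate_inequality(x**2 + 4*a*x < 0, x, relational=False)
    print(a, sol)
    # a = 2  -> Interval.open(-8, 0),  i.e. -4a < x < 0 when a > 0
    # a = -3 -> Interval.open(0, 12),  i.e. 0 < x < -4a when a < 0
```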
2019-08-23 07:39:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6889048218727112, "perplexity": 2942.4820003457135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318011.89/warc/CC-MAIN-20190823062005-20190823084005-00425.warc.gz"}
https://mathoverflow.net/questions/307324/smith-normal-form-and-affine-buildings
# Smith normal form and affine buildings

In the question "Smith Normal Form of powers of a matrix", someone commented that one can reformulate many questions about Smith normal forms in the language of affine buildings. I wanted to know of a reference for this link, or an explanation of how the two are related. Thank you.

Shemanske, Thomas R., The arithmetic and combinatorics of buildings for $\text{Sp}_n$, Trans. Am. Math. Soc. 359, No. 7, 3409-3423 (2007). ZBL1126.20019.
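As a concrete anchor for the discussion, here is a sketch computing a Smith normal form with Python's SymPy; the matrix is a standard textbook example chosen only for illustration:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Classic worked example over the integers; the invariant factors
# satisfy the divisibility chain 2 | 6 | 12.
M = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])
print(smith_normal_form(M, domain=ZZ))
# -> Matrix([[2, 0, 0], [0, 6, 0], [0, 0, 12]])
```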
2019-09-15 14:54:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.517560601234436, "perplexity": 291.05166041504805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571506.61/warc/CC-MAIN-20190915134729-20190915160729-00112.warc.gz"}
https://en-academic.com/dic.nsf/enwiki/2487540
# First moment of area

The first moment of area, sometimes misnamed the first moment of inertia, is based on the mathematical construct of moments in metric spaces: the moment of area equals the summation of area times distance to an axis, $\sum (a \times d)$. It is a measure of the distribution of the area of a shape in relation to an axis. The first moment of area is commonly used in engineering applications to determine the centroid of an object or the statical moment of area.

Definition

Given an area $A$ of any shape, divide it into $n$ very small elemental areas $dA_i$. Let $x_i$ and $y_i$ be the distances (coordinates) of each elemental area, measured from a given x-y axis. The first moments of area about the x and y axes are then respectively given by $Q_x = \sum_{i=1}^{n} y_i \, dA_i$ and $Q_y = \sum_{i=1}^{n} x_i \, dA_i$. The SI unit for first moment of area is the metre to the third power (m^3). In the American Engineering and Gravitational systems the unit is the foot to the third power (ft^3) or, more commonly, inch^3.

Statical moment of area

The static or statical moment of area, usually denoted by the symbol $Q$, is a property of a shape that is used to predict its resistance to shear stress. By definition: $Q_{j,x} = \int y \, dA$, where

* $Q_{j,x}$ - the first moment of area $j$ about the neutral $x$ axis of the entire body (not the neutral axis of the area $j$);
* $dA$ - an elemental area of area $j$;
* $y$ - the perpendicular distance from the element $dA$ to the neutral axis $x$.

Shear stress in a semi-monocoque structure

The equation for the shear flow in a particular web section of the cross-section of a semi-monocoque structure is:

$q = \frac{V_y Q_x}{I_x}$

* $q$ - the shear flow through a particular web section of the cross-section
* $V_y$ - the shear force perpendicular to the neutral axis $x$ through the entire cross-section
* $Q_x$ - the first moment of area about the neutral axis $x$ for a particular web section of the cross-section
* $I_x$ - the second moment of area about the neutral axis $x$ for the entire cross-section

Shear stress may now be calculated using the following equation:

$\tau = \frac{q}{t}$

* $\tau$ - the shear stress through a particular web section of the cross-section
* $q$ - the shear flow through a particular web section of the cross-section
* $t$ - the (average) thickness of a particular web section of the cross-section

See also

* Second moment of area
* Polar moment of inertia
* http://www.iaengr.org/forum/messages//468.html
* http://mywebsite.bigpond.com/npajkic/solid_mechanics/first_moment_of_area/index.html

Wikimedia Foundation. 2010.
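As a numerical illustration of the formulas above, here is a Python sketch for a solid rectangular cross-section; all input values are assumptions chosen for the example:

```python
# Shear stress at the neutral axis of a b x h rectangular section,
# via tau = V_y * Q_x / (I_x * t), with t = b at the neutral axis.
b, h = 0.05, 0.10          # width and height of the section (m) -- assumed
V_y = 2.0e3                # transverse shear force (N) -- assumed

I_x = b * h**3 / 12        # second moment of area about the neutral axis
# First moment of the area above the neutral axis:
# A' = b*h/2 with centroid at h/4 from the axis, so Q_x = A' * (h/4).
Q_x = (b * h / 2) * (h / 4)
tau = V_y * Q_x / (I_x * b)
print(f"Q_x = {Q_x:.3e} m^3, tau = {tau:.3e} Pa")  # 6.250e-05 m^3, 6.000e+05 Pa
```

For a rectangle this reproduces the familiar maximum shear stress of 1.5 V/(bh).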
2021-03-07 15:56:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5792737007141113, "perplexity": 1981.52654472724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178377821.94/warc/CC-MAIN-20210307135518-20210307165518-00537.warc.gz"}
https://www.techwhiff.com/learn/the-citizens-of-montana-withdraw-money-from-a/300911
# The citizens of Montana withdraw money from a cash machine according to the following probability function...

###### Question:

The citizens of Montana withdraw money from a cash machine according to the following probability function for the amount withdrawn, X:

| Amount ($) | 102 | 132 | 184 |
| --- | --- | --- | --- |
| P(X = x) | 0.28 | 0.50 | 0.22 |

The number of customers per day, Y, has a Poisson distribution with λ = 816.

Calculate the expected money withdrawn for a given day. Calculate the A) Cov(X,Y) B) Corr(X,Y) according to Item 10.
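A minimal Python sketch of the first part. Note the assumptions: the table values are as reconstructed above, and the garbled Poisson parameter is read as λ = 816. The daily total is then a compound Poisson sum, whose mean is λ·E[X] by Wald's identity.

```python
# Expected withdrawal per customer and expected total per day.
amounts = [102, 132, 184]
probs   = [0.28, 0.50, 0.22]

EX = sum(a * p for a, p in zip(amounts, probs))   # E[X] per customer
lam = 816                                         # E[Y], customers/day (assumed)

# Wald's identity: E[sum of Y iid copies of X] = E[Y] * E[X],
# provided Y is independent of the individual withdrawals.
print(f"E[X] = ${EX:.2f} per customer")
print(f"E[daily total] = ${lam * EX:,.2f}")
```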
2022-12-03 08:26:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4484722912311554, "perplexity": 6413.378400642227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00308.warc.gz"}
https://aitopics.org/mlt?cdid=news%3A1DBC55D0&dimension=pagetext
### Scaling Laws in Natural Scenes and the Inference of 3D Shape

This paper explores the statistical relationship between natural images and their underlying range (depth) images. We look at how this relationship changes over scale, and how this information can be used to enhance low resolution range data using a full resolution intensity image. Based on our findings, we propose an extension to an existing technique known as shape recipes [3], and the success of the two methods is compared using images and laser scans of real scenes. Our extension is shown to provide a twofold improvement over the current method. Furthermore, we demonstrate that ideal linear shape-from-shading filters, when learned from natural scenes, may derive even more strength from shadow cues than from the traditional linear-Lambertian shading cues.

### New algorithm helps turn low-resolution images into detailed photos, 'CSI'-style

The EnhanceNet-PAT algorithm could help with everything from restoring old photos to improving image recognition for self-driving cars. Anyone who has ever worked with image files knows that, unlike the fictional world of shows like CSI, there's no easy way to take a low-resolution image and magically transform it into a high-resolution picture using some fancy "enhance" tool. Fortunately, some brilliant computer scientists at the Max Planck Institute for Intelligent Systems in Germany are working on the problem -- and they've come up with a pretty nifty algorithm to address it. What they have developed is a tool called EnhanceNet-PAT, which uses artificial intelligence to create high-definition versions of low-res images. While the solution is not a miracle fix, it does produce a noticeably better result than previous attempts, thanks to some smart machine-learning algorithms.

### Anomaly Detection via Graphical Lasso

Anomalies and outliers are common in real-world data, and they can arise from many sources, such as sensor faults. Accordingly, anomaly detection is important both for analyzing the anomalies themselves and for cleaning the data for further analysis of its ambient structure. Nonetheless, a precise definition of anomalies is important for automated detection, and herein we approach such problems from the perspective of detecting sparse latent effects embedded in large collections of noisy data. Standard Graphical Lasso-based techniques can identify the conditional dependency structure of a collection of random variables based on their sample covariance matrix. However, classic Graphical Lasso is sensitive to outliers in the sample covariance matrix. In particular, several outliers in a sample covariance matrix can destroy the sparsity of its inverse. Accordingly, we propose a novel optimization problem that is similar in spirit to Robust Principal Component Analysis (RPCA) and splits the sample covariance matrix $M$ into two parts, $M=F+S$, where $F$ is the cleaned sample covariance whose inverse is sparse and computable by Graphical Lasso, and $S$ contains the outliers in $M$. We accomplish this decomposition by adding an additional $\ell_1$ penalty to classic Graphical Lasso, and name it "Robust Graphical Lasso (Rglasso)". Moreover, we propose an Alternating Direction Method of Multipliers (ADMM) solution to the optimization problem which scales to large numbers of unknowns.
We evaluate our algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming the standard robust Minimum Covariance Determinant (MCD) method and Robust Principal Component Analysis (RPCA) regarding both accuracy and speed. ### A Nonlinear Dimensionality Reduction Framework Using Smooth Geodesics Existing dimensionality reduction methods are adept at revealing hidden underlying manifolds arising from high-dimensional data and thereby producing a low-dimensional representation. However, the smoothness of the manifolds produced by classic techniques in the presence of noise is not guaranteed. In fact, the embedding generated using such non-smooth, noisy measurements may distort the geometry of the manifold and thereby produce an unfaithful embedding. Herein, we propose a framework for nonlinear dimensionality reduction that generates a manifold in terms of smooth geodesics that is designed to treat problems in which manifold measurements have been corrupted by noise. Our method generates a network structure for given high-dimensional data using a neighborhood search and then produces piecewise linear shortest paths that are defined as geodesics. Then, we fit points in each geodesic by a smoothing spline to emphasize the smoothness. The robustness of this approach for noisy and sparse datasets is demonstrated by the implementation of the method on synthetic and real-world datasets. ### ML Super Resolution - Pixelmator Blog It's no secret that we're pretty big fans of machine learning and we love thinking of new and exciting ways to use it in Pixelmator Pro. Our latest ML-powered feature is called ML Super Resolution, released in today's update, and it makes it possible to increase the resolution of images while keeping them stunningly sharp and detailed. Yes, zooming and enhancing images like they do in all those cheesy police dramas is now a reality! Before we get into the nitty-gritty technical stuff, let's get right to the point and take a look at some examples of what ML Super Resolution can do. Until now, if you had opened up the Image menu and chosen Image Size, you would've found three image scaling algorithms -- Bilinear, Lanczos (lan-tsosh, for anyone curious), and Nearest Neighbor, so we'll compare our new algorithm to those three.
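Pixelmator's ML algorithm itself is not public here, but the classic filters it is compared against are easy to try. Below is a minimal Python sketch with Pillow; "photo.jpg" is a placeholder file name, and the Image.Resampling enum requires Pillow 9.1 or newer (older versions expose the same filters as Image.NEAREST, Image.BILINEAR, Image.LANCZOS).

```python
# Naive 4x upscaling with the three classic filters mentioned above.
from PIL import Image

img = Image.open("photo.jpg")          # placeholder input image
w, h = img.size
for name, filt in [("nearest", Image.Resampling.NEAREST),
                   ("bilinear", Image.Resampling.BILINEAR),
                   ("lanczos", Image.Resampling.LANCZOS)]:
    up = img.resize((w * 4, h * 4), resample=filt)
    up.save(f"photo_4x_{name}.png")    # compare the outputs side by side
```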
2020-10-27 18:53:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33631089329719543, "perplexity": 789.7109824081459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894426.63/warc/CC-MAIN-20201027170516-20201027200516-00051.warc.gz"}
https://notesbylex.com/eigenvector.html
## Eigenvector

An Eigenvector of a Matrix Transformation is any non-zero vector that remains on its Vector Span after being transformed. That means that applying the transformation is equivalent to scaling the vector by some amount. The factor by which the Eigenvector is scaled is called the Eigenvalue.

For example, if we transform the basis vectors with the matrix $\begin{bmatrix}2 & 1 \\ 0 & 2\end{bmatrix}$, we can see that $\hat{j}$ is knocked off its span, whereas $\hat{i}$ is simply scaled by 2.

One other vector that remains on its span is the zero vector, $\vec{v}=\begin{bmatrix}0\\0\end{bmatrix}$, but that's not an Eigenvector.

When performing 3D rotations, the Eigenvectors are particularly useful, as they describe the axis of rotation.

The notation for the relationship between the matrix transformation of a vector and the equivalent scaling: $A\vec{v} = \lambda\vec{v}$
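A quick numerical check of the example above (a sketch, not part of the original note): NumPy confirms the repeated eigenvalue 2, and that the only eigenvector direction is along $\hat{i}$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)      # [2. 2.] -- a repeated eigenvalue
print(eigvecs)      # both columns are (numerically) parallel to [1, 0]

# Verify A v = lambda v for the first eigenvector:
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))   # True
```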
2023-04-02 05:39:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701785802841187, "perplexity": 372.7490894565409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00335.warc.gz"}
https://firas.moosvi.com/oer/physics_bank/content/public/007.Momentum%20and%20Impulse/Topic%20Outcome/Collision%20of%20River%20Otters/Collision%20of%20River%20Otters.html
# Collision of River Otters

Two river otters collide while sliding across frictionless ice and get tangled together following the (perfectly inelastic) collision as shown in the figure, where a coordinate system has been defined. The smaller otter ($$m_s =$$ $$kg$$) is initially moving at $$v_{i_s} =$$ $$m/s$$ along the x-axis, while the larger otter ($$m_l =$$ $$kg$$) is initially moving at $$v_{i_l} =$$ $$m/s$$, $$\theta_i=$$$$^{\circ}$$ counterclockwise from the x-axis.

## Part 1

Find the $$x$$-component of the initial momentum of the smaller otter. Please enter a numeric value in $$kg\cdot m/s$$.

## Part 2

Find the $$y$$-component of the initial momentum of the smaller otter. Please enter a numeric value in $$kg\cdot m/s$$.

## Part 3

Find the $$x$$-component of the initial momentum of the larger otter. Please enter a numeric value in $$kg\cdot m/s$$.

## Part 4

Find the $$y$$-component of the initial momentum of the larger otter. Please enter a numeric value in $$kg\cdot m/s$$.

## Part 5

Find the $$x$$-component of the total final momentum of the tangled otters in component form. Please enter a numeric value in $$kg\cdot m/s$$.

## Part 6

Find the $$y$$-component of the total final momentum of the tangled otters in component form. Please enter a numeric value in $$kg\cdot m/s$$.

## Part 7

What direction do the river otters end up heading in, relative to the $$x$$-axis as defined in the figure? Please enter a numeric value in degrees.

## Part 8

What is the speed of the otters after this collision? Please enter a numeric value in $$m/s$$.

## Part 9

Is kinetic energy lost, gained, or does it remain constant in this collision? If the difference in kinetic energy is less than 0.05, consider that kinetic energy has remained unchanged.
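Since the numeric values are blank in this copy of the problem, here is a minimal Python sketch of the intended calculation; the masses, speeds, and angle below are made-up placeholders, not the problem's data.

```python
import numpy as np

m_s, v_s = 5.0, 3.0            # smaller otter: kg, m/s along +x (assumed)
m_l, v_l = 12.0, 2.0           # larger otter: kg, m/s (assumed)
theta = np.radians(120.0)      # larger otter's heading from +x (assumed)

p_s = np.array([m_s * v_s, 0.0])                            # Parts 1-2
p_l = m_l * v_l * np.array([np.cos(theta), np.sin(theta)])  # Parts 3-4
p_tot = p_s + p_l              # momentum is conserved: Parts 5-6

direction = np.degrees(np.arctan2(p_tot[1], p_tot[0]))      # Part 7
speed = np.linalg.norm(p_tot) / (m_s + m_l)                 # Part 8

KE_i = 0.5 * m_s * v_s**2 + 0.5 * m_l * v_l**2
KE_f = 0.5 * (m_s + m_l) * speed**2    # Part 9: inelastic, so KE_f < KE_i
print(direction, speed, KE_i - KE_f)
```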
2023-03-30 18:30:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7906523942947388, "perplexity": 1795.8071076910624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00659.warc.gz"}
https://hsm.stackexchange.com/questions/11473/who-first-solved-the-classical-harmonic-oscillator
# Who first solved the classical harmonic oscillator?

There is a question Who solved the quantum harmonic oscillator?, but not one for the classical oscillator. Wikipedia's article Harmonic Oscillator does not have historical information either. So who first solved the classical harmonic oscillator equation?

## 1 Answer

It was "solved" by Huygens in Horologium Oscillatorium (1673). The scare quotes are there because he never wrote down the equation, and even Newton's laws were not yet explicitly formulated. Huygens considered the motion of pendula, and for simple cases knew the "law of the conservation of living force" (mechanical energy), as the Bernoullis later called it; see Mach, History and Root of the Principle of the Conservation of Energy, p. 30. In modern terms, this would be the first integral, or quadrature, of the corresponding equation of motion. With that, he was able to derive the period formula for pendulum motion with small amplitudes, $$T=2\pi\sqrt{\frac{l}{g}}$$ in modern notation, which he also did not use.

Here is a passage from Acoustic origins of harmonic analysis by Darrigol, p. 351: "The first intimation that harmonic (sine-like) motion plays a basic role in acoustics is found in Christiaan Huygens's theory of musical strings. In his celebrated Horologium Oscillatorium of 1673, Huygens showed that the pendulous motion of a body sliding down on a cycloid was harmonic and isochronous. Around that time, he also understood that the force responsible for this motion was proportional to the distance traveled by the body from the point of equilibrium. Probably noticing that a similar circumstance held in the case of a tense, weightless elastic string loaded with one mass in the middle, he derived the oscillation frequency as a function of tension, length, and mass. The reasoning implied harmonic oscillations for the loaded string. He also sketched a generalization to a string loaded with several masses, in which he assumed all the masses to perform harmonic oscillations of the same frequency and phase."

Taylor, who was studying vibrating strings in 1713, had the benefit of Newton's mechanics. Still, he did not write down the equation, but used pendulum analogies, and realized that the sine shape was a solution for the string. Johann Bernoulli followed Taylor's lead, and represented strings as connected masses. In a 1727 letter to his son Daniel he explicitly wrote the harmonic oscillator equation for each and integrated it analytically.

In print, the first modern treatment of the harmonic oscillator is Euler's De Novo Genere Oscillationum (presented 1738-9, published 1750). He solved in quadratures not only the equation of the free oscillator, but also that of the oscillator driven by a harmonic force. This was the first analytic treatment of resonance; see Kline, Mathematical Thought From Ancient to Modern Times, v.2, pp. 479-482: "In his effort to treat the vibrating string, John Bernoulli, in a letter of 1727 to his son Daniel and in a paper, considered the weightless elastic string loaded with $$n$$ equal and equally spaced masses... John recognized that the force on each mass is $$-K$$ times its displacement, and solved $$\frac{d^2x}{dt^2} = - Kx$$, thus integrating the equation of simple harmonic motion by analytic methods. [...] In 1728 Euler began to consider second order equations. His interest in these was aroused partly by his work in mechanics. He had worked, for example, on pendulum motion in resisting media, which leads to a second order differential equation...
In a paper of 1739 Euler took up the differential equations of the harmonic oscillator $$\ddot{x} + Kx = 0$$ and the forced oscillation of the harmonic oscillator $$M\ddot{x} + Kx = F\sin(\omega t)$$. He obtained the solutions by quadratures and discovered (really rediscovered, since others had found it earlier) the phenomenon of resonance".
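As a quick numerical illustration (my sketch, not part of the answer), one can integrate Euler's forced-oscillator equation and drive it at the resonant frequency $\omega = \sqrt{K/M}$; the amplitude then grows without bound, linearly in time. All parameter values below are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, K, F = 1.0, 4.0, 1.0
w = np.sqrt(K / M)                     # resonant driving frequency

def rhs(t, y):
    # Euler's forced oscillator: M x'' + K x = F sin(w t)
    x, v = y
    return [v, (F * np.sin(w * t) - K * x) / M]

t = np.linspace(0, 40, 2000)
sol = solve_ivp(rhs, (0, 40), [0.0, 0.0], t_eval=t, rtol=1e-8)
print(f"max |x| over the run: {np.max(np.abs(sol.y[0])):.2f}")  # keeps growing
```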
2021-06-20 12:34:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7894493937492371, "perplexity": 990.3760318366965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00486.warc.gz"}
http://carstenfuehrmann.org/parametric-oscillator/
# Parametric oscillator: a close look

23 July 2014

This post contains my research notes about the parametric oscillator. Here is an introduction from Wikipedia (see references section): "A parametric oscillator is a harmonic oscillator whose parameters oscillate in time. For example, a well known parametric oscillator is a child pumping a swing by periodically standing and squatting to increase the size of the swing's oscillations. The varying of the parameters drives the system. Examples of parameters that may be varied are its resonance frequency $\omega$ and damping $\beta$."

As that Wikipedia article shows, a certain coordinate change can eliminate damping. So we focus on the case where there is only a resonance frequency $\omega$, which varies with time. That is, we consider the differential equation $$\ddot{x}(t) + \omega(t)^2 x(t) = 0$$ where $\omega(t)$ repeats itself after some time interval $T$.

The main purpose of this post is to preserve what I learned, for my own future reference. I am bad at maintaining handwritten notes, so I am moving to a digital medium. On the off chance that this interests others, I put it on the web. This post contains some work of my own that goes beyond the material in the references section. Most or all of my findings are likely known by experts. The proofs are mine, and so are the mistakes.

## Floquet theory

A suitable mathematical background for the parametric oscillator is Floquet theory (see references section). It deals with linear differential equations of the form $$\dot{x}(t) = A(t) x(t)$$ where $x:\mathbb{R}\to\mathbb{R}^n$, and the function $A:\mathbb{R}\to\mathbb{R}^{n\times n}$ is periodic with period $T$. We could also consider complex numbers as the elements of $x$ and $A$, but we shall stick with the reals here, in accordance with the physics.

We can write a parametric oscillator as a Floquet equation: $$\frac{d}{dt} \left(\begin{array}{c} x \\ v\end{array}\right) = \left(\begin{array}{cc} 0 & 1 \\-\omega(t)^2 & 0\end{array}\right)\left(\begin{array}{c} x \\ v\end{array}\right)$$ I encountered Floquet theory in the well-known "Mechanics" book by Landau and Lifshitz (see references section), which we shall call "the book" in this post. The book contains a chapter on parametric resonance, which deals with parametric oscillators and their resonance behavior. The book uses Floquet theory in specialized form, without calling it so. Part of my motivation here is to make explicit how that book chapter ties in with general Floquet theory.

## The monodromy matrix

We shall now introduce the notion of monodromy, which is pivotal for Floquet theory. Let $\Psi: \mathbb{R}\to \mathbb{R}^{n\times n}$ be a fundamental matrix for a Floquet differential equation. Then $\Psi_T(t) := \Psi(t + T)$ is also a fundamental matrix, because $A(t+T)=A(t)$. So we have $\Psi(t + T) = \Psi(t) M$ for some invertible $n\times n$ matrix $M$. This $M$ describes the change of the solution after each period. It is called the monodromy matrix. Rightly so, since in Greek "mono" means "one" and "dromos" means "course" or "running". One checks easily that different $\Psi$ for the same $A$ can yield different $M$. But:

Lemma: The monodromy matrix of a Floquet equation is determined up to isomorphism.

To see this, let $\Phi$ be another fundamental matrix for $A$. Let $N$ be the corresponding monodromy matrix. We have $\Phi(t) = \Psi(t) Q$ for some invertible $n\times n$ matrix $Q$.
So $$\begin{aligned}\Psi(t) Q N &= \Phi(t)N = \Phi(t+T) \\&= \Psi(t+T)Q = \Psi(t) M Q\end{aligned}$$ Because $\Psi(t)$ and $Q$ are invertible, we get $N = Q^{-1} M Q$.

Importantly, it follows that any two monodromy matrices have the same Jordan normal forms, and therefore the same eigenvalues. Now recall that we assumed that the matrix $A$ from our Floquet equation has only real entries. Naturally, we are only interested in real solutions $\Psi$. So any resulting $M$ too has only real entries.

## Floquet's theorem

Floquet equations cannot generally be solved symbolically. However, Floquet's theorem makes a useful statement about the solutions. The version of the theorem that interests me here is:

Theorem: Any fundamental matrix $\Psi$ of a Floquet equation with period $T$ has the form $$\Psi(t) = \Pi(t) e^{t B}$$ for some $T$-periodic matrix function $\Pi$ and some matrix $B$ of matching dimension.

Note that the statement employs the matrix exponential $e^{t B}$ (see references section).

Proof: Because $M$ is invertible, it has a matrix logarithm (see references section), that is, a matrix of which $M$ is the exponential. (Important properties of the matrix logarithm are: it exists for every invertible matrix; and, as in the special case of scalar logarithms, it can be complex-valued even if the exponential is real-valued, and it is not unique. For example, $i(2k+1)\pi$ is a logarithm of minus one for every integer $k$.) Let $B$ be a logarithm of $M$, divided by $T$. That is, $e^{T B} = M$. Let $\Pi(t) := \Psi(t) e^{-t B}$. To see that $\Pi$ is $T$-periodic, consider $$\begin{aligned}\Pi(t+T) &= \Psi(t+T) e^{-(t+T)B} \\ &= \Psi(t) M e^{-T B -t B} \\ &= \Psi(t) e^{T B} e^{-T B} e^{-t B} \\ &= \Psi(t) e^{-t B} = \Pi(t) \end{aligned}$$

## Applying the theorem to the oscillator

First we perform a coordinate change into the eigensystem of the monodromy matrix $M$. This is tantamount to assuming that $M$ is in Jordan normal form. As for any $2\times 2$ matrix, the Jordan normal form is $$M = \left(\begin{array}{cc}\mu_1 & \delta \\0 & \mu_2\end{array}\right)$$ where the $\mu_i$ are the eigenvalues and $\delta$ is zero or one. The book considers only the case where the two eigenvalues differ, and therefore $\delta$ is zero. We shall also consider the case where $\delta$ is one. This can happen, as we shall see later.

We shall now apply the Floquet theorem. First, we need a logarithm of $M$. We shall deal first with the more difficult case where $\delta$ is one, and therefore $\mu_1=\mu_2=\mu$. As explained in a Wikipedia article referenced below, the logarithm can be calculated as a Mercator series: $$\ln (1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots$$ We have $M = \mu(I + K)$ where $I$ stands for the identity matrix, and $$K =\left(\begin{array}{cc}0 & 1/\mu \\0 & 0\end{array}\right)$$ Using the fact that $K^n$ vanishes for $n$ greater than one, we get $$\begin{aligned}\ln M &=\ln \big(\mu(I+K)\big) \\&=\ln (\mu I) +\ln (I+K) \\&= (\ln \mu) I + K-\frac{K^2}{2}+\frac{K^3}{3}-\cdots \\&= (\ln \mu) I + K \\&=\left(\begin{array}{cc}\ln\mu & 1/\mu \\0 & \ln\mu\end{array}\right)\end{aligned}$$ From the proof of the theorem, we know that we can choose $B = T^{-1}\ln M$. Some calculation involving the matrix exponential yields $$e^{t B} =\left(\begin{array}{cc}\mu^{t/T} & \frac{t}{T}\mu^{t/T-1} \\0 & \mu^{t/T}\end{array}\right)$$ Note that $e^{T B} = M$, as required.
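A quick numeric check of this logarithm with SciPy (my sketch, using an arbitrary $\mu = 0.5$, not part of the original post):

```python
import numpy as np
from scipy.linalg import expm, logm

mu = 0.5
M = np.array([[mu, 1.0],
              [0.0, mu]])                 # Jordan block with delta = 1
L = np.array([[np.log(mu), 1.0 / mu],     # ln M as derived above
              [0.0, np.log(mu)]])

print(np.allclose(expm(L), M))   # True: L is a logarithm of M
print(np.allclose(logm(M), L))   # True here, since mu is a positive real
```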
Now suppose we have a fundamental matrix $$\Psi(t) =\left(\begin{array}{cc}x_1(t) & x_2(t) \\v_1(t) & v_2(t)\end{array}\right)$$ When we spell out the Floquet theorem elementwise, and ignore the $v_i$, we get:

Corollary: If $\delta = 1$, there are $T$-periodic functions $\pi_1$, $\pi_2$ and $\pi_3$ such that $$\begin{aligned}x_1(t) &= \mu^{t/T} \pi_1(t) + \frac{t}{T}\mu^{t/T-1} \pi_3(t) \\ x_2(t) &= \mu^{t/T} \pi_2(t)\end{aligned}$$

If $\delta = 0$, we get the result from the book:

Corollary: If $\delta = 0$, there are $T$-periodic functions $\pi_1$ and $\pi_2$ such that $$x_1(t) = \mu_1^{t/T} \pi_1(t) \qquad x_2(t) = \mu_2^{t/T} \pi_2(t)$$

The calculation is much simpler than for $\delta=1$, so I omit it here.

## Possible eigenvalues

First, we observe, like the book:

Observation: The eigenvalues of the monodromy matrix $M$ of a parametric oscillator are either real or complex conjugates of one another.

This follows simply from the fact that $M$ has only real entries. Now the book deduces, for $\delta = 0$, that the eigenvalues must be mutually reciprocal. We shall show this even for $\delta = 1$:

Lemma: The product of the eigenvalues of the monodromy matrix $M$ of a parametric oscillator is one.

Proof: Liouville's formula (see references section) entails for every fundamental matrix $\Psi$ of a Floquet equation that $$\frac{d}{dt}\det\,\Psi(t) = \mathrm{tr}\,A(t) \cdot\det\,\Psi(t)$$ Here $\mathrm{tr}$ stands for trace, which is the sum of the diagonal elements of a matrix. For a parametric oscillator, that trace is zero. So $\det\,\Psi(t)$ is constant. Because $\Psi(t+T) = \Psi(t)M$, we have $\det\,\Psi(t+T)=\det\Psi(t)\det M$. Since $\det\,\Psi$ is constant, we have $\det\,\Psi(t+T)=\det\Psi(t)$, and since $\det\,\Psi(t)\neq 0$ we have $\det\,M = 1$. The claim follows because the determinant is the product of the eigenvalues □

Combining the results of this section, we see that the eigenvalues are either reciprocal reals, or two non-reals on the complex unit circle which are complex conjugates of one another. When $\delta = 1$, we know also that the two eigenvalues are the same, and so they are both one or both minus one.

## Classification of possible behaviors

First, suppose that $\delta = 1$. Then the eigenvalues are both one or both minus one. If $\mu = 1$, we have by an earlier corollary $$\begin{aligned}x_1(t) &= \pi_1(t) + \frac{t}{T}\pi_3(t) \\x_2(t) &= \pi_2(t)\end{aligned}$$ for $T$-periodic $\pi_1$, $\pi_2$, and $\pi_3$, where $x_1$ and $x_2$ are coordinates which go along with the eigensystem of the monodromy matrix. If $\mu = -1$, we have $$\begin{aligned}x_1(t) &= (-1)^{t/T} \pi_1(t) + \frac{t}{T}(-1)^{t/T-1} \pi_3(t) \\ x_2(t) &= (-1)^{t/T} \pi_2(t)\end{aligned}$$ Note that this entails that we have $2T$-periodic $\rho_1$, $\rho_2$, and $\rho_3$ such that $$\begin{aligned}x_1(t) &= \rho_1(t) + \frac{t}{T}\rho_3(t) \\x_2(t) &= \rho_2(t)\end{aligned}$$

Now suppose that $\delta = 0$. We concluded above that $x_1(t) = \mu_1^{t/T} \pi_1(t)$ and $x_2(t) = \mu_2^{t/T} \pi_2(t)$ for $T$-periodic $\pi_1$ and $\pi_2$. If the eigenvalues are both one, every solution is $T$-periodic. Note that in this case $M$ is not just isomorphic to, but equal to the identity matrix. So any coordinate system is an eigensystem, that is, we can choose the $x_i$ freely. If the eigenvalues are both minus one, every solution is $2T$-periodic. In this case $M$ is not just isomorphic to, but equal to minus one times the identity matrix. So here too, any coordinate system is an eigensystem, so we can choose the $x_i$ freely.
If the eigenvalues are other reals, the one whose absolute value is greater than one "wins" as $t$ goes towards infinity. So the amplitude grows exponentially. If the eigenvalues are not reals, they are on the complex unit circle, and the amplitude has an upper bound.

## Example: the Mathieu equation

The Mathieu equation is the parametric oscillator with $$\omega(t)^2 = \omega_0^2(1 + h \cos (\gamma t))$$ If this $\omega(t)$ came from a child on a swing, it would be a strange child: one that stands and squats at a frequency $\gamma$ independent of the resonance frequency $\omega_0$ of the swing. Still, the Mathieu equation is important in physics.

Here is a graph showing the monodromy's eigenvalues for the Mathieu equation with $\omega_0 = 1$ and $h = 1$. The vertical axis corresponds to $\gamma$, which ranges from $0.2$ to $5$. Horizontally, we have the complex plane. For each $\gamma$, the graph contains both eigenvalues of the corresponding monodromy matrix. I refrained from drawing the coordinate axes, to avoid clutter.

[Figure: The eigenvalues of a Mathieu equation as gamma changes]

The graph shows that, for every $\gamma$, the eigenvalues are either (1) on a circle, which happens to be the unit circle, or (2) opposite one another on a perpendicular of the circle, actually reciprocal reals. In case (1) we have no resonance. In case (2), we have resonance. The greatest resonance is represented by the "face of the bunny", around $\gamma = 2\omega_0 = 2$, connected to the circle at $-1 + 0i$. The second greatest resonance is represented by the bunny's (uppermost) tail, around $\gamma = \omega_0 = 1$, connected to the circle at $1 + 0i$. This second greatest resonance corresponds to a normal child that stands and squats once during a period of the swing. The greatest resonance corresponds to an eager child that turns around at the apex, facing down again, and stands and squats again on the way back. There are also resonances for smaller $\gamma$, their connection points with the circle alternating between $-1 + 0i$ and $1 + 0i$. It is worth noting that, for smaller $h$, the resonance areas can shrink in such a way that only the bunny's face at $\gamma = 2$ remains, while all resonances at smaller $\gamma$ vanish. That is: if the child's standing and squatting have a small amplitude $h$, the child needs to stand and squat more often to achieve resonance.

## The transition into and out of resonance

###### Possible shapes of the monodromy matrix

As we have seen, the transitions into and out of resonance happen where the eigenvalues are both one or both minus one. This means that the Jordan normal form of the monodromy matrix is $$\left(\begin{array}{cc}1 & \delta \\0 & 1\end{array}\right)\qquad \mathrm{or} \qquad\left(\begin{array}{cc}-1 & \delta \\0 & -1\end{array}\right)$$ where $\delta$ is zero or one. So: to fully understand the transitions into and out of resonance, we must know $\delta$!

From the start, I wondered about the case where $M$ cannot be diagonalized, that is, $\delta = 1$, since that was left aside in the book. Next, I was intrigued by the instantaneous ninety-degree turns where the bunny's body meets the face or a tail. Those points turned out to be the only ones where $M$ might be undiagonalizable. So I kept running into the question about $\delta$. I checked, with Mathematica, the bunny's two transition points for the resonance at $\gamma = 2$, and its two transition points for the resonance at $\gamma = 1$ (a numerical version of such a check is sketched below).
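For readers without Mathematica, here is a minimal Python sketch of the underlying computation (my addition, not from the original post): integrate the Mathieu equation over one period from the two basis initial conditions to obtain the monodromy matrix, then inspect its determinant and eigenvalues. The parameters are the ones used for the graph above, with $\gamma = 2$, i.e. inside the main resonance.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, h, gamma = 1.0, 1.0, 2.0           # deep inside the "bunny face"
T = 2 * np.pi / gamma                  # period of omega(t)

def rhs(t, y):
    x, v = y
    return [v, -(w0**2) * (1 + h * np.cos(gamma * t)) * x]

cols = []
for y0 in ([1.0, 0.0], [0.0, 1.0]):    # Psi(0) = identity, so M = Psi(T)
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
M = np.column_stack(cols)              # monodromy matrix

print(np.linalg.det(M))                # ~1, as proved above
print(np.linalg.eigvals(M))            # reciprocal reals here -> resonance
```

Scanning $\gamma$ and locating where the eigenvalues hit $\pm 1$ reproduces the transition points; the Jordan structure at those points can then be probed by checking whether $M \mp I$ has rank one.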
In all cases, we have $\delta = 1$. So the question arises: Is it true for all parametric oscillators that the monodromy matrix is undiagonalizable at all transitions into and out of resonance? We shall now shed light on this question.

###### The meaning of diagonalizability and lack thereof

First, suppose that $\delta = 0$. If $\mu = 1$, we have, as observed above, two linearly independent solutions $x_1(t) = \pi_1(t)$ and $x_2(t) = \pi_2(t)$ where the $\pi_i$ are $T$-periodic. Since every solution $x(t)$ is a linear combination of those $x_i$, it follows that every solution is $T$-periodic. So, for every initial phase $(x(t_0), v(t_0))$ at some $t_0$, the corresponding solution is $T$-periodic. If $\mu = -1$, we can deduce by a similar argument: for every initial phase $(x(t_0), v(t_0))$ at some $t_0$, the corresponding solution is $2T$-periodic.

Now suppose that $\delta = 1$. If $\mu = 1$, we have, as observed above, two linearly independent solutions $x_1(t) = \pi_1(t)$ and $x_2(t) = \pi_2(t) + \frac{t}{T} \pi_3(t)$ where the $\pi_i$ are $T$-periodic. So the solution space, which is two-dimensional, has a one-dimensional subspace of $T$-periodic functions. All other solutions grow linearly with time. So for every $t_0$, the (also two-dimensional) space of initial conditions at $t_0$ has a one-dimensional subspace of $T$-periodic solutions. For all other initial conditions, the solutions grow linearly with time. For $\mu = -1$, we get something similar: for every $t_0$, the space of initial conditions has a one-dimensional subspace of periodic solutions, this time with period $2T$. Again, all other solutions grow linearly.

In summary: for $\delta = 0$, all solutions are periodic, while for $\delta = 1$ only some are periodic. In the latter case, we can destabilize a periodic solution by arbitrarily small changes of the initial conditions.

###### Undiagonalizable examples

We shall now give a concrete example, more illuminating than the Mathieu equation, where $\delta = 1$, that is, $M$ cannot be diagonalized. Here $\omega$ will be a certain rectangular pulse: $$\omega(t) =\begin{cases}1 & 0 \leq (t\, \mathrm{mod}\, T) < t_1\\\omega_{\mathrm{max}} & t_1 \leq (t\,\mathrm{mod}\, T) < t_2\\0 & t_2 \leq (t\,\mathrm{mod}\, T) < t_3 < \frac{\pi}{2}\\1 & t_3 \leq (t\, \mathrm{mod} \,T) < T\end{cases}$$ Here $T$ is the period, which we must still determine. And $\omega_{\mathrm{max}}$ is a value greater than one, which we must still determine.

For the construction, we assume temporarily as initial conditions $x(0) = 1$ and $v(0) = 0$. That is, the solution is the cosine for $0 \leq t < t_1$. We let $t_2 = t_1 + \Delta t$ for a small $\Delta t$. The $\omega_{\mathrm{max}} > 1$ "accelerates the swing", that is, the solution increases its descent more than a cosine while $\omega_{\mathrm{max}}$ lasts. We choose $\omega_{\mathrm{max}}$ in such a way that at $t_2$ the solution's first derivative is minus one. There it remains until $t_3$, since $\omega$ is zero there. We let $t_3$ be the point where the solution is zero for the first time for positive $t$. So, from $t_3$, the solution is again like a cosine with amplitude one, but shifted a little to the left. We let $T$ be the time, slightly less than $2\pi$, at which the solution reaches one again, that is, where its derivative vanishes for the second time after $t_3$. Obviously, the solution is periodic with $T$. It looks like a cosine, except that in the first quadrant there is a "fast forward" lasting from $t_1$ to $t_3$. So, our constructed parametric oscillator has a periodic solution.
But are all solutions periodic? No! We fine-tuned $\omega(t)$ so that it would have a periodic solution specifically for the initial condition $x(0) = 1$ and $v(0) = 0$. As can easily be checked, there are other initial conditions with non-periodic solutions. So, owing to earlier observations, the initial conditions with periodic solutions form a one-dimensional subspace. That is, the only periodic solutions arise from initial conditions that are scalar multiples of $x(0) = 1, v(0) = 0$. The period of our $\omega$ function happens to agree with that of the oscillator's solution, so the eigenvalues are one. In summary, our constructed parametric oscillator has $$M = \left(\begin{array}{cc}1 & 1 \\0 & 1\end{array}\right)$$

Our constructed $\omega$ supplies one impulse in the first quadrant of the movement. So four quadrants pass between impulses. Obviously, we could modify our construction to have an impulse in the first and third quadrant. Then two quadrants would pass between impulses. So the solution's period would be twice that of $\omega$, and the eigenvalues would be minus one. We could also modify our construction to have six quadrants between impulses (eigenvalues minus one), or eight (eigenvalues one), or ten (eigenvalues minus one), and so on.

###### Diagonalizable examples

First I conjectured, in this post, that there is no parametric oscillator with non-constant $\omega$ that has $M = \mathrm{Id}$ or $M = -\mathrm{Id}$. My conjecture was inspired by the previous section. But John Baez proved me wrong.

First, an example where $M = \mathrm{Id}$. Consider the following non-constant $\omega$: $$\frac{\omega(t)}{2\pi} =\begin{cases}1 & 0 \leq (t\, \mathrm{mod} \,0.75) < 0.5\\2 & 0.5 \leq (t\, \mathrm{mod}\, 0.75) < 0.75\end{cases}$$ The solution for $x(0) = 0$ and $v(0) = 1$ is composed of two sine curves of different frequency:

[Figure: The solution for x(0) = 0 and v(0) = 1]

It is periodic, with period $0.75$. The solution for $x(0) = 1$ and $v(0) = 0$ is composed of two cosine curves of different frequency:

[Figure: The solution for x(0) = 1 and v(0) = 0]

This too is periodic with period $0.75$. Since the solution space is spanned by those two solutions, every solution is periodic with period $0.75$. Since $0.75$ is also the period of $\omega$, both eigenvalues are one. So the monodromy matrix is the identity.

Now an example where $M = -\mathrm{Id}$. Consider the following non-constant $\omega$: $$\frac{\omega(t)}{2\pi} =\begin{cases}1 & 0 \leq (t\,\mathrm{mod}\, 1) < 0.5\\2 & 0.5 \leq (t\,\mathrm{mod}\, 1) < 1\end{cases}$$ The solution for $x(0) = 0$ and $v(0) = 1$ is composed of two sine/cosine curves of different frequency:

[Figure: Solution for x(0) = 0 and v(0) = 1]

This is periodic, with period two. The solution for $x(0) = 1$ and $v(0) = 0$ too is composed of two sine/cosine curves of different frequency:

[Figure: The solution for x(0) = 1 and v(0) = 0]

This too is periodic with period two. Since the solution space is spanned by those two solutions, every solution is periodic with period two. Since that is twice the period of $\omega$, both eigenvalues are minus one. So the monodromy matrix is the minus identity.

## References

1. L. D. Landau and E. M. Lifschitz. Lehrbuch der theoretischen Physik I: Mechanik. Verlag Harry Deutsch, 14. Auflage.
2018-07-21 18:50:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 256, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851751327514648, "perplexity": 334.97303441200904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592654.99/warc/CC-MAIN-20180721184238-20180721204238-00196.warc.gz"}
https://onetransistor.blogspot.com.tr/2014/
### Infrared protocol analysis with PC soundcard

Infrared communication is widespread among electronic devices that use remote controls. Because of this, there are multiple protocols in use, and the data (bytes) sent to the device depend on the manufacturer. This post will show you how to view the waveform of an infrared signal emitted by a remote control, and how to analyze and decode it. Then it will be possible to reproduce it. In this way you can program your universal remote control or your mobile device (a smartphone with an IR transmitter) with the right code for the best results. Because the IR carrier frequency is low, an ordinary soundcard will be used for signal analysis instead of an oscilloscope.

### High quality scanning vs. small file size

This post will not talk about scanning only, but about the whole process of turning a sheet of paper, a magazine or a book into a high quality digital document of the smallest possible size. Nothing will be lost during the conversion, as lossless compression will be used and OCR will be run on the documents. A low quality scan is hard to read (and impossible to run OCR on). A high quality scan must preserve as much detail as possible, must have the correct page size (when printed at 100%, the resulting copy should be exactly the same size as the original), must load as quickly as possible on low-end devices and shouldn't eat the whole drive space. Software processing of the raw image from the scanner plays a very important role. Yet, if the image from the scanner has a low resolution, further processing is useless and may have negative results. All software used in this tutorial is free (some apps are open-source). But let's start with the basics.

### Draw electronic schematics using LibreOffice

Draw is a powerful vector graphics drawing application. It is part of the free office suites LibreOffice and Apache OpenOffice. There are a lot of free EDA software solutions, but none of them allows users to 'customize' the schematics as they want (for example part colors - outline, background - or including high resolution images in the schematic, etc.). This tutorial will cover some aspects of drawing schematics in Draw. Similarly, schematics can be drawn in any other graphics software (for example Inkscape, even GIMP and Scribus).

### Make a bootable Windows USB from Linux

Ubuntu already has an application called Startup Disk Creator, but this can only be used to make Linux bootable USB drives. To make a Windows bootable USB there is an application called WinUSB, but it hasn't been updated for a while. The following guide has been updated and works on any Linux distribution, as long as it has GRUB and GParted installed, and can make a bootable USB for any Windows version newer than Vista: Windows Vista, Windows 7, Windows 8, Windows 8.1 and Windows 10. UEFI boot is only supported for Windows 7 x64 and newer.

Before starting, let's mention that there are two types of boot methods. There is the MBR code type, where the bootable executable is stored in a reserved section at the beginning of the storage device. And there is the EFI type, where the boot loader executable file is stored at a standard path in a FAT32 filesystem. You must decide in advance what you will use. There are some variables for each boot type, summarized in the table below. If you have no idea what to use, the most common setup that works with unmodified Windows sources is an msdos partition table with a fat32 filesystem and the partition flagged with boot. In this way you will get a drive that is both MBR and UEFI bootable.
| | Partition table | Filesystem | Partition flag |
| --- | --- | --- | --- |
| MBR bootable | msdos | ntfs / fat32 | boot |
| UEFI bootable | msdos / gpt | fat32 | boot / msftdata * |

\* msdos should be flagged with boot and gpt should be flagged with msftdata.

UEFI can only boot FAT32 drives! If you need to make an NTFS UEFI bootable flashdrive, to remove the 4 GB maximum file size restriction of FAT32, see this: UEFI NTFS: Bootable Windows USB from Linux. If you prefer, there is also a video version of what is about to follow.

## 1. Format USB drive

This is the first step. GParted has a nice GUI and it is easy to use for this. So, plug in your USB flashdrive and start GParted (root permissions required). Select the USB drive and unmount it, otherwise you won't be able to format it. Warning! Selecting the wrong device will result in data loss!

[Figure: GParted main window. The first thing to do is select the USB drive.]

Right-click the USB drive partition and select Unmount. You must re-create the partition table by going to the Device menu, then select Create Partition Table. Choose msdos (or gpt if you want a UEFI-only bootable drive) and click Apply.

[Figure: The Partition Table dialog]

Right click the unallocated space and select New. Make a primary NTFS or FAT32 partition and give it a label too. The label must be as strange as possible, because the bootloader will identify the bootable partition by this, and you should not use windows like I did in the video! If the filesystem is FAT32, use only uppercase letters. For example: WUSB1840 would be a good label (W for Windows, USB for USB flash drive and 18:40 is the time I was writing this). Remember the label, as you will need it later. If you have a customized Windows with install.wim larger than 4 GB, you should definitely go for NTFS. Otherwise, if you choose FAT32, you could get the flashdrive bootable from UEFI too.

[Figure: New partition dialog]

Apply all pending operations from the Edit menu - Apply all operations - or click the button on the main window.

[Figure: The Apply button from the main window of GParted]

Right click the partition and choose Manage flags. If you chose the msdos partition table, tick boot. If you chose the gpt partition table, msftdata should already be checked.

## 2. Copy Windows files

Quit GParted and use the file manager to copy all files from the Windows ISO to the USB stick. Mount the ISO using Open with - Disk Image Mounter (if you use Nautilus as a file manager). If that fails, you can use Furius ISO Mount and loop-mount the ISO. Select all files Ctrl+A and Copy to the USB drive, which will be automatically mounted when you click on it at /media/<username>/<drive_label>. After the copy process is finished, look in the USB root folder for the boot directory. If it is uppercase, rename it to lowercase.

## 3. Make it bootable

If you used NTFS filesystem and MSDOS table, only method A is available. If you used FAT32 and MSDOS table, you can apply method A, B or both. If you used GPT partition table, only method B should be followed.

### A. MBR bootable

GRUB will be used for that. Open a Terminal and run:

sudo grub-install --target=i386-pc --boot-directory="/media/<username>/<drive_label>/boot" /dev/sdX

Replace:
• /media/<username>/<drive_label> with the path where the USB drive is mounted;
• /dev/sdX with the USB drive, not the partition (e.g. /dev/sdb)

Warning! Selecting the wrong device (/dev/sdX) may result in bootloader corruption of the running operating system!

Wait for it to finish. If everything is OK, you should see:

Installing for i386-pc platform.
Installation finished. No error reported.
Now, create a text file and write the following in it:

default=1
timeout=15
color_normal=light-cyan/dark-gray

menuentry "Start Windows Installation" {
insmod ntfs
insmod search_label
search --no-floppy --set=root --label <USB_drive_label> --hint hd0,msdos1
ntldr /bootmgr
boot
}

menuentry "Boot from the first hard drive" {
insmod ntfs
insmod chain
insmod part_msdos
insmod part_gpt
set root=(hd1)
boot
}

Replace <USB_drive_label> with the label from step 1 (you can place it between quotes if it contains a space, although it is not recommended to use spaces in drive labels). Save the file as grub.cfg and put it on the USB drive in the boot/grub folder. That's it. The USB drive is now bootable from BIOS and can be used to install Windows on your PC. The first time you boot from it in MBR BIOS or CSM mode, select Start Windows Installation.

### B. UEFI bootable

Not all Windows versions are supported. Windows 7 on 64 bits, Windows 8 and newer versions should work. After the copy process is finished, look in the USB root folder for the efi/boot directory. If there's a bootx64.efi or bootia32.efi file there, then you're done. You can boot from your USB in UEFI mode. If the OS you are making a bootable USB for is Windows 7, browse the efi/microsoft folder and copy the entire boot folder from this path one level up, into the efi folder. Merge folders if boot already exists.

Here is what to do if you don't have the bootx64.efi file in the efi/boot folder. Browse the mounted Windows ISO image into the sources folder. Open install.wim (or install.esd) with your archive manager (you will need 7z installed). Go to the path ./1/Windows/Boot/EFI and extract the file bootmgfw.efi anywhere you want. Rename it to bootx64.efi and put it on the USB drive, in the efi/boot folder. If you can't find bootmgfw.efi in install.wim, then you probably have a 32 bit Windows ISO or another type of image (recovery disks, upgrade versions). You can now boot from your USB in UEFI mode.

## Errors

1. modinfo.sh doesn't exist

grub-install: error: /usr/lib/grub/i386-pc/modinfo.sh doesn't exist. Please specify --target or --directory.

Install the grub-pc-bin package with sudo apt install grub-pc-bin and run the grub-install command again.

2. Embedding errors

If you get embedding errors (something like filesystem 'x' does not support embedding or Embedding is not possible), be sure you are installing GRUB to the USB device and not the USB partition. Most likely you typed /dev/sdb1 instead of /dev/sdb (sdb is just an example here). If it still doesn't work, try zeroing the USB drive (at least some sectors at the beginning) or use a different USB flash drive.

3. Blocklists

Sometimes, GRUB will not want to install on some flash drives. Try to force it by adding the --force argument to the grub-install command.

4. Alternate root partition selection

The root partition selection may fail if your USB flash drive partition has the same label as one of the partitions on the target computer. The best way of setting the root partition is by UUID. Launch GParted again and select the USB flashdrive. Right click the partition and select Information. Note the UUID field.

[Figure: Partition UUID]

In grub.cfg, replace the line:

search --no-floppy --set=root --label <USB_drive_label> --hint hd0,msdos1

with:

insmod search_fs_uuid
search --no-floppy --fs-uuid --set root <drive_UUID>

where you will replace <drive_UUID> with the UUID you got from GParted.

Still getting errors? If you want a useful answer, please post a comment with the complete grub-install command and the error message.
### Change DPI in Ubuntu

By default, Ubuntu assumes you use a 96 DPI monitor. But nowadays pixel density keeps increasing, as monitors with high resolution have become accessible. There is no straightforward option in Ubuntu with Unity to change the default DPI, which is assumed to be 96 DPI (run xrdb -query in a terminal). But there are two parameters which control the user interface DPI (this affects font rendering too) and font rendering only. The parameter that controls user interface and font DPI is scaling-factor, and the font rendering parameter is text-scaling-factor. These parameters can be modified by the Displays application, by GNOME Tweak Tool and by Unity Tweak Tool, but none of these allows you to set a value with 3-4 decimals (a gsettings sketch for that follows after these excerpts).

### Computer PSU start circuit with bicolor LED

There are a lot of tutorials on the internet on how to turn a computer PSU into a bench top power supply. Most of them involve adding a load on the 5V line and turning the PSU on by grounding the PS_ON wire via a switch. Here is a nice indicator and turn on/off switch for computer PSUs.

ATX PSUs work well under rather constant loads. So if you power up the PSU with a small load of a few tens of milliamps and then connect a greater load, for example a car light bulb which may require 3-4 A, the PSU may shut down. The same happens in case of an accidental short circuit. You may believe that the PSU got broken or that its fuse blew. That's not true. The PSU automatically shut itself off.

### Regular backups using Grive2 on Ubuntu

Grive is a Google Drive client for Linux that can do two-way synchronization between Google Drive and a local folder. The synchronization is done whenever the user launches the application, either from a launcher or from the command line (it is a CLI application). By adding grive to crontab, periodic backups of important folders can be made. And no user interaction is required, because the process is automated. Here is how to do it in Ubuntu. There is no system load when the process is not running, but this comes with a disadvantage: no filesystem monitoring. Any updates are made during the automatic execution of Grive.

MxL5007T is a tuner IC designed mostly for digital signals (DVB-T, ATSC), but it can be used for analog reception too. I will show you how I took it out of a receiver so I can use it in my projects. It has a programmable IF output and it can receive anything from 44 to 885 MHz. There is no datasheet for it, but there are Linux drivers.

### Design of a TV Tuner based radio scanner

Building radio frequency devices becomes difficult starting from the VHF band. Moreover, tuning various stages is difficult without expensive electronics equipment. But a ready-made unit can be used as the frontend of any radio receiver operating in the VHF – UHF band. That unit is the tuner coming from a TV, from a STB or from a PC card. It has the advantage of covering a wide spectrum of frequencies at a reasonable reception quality, well above homemade radio receivers.

In order to build a TV tuner radio scanner you need, of course, a tuner. You must also be able to build some detectors (demodulators) for the common analog modulation schemes. Some frequency mixers need to be built too. Although designed for analog modulation, the scanner is able to demodulate digital signals using the PC sound card as input. The whole constructional project is not difficult, because it is modular and you don't have to build all modules to get it working with a specific RF signal.
Let's take a simple example: if you are interested in receiving broadcast FM radio stations, you only need a simple FM detector that can take the tuner's intermediate frequency as input. Of course, if you want to improve it you can add a stereo and RDS decoder. On the other hand, if you are interested in digital aviation signals, you'll need a detector with a tunable local oscillator to overcome the tuner's large frequency step, and a tunable IF filter. To maintain the modular design of the project, you could make this from two modules: a frequency downconverter with the tunable oscillator and filter, followed by a fixed frequency detector. In the end, the main point of building this is obtaining the same results as with Realtek SDR devices and SDR#.

The functional schematic of the scanner is shown in the figure below.

Functional schematic of the scanner.

Don't worry, you don't have to build everything from scratch.

### Render 3D realistic images of EAGLE circuit boards

Did you ever want to see how some of your electronic projects would look before you actually build them? This post will show you how to create realistic images of your PCB designs that you can use in presentations, websites or just to get an idea of how they will look.

The process is the following: you design a PCB using CadSoft EAGLE. Then, the Eagle3D script converts the EAGLE PCB file into a POV-Ray source. This is then rendered into something like below. POV-Ray is a very high quality and very configurable renderer. To get an idea of what it can do with the right input, see the hall of fame of images rendered with it.

There is also a video version of this and some new improvements in: [Video] Render 3D images of EAGLE PCB projects.

This post will describe:

• How to compile POV-Ray (at the time of writing this there were no Ubuntu packages; you can now install from repositories).
• How to use HDRI lighting to improve the quality of the rendered pictures.
• How to compile MegaPOV, a modified version of POV-Ray 3.6.

### Compile and setup PonyProg on Ubuntu

PonyProg is a device programming software for the SI Prog serial interface programmer designed by Claudio Lanconelli. The latest version of the PonyProg software can be found on SourceForge, but there are no Ubuntu packages. Here is how to compile it on Ubuntu. This post was updated for Ubuntu 16.10.

First you'll have to install some development libraries:

    sudo apt install build-essential libxt-dev libxmu-dev libxaw7-dev

Then download the archive from SourceForge and extract it to a folder of your choice (at the time of writing this, the latest version is 2.08d).
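Regarding the Change DPI in Ubuntu excerpt above, here is a minimal sketch of setting a fractional scaling value from the terminal. The text-scaling-factor key is the standard GNOME/Unity font-scaling key; the value 1.0417 is just an example (100/96, i.e. a 100 DPI equivalent):

    gsettings set org.gnome.desktop.interface text-scaling-factor 1.0417

Unlike the GUI tweak tools, gsettings accepts an arbitrary decimal value here.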
https://www.unix.com/man-page/centos/1/perlport/
PERLPORT(1)                 Perl Programmers Reference Guide                PERLPORT(1)

NAME
    perlport - Writing portable Perl

DESCRIPTION
    Perl runs on numerous operating systems. While most of them share much in common, they also have their own unique features.

    This document is meant to help you to find out what constitutes portable Perl code. That way once you make a decision to write portably, you know where the lines are drawn, and you can stay within them.

    There is a tradeoff between taking full advantage of one particular type of computer and taking advantage of a full range of them. Naturally, as you broaden your range and become more diverse, the common factors drop, and you are left with an increasingly smaller area of common ground in which you can operate to accomplish a particular task. Thus, when you begin attacking a problem, it is important to consider under which part of the tradeoff curve you want to operate. Specifically, you must decide whether it is important that the task that you are coding have the full generality of being portable, or whether to just get the job done right now. This is the hardest choice to be made. The rest is easy, because Perl provides many choices, whichever way you want to approach your problem.

    Looking at it another way, writing portable code is usually about willfully limiting your available choices. Naturally, it takes discipline and sacrifice to do that. The product of portability and convenience may be a constant. You have been warned.

    Be aware of two important points:

    Not all Perl programs have to be portable
        There is no reason you should not use Perl as a language to glue Unix tools together, or to prototype a Macintosh application, or to manage the Windows registry. If it makes no sense to aim for portability for one reason or another in a given program, then don't bother.

    Nearly all of Perl already is portable
        Don't be fooled into thinking that it is hard to create portable Perl code. It isn't. Perl tries its level-best to bridge the gaps between what's available on different platforms, and all the means available to use those features. Thus almost all Perl code runs on any machine without modification. But there are some significant issues in writing portable code, and this document is entirely about those issues.

    Here's the general rule: When you approach a task commonly done using a whole range of platforms, think about writing portable code. That way, you don't sacrifice much by way of the implementation choices you can avail yourself of, and at the same time you can give your users lots of platform choices.
    On the other hand, when you have to take advantage of some unique feature of a particular platform, as is often the case with systems programming (whether for Unix, Windows, VMS, etc.), consider writing platform-specific code.

    When the code will run on only two or three operating systems, you may need to consider only the differences of those particular systems. The important thing is to decide where the code will run and to be deliberate in your decision.

    The material below is separated into three main sections: main issues of portability ("ISSUES"), platform-specific issues ("PLATFORMS"), and built-in perl functions that behave differently on various ports ("FUNCTION IMPLEMENTATIONS").

    This information should not be considered complete; it includes possibly transient information about idiosyncrasies of some of the ports, almost all of which are in a state of constant evolution. Thus, this material should be considered a perpetual work in progress.

ISSUES
   Newlines
    In most operating systems, lines in files are terminated by newlines. Just what is used as a newline may vary from OS to OS. Unix traditionally uses "\012", one type of DOSish I/O uses "\015\012", and Mac OS uses "\015".

    Perl uses "\n" to represent the "logical" newline, where what is logical may depend on the platform in use. In MacPerl, "\n" always means "\015". In DOSish perls, "\n" usually means "\012", but when accessing a file in "text" mode, perl uses the ":crlf" layer that translates it to (or from) "\015\012", depending on whether you're reading or writing. Unix does the same thing on ttys in canonical mode. "\015\012" is commonly referred to as CRLF.

    To trim trailing newlines from text lines use chomp(). With default settings that function looks for a trailing "\n" character and thus trims in a portable way. When dealing with binary files (or text files in binary mode) be sure to explicitly set $/ to the appropriate value for your file format before using chomp().

    Because of the "text" mode translation, DOSish perls have limitations in using "seek" and "tell" on a file accessed in "text" mode. Stick to "seek"-ing to locations you got from "tell" (and no others), and you are usually free to use "seek" and "tell" even in "text" mode. Using "seek" or "tell" or other file operations may be non-portable. If you use "binmode" on a file, however, you can usually "seek" and "tell" with arbitrary values in safety.

    A common misconception in socket programming is that "\n" eq "\012" everywhere. When using protocols such as common Internet protocols, "\012" and "\015" are called for specifically, and the values of the logical "\n" and "\r" (carriage return) are not reliable.

        print SOCKET "Hi there, client!\r\n";      # WRONG
        print SOCKET "Hi there, client!\015\012";  # RIGHT

    However, using "\015\012" (or "\cM\cJ", or "\x0D\x0A") can be tedious and unsightly, as well as confusing to those maintaining the code. As such, the Socket module supplies the Right Thing for those who want it.

        use Socket qw(:DEFAULT :crlf);
        print SOCKET "Hi there, client!$CRLF"      # RIGHT

    When reading from a socket, remember that the default input record separator $/ is "\n", but robust socket code will recognize either "\012" or "\015\012" as end of line:

        while (<SOCKET>) {
            # ...
        }

    Because both CRLF and LF end in LF, the input record separator can be set to LF and any CR stripped later.
    Better to write:

        use Socket qw(:DEFAULT :crlf);
        local($/) = LF;          # not needed if $/ is already \012

        while (<SOCKET>) {
            s/$CR?$LF/\n/;       # not sure if socket uses LF or CRLF, OK
        #   s/\015?\012/\n/;     # same thing
        }

    This example is preferred over the previous one--even for Unix platforms--because now any "\015"'s ("\cM"'s) are stripped out (and there was much rejoicing).

    Similarly, functions that return text data--such as a function that fetches a web page--should sometimes translate newlines before returning the data, if they've not yet been translated to the local newline representation. A single line of code will often suffice:

        $data =~ s/\015?\012/\n/g;
        return $data;

    Some of this may be confusing. Here's a handy reference to the ASCII CR and LF characters. You can print it out and stick it in your wallet.

        LF  eq  \012  eq  \x0A  eq  \cJ  eq  chr(10)  eq  ASCII 10
        CR  eq  \015  eq  \x0D  eq  \cM  eq  chr(13)  eq  ASCII 13

                 | Unix | DOS  | Mac |
            ---------------------------
            \n   |  LF  |  LF  |  CR |
            \r   |  CR  |  CR  |  LF |
            \n * |  LF  | CRLF |  CR |
            \r * |  CR  |  CR  |  LF |
            ---------------------------
            * text-mode STDIO

    The Unix column assumes that you are not accessing a serial line (like a tty) in canonical mode. If you are, then CR on input becomes "\n", and "\n" on output becomes CRLF.

    These are just the most common definitions of "\n" and "\r" in Perl. There may well be others. For example, on an EBCDIC implementation such as z/OS (OS/390) or OS/400 (using the ILE; the PASE is ASCII-based) the above material is similar to "Unix" but the code numbers change:

        LF  eq  \025  eq  \x15  eq  \cU  eq  chr(21)  eq  CP-1047 21
        LF  eq  \045  eq  \x25  eq           chr(37)  eq  CP-0037 37
        CR  eq  \015  eq  \x0D  eq  \cM  eq  chr(13)  eq  CP-1047 13
        CR  eq  \015  eq  \x0D  eq  \cM  eq  chr(13)  eq  CP-0037 13

                 | z/OS | OS/400 |
            ----------------------
            \n   |  LF  |  LF    |
            \r   |  CR  |  CR    |
            \n * |  LF  |  LF    |
            \r * |  CR  |  CR    |
            ----------------------
            * text-mode STDIO

   Numbers endianness and Width
    Different CPUs store integers and floating point numbers in different orders (called endianness) and widths (32-bit and 64-bit being the most common today). This affects your programs when they attempt to transfer numbers in binary format from one CPU architecture to another, usually either "live" via network connection, or by storing the numbers to secondary storage such as a disk file or tape.

    Conflicting storage orders make an utter mess out of the numbers. If a little-endian host (Intel, VAX) stores 0x12345678 (305419896 in decimal), a big-endian host (Motorola, Sparc, PA) reads it as 0x78563412 (2018915346 in decimal). Alpha and MIPS can be either: Digital/Compaq used/uses them in little-endian mode; SGI/Cray uses them in big-endian mode. To avoid this problem in network (socket) connections use the "pack" and "unpack" formats "n" and "N", the "network" orders. These are guaranteed to be portable.

    As of perl 5.9.2, you can also use the ">" and "<" modifiers to force big- or little-endian byte-order. This is useful if you want to store signed integers or 64-bit integers, for example.

    You can explore the endianness of your platform by unpacking a data structure packed in native format such as:

        print unpack("h*", pack("s2", 1, 2)), "\n";
        # '10002000' on e.g. Intel x86 or Alpha 21064 in little-endian mode
        # '00100020' on e.g. Motorola 68040

    If you need to distinguish between endian architectures you could use either of the variables set like so:

        $is_big_endian    = unpack("h*", pack("s", 1)) =~ /01/;
        $is_little_endian = unpack("h*", pack("s", 1)) =~ /^1/;

    Differing widths can cause truncation even between platforms of equal endianness. The platform of shorter width loses the upper parts of the number. There is no good solution for this problem except to avoid transferring or storing raw binary numbers.

    One can circumnavigate both these problems in two ways. Either transfer and store numbers always in text format, instead of raw binary, or else consider using modules like Data::Dumper (included in the standard distribution as of Perl 5.005) and Storable (included as of perl 5.8). Keeping all data as text significantly simplifies matters.

    The v-strings are portable only up to v2147483647 (0x7FFFFFFF); that's how far EBCDIC, or more precisely UTF-EBCDIC, will go.

   Files and Filesystems
    Most platforms these days structure files in a hierarchical fashion. So, it is reasonably safe to assume that all platforms support the notion of a "path" to uniquely identify a file on the system. How that path is really written, though, differs considerably.

    Although similar, file path specifications differ between Unix, Windows, Mac OS, OS/2, VMS, VOS, RISC OS, and probably others. Unix, for example, is one of the few OSes that has the elegant idea of a single root directory.

    DOS, OS/2, VMS, VOS, and Windows can work similarly to Unix with "/" as path separator, or in their own idiosyncratic ways (such as having several root directories and various "unrooted" device files such as NIL: and LPT:).

    Mac OS 9 and earlier used ":" as a path separator instead of "/".

    The filesystem may support neither hard links ("link") nor symbolic links ("symlink", "readlink", "lstat").

    The filesystem may support neither access timestamp nor change timestamp (meaning that about the only portable timestamp is the modification timestamp), or only a coarse granularity of any timestamps (e.g. the FAT filesystem limits the time granularity to two seconds).

    The "inode change timestamp" (the "-C" filetest) may really be the "creation timestamp" (which it is not in Unix).

    VOS perl can emulate Unix filenames with "/" as path separator. The native pathname characters greater-than, less-than, number-sign, and percent-sign are always accepted.

    RISC OS perl can emulate Unix filenames with "/" as path separator, or go native and use "." for path separator and ":" to signal filesystems and disk names.

    Don't assume Unix filesystem access semantics: that read, write, and execute are all the permissions there are, and even if they exist, that their semantics (for example what do r, w, and x mean on a directory) are the Unix ones. The various Unix/POSIX compatibility layers usually try to make interfaces like chmod() work, but sometimes there simply is no good mapping.

    If all this is intimidating, have no (well, maybe only a little) fear. There are modules that can help. The File::Spec modules provide methods to do the Right Thing on whatever platform happens to be running the program.

        use File::Spec::Functions;
        chdir(updir());        # go up one directory
        my $file = catfile(curdir(), 'temp', 'file.txt');
        # on Unix and Win32, './temp/file.txt'
        # on Mac OS Classic, ':temp:file.txt'
        # on VMS, '[.temp]file.txt'

    File::Spec is available in the standard distribution as of version 5.004_05. File::Spec::Functions is only in File::Spec 0.7 and later, and some versions of perl come with version 0.6.
    If File::Spec is not updated to 0.7 or later, you must use the object-oriented interface from File::Spec (or upgrade File::Spec).

    In general, production code should not have file paths hardcoded. Making them user-supplied or read from a configuration file is better, keeping in mind that file path syntax varies on different machines. This is especially noticeable in scripts like Makefiles and test suites, which often assume "/" as a path separator for subdirectories.

    Also of use is File::Basename from the standard distribution, which splits a pathname into pieces (base filename, full path to directory, and file suffix).

    Even when on a single platform (if you can call Unix a single platform), remember not to count on the existence or the contents of particular system-specific files or directories, like /etc/passwd, /etc/sendmail.conf, /etc/resolv.conf, or even /tmp/. For example, /etc/passwd may exist but not contain the encrypted passwords, because the system is using some form of enhanced security. Or it may not contain all the accounts, because the system is using NIS. If code does need to rely on such a file, include a description of the file and its format in the code's documentation, then make it easy for the user to override the default location of the file.

    Don't assume a text file will end with a newline. They should, but people forget.

    Do not have two files or directories of the same name with different case, like test.pl and Test.pl, as many platforms have case-insensitive (or at least case-forgiving) filenames. Also, try not to have non-word characters (except for ".") in the names, and keep them to the 8.3 convention, for maximum portability, onerous a burden though this may appear.

    Likewise, when using the AutoSplit module, try to keep your functions to 8.3 naming and case-insensitive conventions; or, at the least, make it so the resulting files have a unique (case-insensitively) first 8 characters.

    Whitespace in filenames is tolerated on most systems, but not all, and even on systems where it might be tolerated, some utilities might become confused by such whitespace.

    Many systems (DOS, VMS ODS-2) cannot have more than one "." in their filenames.

    Don't assume ">" won't be the first character of a filename. Always use "<" explicitly to open a file for reading, or even better, use the three-arg version of open, unless you want the user to be able to specify a pipe open.

        open(my $fh, '<', $existing_file) or die $!;

    If filenames might use strange characters, it is safest to open them with "sysopen" instead of "open". "open" is magic and can translate characters like ">", "<", and "|", which may be the wrong thing to do. (Sometimes, though, it's the right thing.) Three-arg open can also help protect against this translation in cases where it is undesirable.

    Don't use ":" as a part of a filename since many systems use that for their own semantics (Mac OS Classic for separating pathname components, many networking schemes and utilities for separating the nodename and the pathname, and so on). For the same reasons, avoid "@", ";" and "|".

    Don't assume that in pathnames you can collapse two leading slashes "//" into one: some networking and clustering filesystems have special semantics for that. Let the operating system sort it out.

    The portable filename characters as defined by ANSI C are

        a b c d e f g h i j k l m n o p q r s t u v w x y z
        A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
        0 1 2 3 4 5 6 7 8 9
        . _ -

    and the "-" shouldn't be the first character. A small validation sketch for this character set follows below.
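    The following rough illustration (not part of the original manual text) checks a name against that portable set; $name is a hypothetical input:

        # hedged sketch: warn about names outside the ANSI C portable set
        my $name = 'Report-2.txt';                 # example input
        warn "'$name' may not be portable\n"
            unless $name =~ /\A[A-Za-z0-9._-]+\z/ && $name !~ /\A-/;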
    If you want to be hypercorrect, stay case-insensitive and within the 8.3 naming convention (all the files and directories have to be unique within one directory if their names are lowercased and truncated to eight characters before the ".", if any, and to three characters after the ".", if any). (And do not use "."s in directory names.)

   System Interaction
    Not all platforms provide a command line. These are usually platforms that rely primarily on a Graphical User Interface (GUI) for user interaction. A program requiring a command line interface might not work everywhere. This is probably for the user of the program to deal with, so don't stay up late worrying about it.

    Some platforms can't delete or rename files held open by the system; this limitation may also apply to changing filesystem metainformation like file permissions or owners. Remember to "close" files when you are done with them. Don't "unlink" or "rename" an open file. Don't "tie" or "open" a file already tied or opened; "untie" or "close" it first.

    Don't open the same file more than once at a time for writing, as some operating systems put mandatory locks on such files.

    Don't assume that write/modify permission on a directory gives the right to add or delete files/directories in that directory. That is filesystem specific: in some filesystems you need write/modify permission also (or even just) in the file/directory itself. In some filesystems (AFS, DFS) the permission to add/delete directory entries is a completely separate permission.

    Don't assume that a single "unlink" completely gets rid of the file: some filesystems (most notably the ones in VMS) have versioned filesystems, and unlink() removes only the most recent one (it doesn't remove all the versions because by default the native tools on those platforms remove just the most recent version, too). The portable idiom to remove all the versions of a file is

        1 while unlink "file";

    This will terminate if the file is undeleteable for some reason (protected, not there, and so on).

    Don't count on a specific environment variable existing in %ENV. Don't count on %ENV entries being case-sensitive, or even case-preserving. Don't try to clear %ENV by saying "%ENV = ();", or, if you really have to, make it conditional on "$^O ne 'VMS'" since in VMS the %ENV table is much more than a per-process key-value string table.

    On VMS, some entries in the %ENV hash are dynamically created when their key is used on a read if they did not previously exist. The values for $ENV{HOME}, $ENV{TERM}, $ENV{PATH}, and $ENV{USER} are known to be dynamically generated. The specific names that are dynamically generated may vary with the version of the C library on VMS, and more may exist than are documented.

    On VMS by default, changes to the %ENV hash are persistent after the process exits. This can cause unintended issues.

    Don't count on signals or %SIG for anything.

    Don't count on filename globbing. Use "opendir", "readdir", and "closedir" instead.

    Don't count on per-program environment variables, or per-program current directories.

    Don't count on specific values of $!, neither numeric nor especially the string values. Users may switch their locales, causing error messages to be translated into their languages. If you can trust a POSIXish environment, you can portably use the symbols defined by the Errno module, like ENOENT. And don't trust the values of $! at all except immediately after a failed system call.
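    A brief illustration of the Errno advice above (not from the original manual; 'no-such-file' is a made-up name):

        use Errno qw(ENOENT);

        unless (open my $fh, '<', 'no-such-file') {
            # $! compares numerically against Errno constants
            print "file is simply absent\n" if $! == ENOENT;
        }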
   Command names versus file pathnames
    Don't assume that the name used to invoke a command or program with "system" or "exec" can also be used to test for the existence of the file that holds the executable code for that command or program. First, many systems have "internal" commands that are built-in to the shell or OS, and while these commands can be invoked, there is no corresponding file. Second, some operating systems (e.g., Cygwin, DJGPP, OS/2, and VOS) have required suffixes for executable files; these suffixes are generally permitted on the command name but are not required. Thus, a command like "perl" might exist in a file named "perl", "perl.exe", or "perl.pm", depending on the operating system. The variable "_exe" in the Config module holds the executable suffix, if any. Third, the VMS port carefully sets up $^X and $Config{perlpath} so that no further processing is required. This is just as well, because the matching regular expression used below would then have to deal with a possible trailing version number in the VMS file name.

    To convert $^X to a file pathname, taking account of the requirements of the various operating system possibilities, say:

        use Config;
        my $thisperl = $^X;
        if ($^O ne 'VMS') {
            $thisperl .= $Config{_exe} unless $thisperl =~ m/$Config{_exe}$/i;
        }

    To convert $Config{perlpath} to a file pathname, say:

        use Config;
        my $thisperl = $Config{perlpath};
        if ($^O ne 'VMS') {
            $thisperl .= $Config{_exe} unless $thisperl =~ m/$Config{_exe}$/i;
        }

   Networking
    Don't assume that you can reach the public Internet.

    Don't assume that there is only one way to get through firewalls to the public Internet.

    Don't assume that you can reach the outside world through any other port than 80, or some web proxy. ftp is blocked by many firewalls.

    Don't assume that you can send email by connecting to the local SMTP port.

    Don't assume that you can reach yourself or any node by the name 'localhost'. The same goes for '127.0.0.1'. You will have to try both.

    Don't assume that the host has only one network card, or that it can't bind to many virtual IP addresses.

    Don't assume a particular network device name.

    Don't assume that a particular set of ioctl()s will work.

    Don't assume that you can ping hosts and get replies.

    Don't assume that any particular port (service) will respond.

    Don't assume that Sys::Hostname (or any other API or command) returns either a fully qualified hostname or a non-qualified hostname: it all depends on how the system had been configured. Also remember that for things such as DHCP and NAT, the hostname you get back might not be very useful.

    All the above "don't"s may look daunting, and they are, but the key is to degrade gracefully if one cannot reach the particular network service one wants. Croaking or hanging do not look very professional.

   Interprocess Communication (IPC)
    In general, don't directly access the system in code meant to be portable. That means, no "system", "exec", "fork", "pipe", "``", "qx//", "open" with a "|", nor any of the other things that makes being a perl hacker worth being.

    Commands that launch external processes are generally supported on most platforms (though many of them do not support any type of forking). The problem with using them arises from what you invoke them on. External tools are often named differently on different platforms, may not be available in the same location, might accept different arguments, can behave differently, and often present their results in a platform-dependent way.
    Thus, you should seldom depend on them to produce consistent results. (Then again, if you're calling netstat -a, you probably don't expect it to run on both Unix and CP/M.)

    One especially common bit of Perl code is opening a pipe to sendmail:

        open(MAIL, '|/usr/lib/sendmail -t')
            or die "cannot fork sendmail: $!";

    This is fine for systems programming when sendmail is known to be available. But it is not fine for many non-Unix systems, and even some Unix systems that may not have sendmail installed. If a portable solution is needed, see the various distributions on CPAN that deal with it. Mail::Mailer and Mail::Send in the MailTools distribution are commonly used, and provide several mailing methods, including mail, sendmail, and direct SMTP (via Net::SMTP) if a mail transfer agent is not available. Mail::Sendmail is a standalone module that provides simple, platform-independent mailing.

    The Unix System V IPC ("msg*(), sem*(), shm*()") is not available even on all Unix platforms.

    Do not use either the bare result of "pack("N", 10, 20, 30, 40)" or bare v-strings (such as "v10.20.30.40") to represent IPv4 addresses: both forms just pack the four bytes into network order. That this would be equal to the C language "in_addr" struct (which is what the socket code internally uses) is not guaranteed. To be portable use the routines of the Socket extension, such as "inet_aton()", "inet_ntoa()", and "sockaddr_in()".

    The rule of thumb for portable code is: Do it all in portable Perl, or use a module (that may internally implement it with platform-specific code, but expose a common interface).

   External Subroutines (XS)
    XS code can usually be made to work with any platform, but dependent libraries, header files, etc., might not be readily available or portable, or the XS code itself might be platform-specific, just as Perl code might be.
If you need to use timezones, express them in some unambiguous format like the exact number of minutes offset from UTC, or the POSIX timezone format. Don't assume that the epoch starts at 00:00:00, January 1, 1970, because that is OS- and implementation-specific. It is better to store a date in an unambiguous representation. The ISO 8601 standard defines YYYY-MM-DD as the date format, or YYYY-MM-DDTHH:MM:SS (that's a literal "T" separating the date from the time). Please do use the ISO 8601 instead of making us guess what date 02/03/04 might be. ISO 8601 even sorts nicely as-is. A text representation (like "1987-12-18") can be easily converted into an OS-specific value using a module like Date::Parse. An array of values, such as those returned by "localtime", can be converted to an OS-specific representation using Time::Local. When calculating specific times, such as for tests in time or date modules, it may be appropriate to calculate an offset for the epoch. require Time::Local; my $offset = Time::Local::timegm(0, 0, 0, 1, 0, 70); The value for$offset in Unix will be 0, but in Mac OS Classic will be some large number. $offset can then be added to a Unix time value to get what should be the proper value on any system. Character sets and character encoding Assume very little about character sets. Assume nothing about numerical values ("ord", "chr") of characters. Do not use explicit code point ranges (like \xHH-\xHH); use for example symbolic character classes like "[:print:]". Do not assume that the alphabetic characters are encoded contiguously (in the numeric sense). There may be gaps. Do not assume anything about the ordering of the characters. The lowercase letters may come before or after the uppercase letters; the lowercase and uppercase may be interlaced so that both "a" and "A" come before "b"; the accented and other international characters may be interlaced so that ae comes before "b". Internationalisation If you may assume POSIX (a rather large assumption), you may read more about the POSIX locale system from perllocale. The locale system at least attempts to make things a little bit more portable, or at least more convenient and native-friendly for non-English users. The system affects character sets and encoding, and date and time formatting--amongst other things. If you really want to be international, you should consider Unicode. See perluniintro and perlunicode for more information. If you want to use non-ASCII bytes (outside the bytes 0x00..0x7f) in the "source code" of your code, to be portable you have to be explicit about what bytes they are. Someone might for example be using your code under a UTF-8 locale, in which case random native bytes might be illegal ("Malformed UTF-8 ...") This means that for example embedding ISO 8859-1 bytes beyond 0x7f into your strings might cause trouble later. If the bytes are native 8-bit bytes, you can use the "bytes" pragma. If the bytes are in a string (regular expression being a curious string), you can often also use the "\xHH" notation instead of embedding the bytes as-is. (If you want to write your code in UTF-8, you can use the "utf8".) The "bytes" and "utf8" pragmata are available since Perl 5.6.0. System Resources If your code is destined for systems with severely constrained (or missing!) 
virtual memory systems then you want to be especially mindful of avoiding wasteful constructs such as: my @lines = <$very_large_file>; # bad while (<$fh>) {$file .= $_} # sometimes bad my$file = join('', <$fh>); # better The last two constructs may appear unintuitive to most people. The first repeatedly grows a string, whereas the second allocates a large chunk of memory in one go. On some systems, the second is more efficient that the first. Security Most multi-user platforms provide basic levels of security, usually implemented at the filesystem level. Some, however, unfortunately do not. Thus the notion of user id, or "home" directory, or even the state of being logged-in, may be unrecognizable on many platforms. If you write programs that are security-conscious, it is usually best to know what type of system you will be running under so that you can write code explicitly for that platform (or class of platforms). Don't assume the Unix filesystem access semantics: the operating system or the filesystem may be using some ACL systems, which are richer languages than the usual rwx. Even if the rwx exist, their semantics might be different. (From security viewpoint testing for permissions before attempting to do something is silly anyway: if one tries this, there is potential for race conditions. Someone or something might change the permissions between the permissions check and the actual operation. Just try the operation.) Don't assume the Unix user and group semantics: especially, don't expect the$< and $> (or the$( and $)) to work for switching identities (or memberships). Don't assume set-uid and set-gid semantics. (And even if you do, think twice: set-uid and set-gid are a known can of security worms.) Style For those times when it is necessary to have platform-specific code, consider keeping the platform-specific code in one place, making porting to other platforms easier. Use the Config module and the special variable$^O to differentiate platforms, as described in "PLATFORMS". Be careful in the tests you supply with your module or programs. Module code may be fully portable, but its tests might not be. This often happens when tests spawn off other processes or call external programs to aid in the testing, or when (as noted above) the tests assume certain things about the filesystem and paths. Be careful not to depend on a specific output style for errors, such as when checking $! after a failed system call. Using$! for anything else than displaying it as output is doubtful (though see the Errno module for testing reasonably portably for error value). Some platforms expect a certain output format, and Perl on those platforms may have been adjusted accordingly. Most specifically, don't anchor a regex when testing an error value. CPAN Testers Modules uploaded to CPAN are tested by a variety of volunteers on different platforms. These CPAN testers are notified by mail of each new upload, and reply to the list with PASS, FAIL, NA (not applicable to this platform), or UNKNOWN (unknown), along with any relevant notations. The purpose of the testing is twofold: one, to help developers fix any problems in their code that crop up because of lack of testing on other platforms; two, to provide users with information about whether a given module works on a given platform. Also see: o Mailing list: cpan-testers-discuss@perl.org o Testing results: PLATFORMS As of version 5.002, Perl is built with a $^O variable that indicates the operating system it was built on. 
This was implemented to help speed up code that would otherwise have to "use Config" and use the value of$Config{osname}. Of course, to get more detailed information about the system, looking into %Config is certainly recommended. %Config cannot always be trusted, however, because it was built at compile time. If perl was built in one place, then transferred elsewhere, some values may be wrong. The values may even have been edited after the fact. Unix Perl works on a bewildering variety of Unix and Unix-like platforms (see e.g. most of the files in the hints/ directory in the source code kit). On most of these systems, the value of $^O (hence$Config{'osname'}, too) is determined either by lowercasing and stripping punctuation from the first field of the string returned by typing "uname -a" (or a similar command) at the shell prompt or by testing the file system for the presence of uniquely named files such as a kernel or header file. Here, for example, are a few of the more popular Unix flavors: uname $^O$Config{'archname'} -------------------------------------------- AIX aix aix BSD/OS bsdos i386-bsdos Darwin darwin darwin dgux dgux AViiON-dgux DYNIX/ptx dynixptx i386-dynixptx FreeBSD freebsd freebsd-i386 Haiku haiku BePC-haiku Linux linux arm-linux Linux linux i386-linux Linux linux i586-linux Linux linux ppc-linux HP-UX hpux PA-RISC1.1 IRIX irix irix Mac OS X darwin darwin NeXT 3 next next-fat NeXT 4 next OPENSTEP-Mach openbsd openbsd i386-openbsd OSF1 dec_osf alpha-dec_osf reliantunix-n svr4 RM400-svr4 SCO_SV sco_sv i386-sco_sv SINIX-N svr4 RM400-svr4 sn4609 unicos CRAY_C90-unicos sn6521 unicosmk t3e-unicosmk sn9617 unicos CRAY_J90-unicos SunOS solaris sun4-solaris SunOS solaris i86pc-solaris SunOS4 sunos sun4-sunos Because the value of $Config{archname} may depend on the hardware architecture, it can vary more than the value of$^O. DOS and Derivatives Perl has long been ported to Intel-style microcomputers running under systems like PC-DOS, MS-DOS, OS/2, and most Windows platforms you can bring yourself to mention (except for Windows CE, if you count that). Users familiar with COMMAND.COM or CMD.EXE style shells should be aware that each of these file specifications may have subtle differences: my $filespec0 = "c:/foo/bar/file.txt"; my$filespec1 = "c:\\foo\\bar\\file.txt"; my $filespec2 = 'c:\foo\bar\file.txt'; my$filespec3 = 'c:\\foo\\bar\\file.txt'; System calls accept either "/" or "\" as the path separator. However, many command-line utilities of DOS vintage treat "/" as the option prefix, so may get confused by filenames containing "/". Aside from calling any external programs, "/" will work just fine, and probably better, as it is more consistent with popular usage, and avoids the problem of remembering what to backwhack and what not to. The DOS FAT filesystem can accommodate only "8.3" style filenames. Under the "case- insensitive, but case-preserving" HPFS (OS/2) and NTFS (NT) filesystems you may have to be careful about case returned with functions like "readdir" or used with functions like "open" or "opendir". DOS also treats several filenames as special, such as AUX, PRN, NUL, CON, COM1, LPT1, LPT2, etc. Unfortunately, sometimes these filenames won't even work if you include an explicit directory prefix. It is best to avoid such filenames, if you want your code to be portable to DOS and its derivatives. It's hard to know what these all are, unfortunately. 
Users of these operating systems may also wish to make use of scripts such as pl2bat.bat or pl2cmd to put wrappers around your scripts. Newline ("\n") is translated as "\015\012" by STDIO when reading from and writing to files (see "Newlines"). "binmode(FILEHANDLE)" will keep "\n" translated as "\012" for that filehandle. Since it is a no-op on other systems, "binmode" should be used for cross- platform code that deals with binary data. That's assuming you realize in advance that your data is in binary. General-purpose programs should often assume nothing about their data. The $^O variable and the$Config{archname} values for various DOSish perls are as follows: OS $^O$Config{archname} ID Version -------------------------------------------------------- MS-DOS dos ? PC-DOS dos ? OS/2 os2 ? Windows 3.1 ? ? 0 3 01 Windows 95 MSWin32 MSWin32-x86 1 4 00 Windows 98 MSWin32 MSWin32-x86 1 4 10 Windows ME MSWin32 MSWin32-x86 1 ? Windows NT MSWin32 MSWin32-x86 2 4 xx Windows NT MSWin32 MSWin32-ALPHA 2 4 xx Windows NT MSWin32 MSWin32-ppc 2 4 xx Windows 2000 MSWin32 MSWin32-x86 2 5 00 Windows XP MSWin32 MSWin32-x86 2 5 01 Windows 2003 MSWin32 MSWin32-x86 2 5 02 Windows Vista MSWin32 MSWin32-x86 2 6 00 Windows 7 MSWin32 MSWin32-x86 2 6 01 Windows 7 MSWin32 MSWin32-x64 2 6 01 Windows 2008 MSWin32 MSWin32-x86 2 6 01 Windows 2008 MSWin32 MSWin32-x64 2 6 01 Windows CE MSWin32 ? 3 Cygwin cygwin cygwin The various MSWin32 Perl's can distinguish the OS they are running on via the value of the fifth element of the list returned from Win32::GetOSVersion(). For example: if ($^O eq 'MSWin32') { my @os_version_info = Win32::GetOSVersion(); print +('3.1','95','NT')[$os_version_info[4]],"\n"; } There are also Win32::IsWinNT() and Win32::IsWin95(), try "perldoc Win32", and as of libwin32 0.19 (not part of the core Perl distribution) Win32::GetOSName(). The very portable POSIX::uname() will work too: c:\> perl -MPOSIX -we "print join '|', uname" Windows NT|moonru|5.0|Build 2195 (Service Pack 2)|x86 Also see: o The djgpp environment for DOS, and perldos. o The EMX environment for DOS, OS/2, etc. emx@iaehv.nl, Also perlos2. o Build instructions for Win32 in perlwin32, or under the Cygnus environment in perlcygwin. o The "Win32::*" modules in Win32. o The ActiveState Pages, o The Cygwin environment for Win32; README.cygwin (installed as perlcygwin), o The U/WIN environment for Win32, o Build instructions for OS/2, perlos2 VMS Perl on VMS is discussed in perlvms in the perl distribution. The official name of VMS as of this writing is OpenVMS. Perl on VMS can accept either VMS- or Unix-style file specifications as in either of the following: $perl -ne "print if /perl_setup/i" SYS$LOGIN:LOGIN.COM $perl -ne "print if /perl_setup/i" /sys$login/login.com but not a mixture of both as in: $perl -ne "print if /perl_setup/i" sys$login:/login.com Can't open sys$login:/login.com: file specification syntax error Interacting with Perl from the Digital Command Language (DCL) shell often requires a different set of quotation marks than Unix shells do. For example:$ perl -e "print ""Hello, world.\n""" Hello, world. There are several ways to wrap your perl scripts in DCL .COM files, if you are so inclined. For example: $write sys$output "Hello from DCL!" $if p1 .eqs. 
""$ then perl -x 'f$environment("PROCEDURE")$ else perl -x - 'p1 'p2 'p3 'p4 'p5 'p6 'p7 'p8 $deck/dollars="__END__" #!/usr/bin/perl print "Hello from Perl!\n"; __END__$ endif Do take care with "$ASSIGN/nolog/user SYS$COMMAND: SYS$INPUT" if your perl-in-DCL script expects to do things like "$read = ;". The VMS operating system has two filesystems, known as ODS-2 and ODS-5. For ODS-2, filenames are in the format "name.extension;version". The maximum length for filenames is 39 characters, and the maximum length for extensions is also 39 characters. Version is a number from 1 to 32767. Valid characters are "/[A-Z0-9$_-]/". The ODS-2 filesystem is case-insensitive and does not preserve case. Perl simulates this by converting all filenames to lowercase internally. For ODS-5, filenames may have almost any character in them and can include Unicode characters. Characters that could be misinterpreted by the DCL shell or file parsing utilities need to be prefixed with the "^" character, or replaced with hexadecimal characters prefixed with the "^" character. Such prefixing is only needed with the pathnames are in VMS format in applications. Programs that can accept the Unix format of pathnames do not need the escape characters. The maximum length for filenames is 255 characters. The ODS-5 file system can handle both a case preserved and a case sensitive mode. ODS-5 is only available on the OpenVMS for 64 bit platforms. Support for the extended file specifications is being done as optional settings to preserve backward compatibility with Perl scripts that assume the previous VMS limitations. In general routines on VMS that get a Unix format file specification should return it in a Unix format, and when they get a VMS format specification they should return a VMS format unless they are documented to do a conversion. For routines that generate return a file specification, VMS allows setting if the C library which Perl is built on if it will be returned in VMS format or in Unix format. With the ODS-2 file system, there is not much difference in syntax of filenames without paths for VMS or Unix. With the extended character set available with ODS-5 there can be a significant difference. Because of this, existing Perl scripts written for VMS were sometimes treating VMS and Unix filenames interchangeably. Without the extended character set enabled, this behavior will mostly be maintained for backwards compatibility. When extended characters are enabled with ODS-5, the handling of Unix formatted file specifications is to that of a Unix system. VMS file specifications without extensions have a trailing dot. An equivalent Unix file specification should not show the trailing dot. The result of all of this, is that for VMS, for portable scripts, you can not depend on Perl to present the filenames in lowercase, to be case sensitive, and that the filenames could be returned in either Unix or VMS format. And if a routine returns a file specification, unless it is intended to convert it, it should return it in the same format as it found it. "readdir" by default has traditionally returned lowercased filenames. When the ODS-5 support is enabled, it will return the exact case of the filename on the disk. Files without extensions have a trailing period on them, so doing a "readdir" in the default mode with a file named A.;5 will return a. when VMS is (though that file could be opened with "open(FH, 'A')"). 
With support for extended file specifications and if "opendir" was given a Unix format directory, a file named A.;5 will return a and optionally in the exact case on the disk. When "opendir" is given a VMS format directory, then "readdir" should return a., and again with the optionally the exact case. RMS had an eight level limit on directory depths from any rooted logical (allowing 16 levels overall) prior to VMS 7.2, and even with versions of VMS on VAX up through 7.3. Hence "PERL_ROOT:[LIB.2.3.4.5.6.7.8]" is a valid directory specification but "PERL_ROOT:[LIB.2.3.4.5.6.7.8.9]" is not. Makefile.PL authors might have to take this into account, but at least they can refer to the former as "/PERL_ROOT/lib/2/3/4/5/6/7/8/". Pumpkings and module integrators can easily see whether files with too many directory levels have snuck into the core by running the following in the top-level source directory:$ perl -ne "$_=~s/\s+.*//; print if scalar(split /\//) > 8;" < MANIFEST The VMS::Filespec module, which gets installed as part of the build process on VMS, is a pure Perl module that can easily be installed on non-VMS platforms and can be helpful for conversions to and from RMS native formats. It is also now the only way that you should check to see if VMS is in a case sensitive mode. What "\n" represents depends on the type of file opened. It usually represents "\012" but it could also be "\015", "\012", "\015\012", "\000", "\040", or nothing depending on the file organization and record format. The VMS::Stdio module provides access to the special fopen() requirements of files with unusual attributes on VMS. TCP/IP stacks are optional on VMS, so socket routines might not be implemented. UDP sockets may not be supported. The TCP/IP library support for all current versions of VMS is dynamically loaded if present, so even if the routines are configured, they may return a status indicating that they are not implemented. The value of$^O on OpenVMS is "VMS". To determine the architecture that you are running on without resorting to loading all of %Config you can examine the content of the @INC array like so: if (grep(/VMS_AXP/, @INC)) { print "I'm on Alpha!\n"; } elsif (grep(/VMS_VAX/, @INC)) { print "I'm on VAX!\n"; } elsif (grep(/VMS_IA64/, @INC)) { print "I'm on IA64!\n"; } else { print "I'm not so sure about where $^O is...\n"; } In general, the significant differences should only be if Perl is running on VMS_VAX or one of the 64 bit OpenVMS platforms. On VMS, perl determines the UTC offset from the "SYS$TIMEZONE_DIFFERENTIAL" logical name. Although the VMS epoch began at 17-NOV-1858 00:00:00.00, calls to "localtime" are adjusted to count offsets from 01-JAN-1970 00:00:00.00, just like Unix. Also see: o README.vms (installed as README_vms), perlvms o vmsperl list, vmsperl-subscribe@perl.org o vmsperl on the web, VOS Perl on VOS (also known as OpenVOS) is discussed in README.vos in the perl distribution (installed as perlvos). Perl on VOS can accept either VOS- or Unix-style file specifications as in either of the following: $perl -ne "print if /perl_setup/i" >system>notices$ perl -ne "print if /perl_setup/i" /system/notices or even a mixture of both as in: $perl -ne "print if /perl_setup/i" >system/notices Even though VOS allows the slash character to appear in object names, because the VOS port of Perl interprets it as a pathname delimiting character, VOS files, directories, or links whose names contain a slash character cannot be processed. 
Such files must be renamed before they can be processed by Perl. Older releases of VOS (prior to OpenVOS Release 17.0) limit file names to 32 or fewer characters, prohibit file names from starting with a "-" character, and prohibit file names from containing any character matching "tr/ !#%&'()*;<=>?//". Newer releases of VOS (OpenVOS Release 17.0 or later) support a feature known as extended names. On these releases, file names can contain up to 255 characters, are prohibited from starting with a "-" character, and the set of prohibited characters is reduced to any character matching "tr/#%*<>?//". There are restrictions involving spaces and apostrophes: these characters must not begin or end a name, nor can they immediately precede or follow a period. Additionally, a space must not immediately precede another space or hyphen. Specifically, the following character combinations are prohibited: space-space, space-hyphen, period-space, space-period, period-apostrophe, apostrophe- period, leading or trailing space, and leading or trailing apostrophe. Although an extended file name is limited to 255 characters, a path name is still limited to 256 characters. The value of$^O on VOS is "VOS". To determine the architecture that you are running on without resorting to loading all of %Config you can examine the content of the @INC array like so: if ($^O =~ /VOS/) { print "I'm on a Stratus box!\n"; } else { print "I'm not on a Stratus box!\n"; die; } Also see: o README.vos (installed as perlvos) o The VOS mailing list. There is no specific mailing list for Perl on VOS. You can post comments to the comp.sys.stratus newsgroup, or use the contact information located in the distribution files on the Stratus Anonymous FTP site. o VOS Perl on the web at EBCDIC Platforms Recent versions of Perl have been ported to platforms such as OS/400 on AS/400 minicomputers as well as OS/390, VM/ESA, and BS2000 for S/390 Mainframes. Such computers use EBCDIC character sets internally (usually Character Code Set ID 0037 for OS/400 and either 1047 or POSIX-BC for S/390 systems). On the mainframe perl currently works under the "Unix system services for OS/390" (formerly known as OpenEdition), VM/ESA OpenEdition, or the BS200 POSIX-BC system (BS2000 is supported in perl 5.6 and greater). See perlos390 for details. Note that for OS/400 there is also a port of Perl 5.8.1/5.9.0 or later to the PASE which is ASCII-based (as opposed to ILE which is EBCDIC-based), see perlos400. As of R2.5 of USS for OS/390 and Version 2.3 of VM/ESA these Unix sub-systems do not support the "#!" shebang trick for script invocation. Hence, on OS/390 and VM/ESA perl scripts can be executed with a header similar to the following simple script: : # use perl eval 'exec /usr/local/bin/perl -S$0 ${1+"$@"}' if 0; #!/usr/local/bin/perl # just a comment really print "Hello from perl!\n"; OS/390 will support the "#!" shebang trick in release 2.8 and beyond. Calls to "system" and backticks can use POSIX shell syntax on all S/390 systems. On the AS/400, if PERL5 is in your library list, you may need to wrap your perl scripts in a CL procedure to invoke them like so: BEGIN CALL PGM(PERL5/PERL) PARM('/QOpenSys/hello.pl') ENDPGM This will invoke the perl script hello.pl in the root of the QOpenSys file system. On the AS/400 calls to "system" or backticks must use CL syntax. 
On these platforms, bear in mind that the EBCDIC character set may have an effect on what happens with some perl functions (such as "chr", "pack", "print", "printf", "ord", "sort", "sprintf", "unpack"), as well as bit-fiddling with ASCII constants using operators like "^", "&" and "|", not to mention dealing with socket interfaces to ASCII computers (see "Newlines").

Fortunately, most web servers for the mainframe will correctly translate the "\n" in the following statement to its ASCII equivalent ("\r" is the same under both Unix and OS/390 & VM/ESA):

    print "Content-type: text/html\r\n\r\n";

The values of $^O on some of these platforms include:

    uname         $^O        $Config{'archname'}
    --------------------------------------------
    OS/390        os390      os390
    OS400         os400      os400
    POSIX-BC      posix-bc   BS2000-posix-bc
    VM/ESA        vmesa      vmesa

Some simple tricks for determining if you are running on an EBCDIC platform could include any of the following (perhaps all):

    if ("\t" eq "\005")  { print "EBCDIC may be spoken here!\n"; }

    if (ord('A') == 193) { print "EBCDIC may be spoken here!\n"; }

    if (chr(169) eq 'z') { print "EBCDIC may be spoken here!\n"; }

One thing you may not want to rely on is the EBCDIC encoding of punctuation characters, since these may differ from code page to code page (and once your module or script is rumoured to work with EBCDIC, folks will want it to work with all EBCDIC character sets).

Also see:

    o  perlos390, README.os390, perlbs2000, README.vmesa, perlebcdic.
    o  The perl-mvs@perl.org list is for discussion of porting issues as well as general usage issues for all EBCDIC Perls. Send a message body of "subscribe perl-mvs" to majordomo@perl.org.
    o  AS/400 Perl information is also available on CPAN in the ports/ directory.

Acorn RISC OS

Because Acorns use ASCII with newlines ("\n") in text files as "\012" like Unix, and because Unix filename emulation is turned on by default, most simple scripts will probably work "out of the box". The native filesystem is modular, and individual filesystems are free to be case-sensitive or insensitive; they are usually case-preserving. Some native filesystems have name length limits, to which file and directory names are silently truncated. Scripts should be aware that the standard filesystem currently has a name length limit of 10 characters, with up to 77 items in a directory, but other filesystems may not impose such limitations.

Native filenames are of the form

    Filesystem#Special_Field::DiskName.$.Directory.Directory.File

where

    Special_Field is not usually present, but may contain . and $.
    Filesystem =~ m|[A-Za-z0-9_]|
    DiskName   =~ m|[A-Za-z0-9_/]|
    $ represents the root directory
    . is the path separator
    @ is the current directory (per filesystem but machine global)
    ^ is the parent directory
    Directory and File =~ m|[^\0- "\.\$\%\&:\@\\^\|\177]+|

The default filename translation is roughly "tr|/.|./|;" (demonstrated just below). Note that "ADFS::HardDisk.$.File" ne 'ADFS::HardDisk.$.File', and that the second stage of "$" interpolation in regular expressions will fall foul of the "$." variable if scripts are not careful.

Logical paths specified by system variables containing comma-separated search lists are also allowed; hence "System:Modules" is a valid filename, and the filesystem will prefix "Modules" with each section of "System$Path" until a name is made that points to an object on disk. Writing to a new file "System:Modules" would be allowed only if "System$Path" contains a single item list.
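The "tr|/.|./|;" translation quoted above is easy to demonstrate in isolation (a sketch only; the real emulation also handles the suffix directories described next):

    my $path = 'foo/bar/baz';
    (my $native = $path) =~ tr|/.|./|;   # swap "/" and "."
    print "$native\n";                   # prints "foo.bar.baz"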
The filesystem will also expand system variables in filenames if enclosed in angle brackets, so "<System$Dir>.Modules" would look for the file "$ENV{'System$Dir'} . 'Modules'". The obvious implication of this is that fully qualified filenames can start with "<>" and should be protected when "open" is used for input.

Because "." was in use as a directory separator and filenames could not be assumed to be unique after 10 characters, Acorn implemented the C compiler to strip the trailing ".c", ".h", ".s", and ".o" suffix from filenames specified in source code and store the respective files in subdirectories named after the suffix. Hence files are translated:

    foo.h           h.foo
    C:foo.h         C:h.foo       (logical path variable)
    sys/os.h        sys.h.os      (C compiler groks Unix-speak)
    10charname.c    c.10charname
    10charname.o    o.10charname
    11charname_.c   c.11charname  (assuming filesystem truncates at 10)

The Unix emulation library's translation of filenames to native assumes that this sort of translation is required, and it allows a user-defined list of known suffixes that it will transpose in this fashion. This may seem transparent, but consider that with these rules foo/bar/baz.h and foo/bar/h/baz both map to foo.bar.h.baz, and that "readdir" and "glob" cannot and do not attempt to emulate the reverse mapping (a toy version of the forward mapping is sketched at the end of this section). Other "."'s in filenames are translated to "/".

As implied above, the environment accessed through %ENV is global, and the convention is that program-specific environment variables are of the form "Program$Name". Each filesystem maintains a current directory, and the current filesystem's current directory is the global current directory. Consequently, sociable programs don't change the current directory but rely on full pathnames, and programs (and Makefiles) cannot assume that they can spawn a child process which can change the current directory without affecting its parent (and everyone else for that matter).

Because native operating system filehandles are global and are currently allocated down from 255, with 0 being a reserved value, the Unix emulation library emulates Unix filehandles. Consequently, you can't rely on passing "STDIN", "STDOUT", or "STDERR" to your children.

The desire of users to express filenames of the form "<Foo$Dir>.Bar" on the command line unquoted causes problems, too: backtick command output capture has to perform a guessing game. It assumes that a string "<[^<>]+\$[^<>]>" is a reference to an environment variable, whereas anything else involving "<" or ">" is redirection, and generally manages to be 99% right. Of course, the problem remains that scripts cannot rely on any Unix tools being available, or that any tools found have Unix-like command line arguments.

Extensions and XS are, in theory, buildable by anyone using free tools. In practice, many don't, as users of the Acorn platform are used to binary distributions. MakeMaker does run, but no available make currently copes with MakeMaker's makefiles; even if and when this should be fixed, the lack of a Unix-like shell will cause problems with makefile rules, especially lines of the form "cd sdbm && make all", and anything using quoting.

"RISC OS" is the proper name for the operating system, but the value in $^O is "riscos" (because we don't like shouting).
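The double mapping just described (foo/bar/baz.h and foo/bar/h/baz both becoming foo.bar.h.baz) can be mimicked in a few lines of Perl. This is purely illustrative and not the emulation library's actual code; the helper name is made up:

    # Sketch of the suffix-directory translation described above.
    sub unix_to_riscos {
        my ($path) = @_;
        my @parts = split m{/}, $path;
        if ($parts[-1] =~ s/\.(c|h|s|o)$//) {
            splice @parts, -1, 0, $1;    # move suffix in front of the leaf
        }
        # Remaining "."s swap with "/"s, per the default translation.
        return join '.', map { (my $p = $_) =~ tr|/.|./|; $p } @parts;
    }

    print unix_to_riscos('foo/bar/baz.h'), "\n";   # foo.bar.h.baz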
Other perls

Perl has been ported to many platforms that do not fit into any of the categories listed above. Some, such as AmigaOS, BeOS, HP MPE/iX, QNX, Plan 9, and VOS, have been well-integrated into the standard Perl source code kit. You may need to see the ports/ directory on CPAN for information, and possibly binaries, for the likes of: aos, Atari ST, lynxos, riscos, Novell Netware, Tandem Guardian, etc. (Yes, we know that some of these OSes may fall under the Unix category, but we are not a standards body.)

Some approximate operating system names and their $^O values in the "OTHER" category include:

    OS           $^O       $Config{'archname'}
    ------------------------------------------
    Amiga DOS    amigaos   m68k-amigos
    BeOS         beos
    MPE/iX       mpeix     PA-RISC1.1

See also:

    o  Amiga, README.amiga (installed as perlamiga).
    o  Be OS, README.beos
    o  HP 300 MPE/iX, README.mpeix and Mark Bixby's web page
    o  A free perl5-based PERL.NLM for Novell Netware is available in precompiled binary and source code form from as well as from CPAN.
    o  Plan 9, README.plan9

FUNCTION IMPLEMENTATIONS

Listed below are functions that are either completely unimplemented or else have been implemented differently on various platforms. Following each description will be, in parentheses, a list of platforms that the description applies to.

The list may well be incomplete, or even wrong in some places. When in doubt, consult the platform-specific README files in the Perl source distribution, and any other documentation resources accompanying a given port. Be aware, moreover, that even among Unix-ish systems there are variations.

For many functions, you can also query %Config, exported by default from the Config module. For example, to check whether the platform has the "lstat" call, check $Config{d_lstat}. See Config for a full description of available variables. (A short usage sketch follows the "binmode" entry below.)

Alphabetical Listing of Perl Functions

-X
    "-w" only inspects the read-only file attribute (FILE_ATTRIBUTE_READONLY), which determines whether the directory can be deleted, not whether it can be written to. Directories always have read and write access unless denied by discretionary access control lists (DACLs). (Win32)

    "-r", "-w", "-x", and "-o" tell whether the file is accessible, which may not reflect UIC-based file protections. (VMS)

    "-s" by name on an open file will return the space reserved on disk, rather than the current extent. "-s" on an open filehandle returns the current size. (RISC OS)

    "-R", "-W", "-X", "-O" are indistinguishable from "-r", "-w", "-x", "-o". (Win32, VMS, RISC OS)

    "-g", "-k", "-l", "-u", "-A" are not particularly meaningful. (Win32, VMS, RISC OS)

    "-p" is not particularly meaningful. (VMS, RISC OS)

    "-d" is true if passed a device spec without an explicit directory. (VMS)

    "-x" (or "-X") determine if a file ends in one of the executable suffixes. "-S" is meaningless. (Win32)

    "-x" (or "-X") determine if a file has an executable file type. (RISC OS)

alarm
    Emulated using timers that must be explicitly polled whenever Perl wants to dispatch "safe signals", and therefore cannot interrupt blocking system calls. (Win32)

atan2
    Due to issues with various CPUs, math libraries, compilers, and standards, results for "atan2()" may vary depending on any combination of the above. Perl attempts to conform to the Open Group/IEEE standards for the results returned from "atan2()", but cannot force the issue if the system Perl is run on does not allow it. (Tru64, HP-UX 10.20)

    The current version of the standards for "atan2()" is available at .

binmode
    Meaningless. (RISC OS)

    Reopens file and restores pointer; if function fails, underlying filehandle may be closed, or pointer may be in a different position. (VMS)

    The value returned by "tell" may be affected after the call, and the filehandle may be flushed. (Win32)
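As an aside before the listing continues: the %Config probe mentioned in the introduction looks like this in practice. A minimal sketch; the filename is made up:

    use Config;

    my $file = 'README';   # hypothetical
    # Prefer lstat where the platform provides it; fall back to stat.
    my @info = $Config{d_lstat} ? lstat($file) : stat($file);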
chmod
    Only good for changing "owner" read-write access; "group" and "other" bits are meaningless. (Win32)

    Only good for changing "owner" and "other" read-write access. (RISC OS)

    Access permissions are mapped onto VOS access-control list changes. (VOS)

    The actual permissions set depend on the value of "CYGWIN" in the SYSTEM environment settings. (Cygwin)

chown
    Not implemented. (Win32, Plan 9, RISC OS)

    Does nothing, but won't fail. (Win32)

    A little funky, because VOS's notion of ownership is a little funky. (VOS)

chroot
    Not implemented. (Win32, VMS, Plan 9, RISC OS, VOS, VM/ESA)

crypt
    May not be available if library or source was not provided when building perl. (Win32)

dbmclose
    Not implemented. (VMS, Plan 9, VOS)

dbmopen
    Not implemented. (VMS, Plan 9, VOS)

dump
    Not useful. (RISC OS)

    Not supported. (Cygwin, Win32)

    Invokes VMS debugger. (VMS)

exec
    Implemented via Spawn. (VM/ESA)

    Does not automatically flush output handles on some platforms. (SunOS, Solaris, HP-UX)

    Not supported. (Symbian OS)

exit
    Emulates Unix exit() (which considers "exit 1" to indicate an error) by mapping the 1 to SS$_ABORT (44). This behavior may be overridden with the pragma "use vmsish 'exit'". As with the CRTL's exit() function, "exit 0" is also mapped to an exit status of SS$_NORMAL (1); this mapping cannot be overridden. Any other argument to exit() is used directly as Perl's exit status. On VMS, unless the future POSIX_EXIT mode is enabled, the exit code should always be a valid VMS exit code and not a generic number. When the POSIX_EXIT mode is enabled, a generic number will be encoded in a method compatible with the C library _POSIX_EXIT macro so that it can be decoded by other programs, particularly ones written in C, like the GNV package. (VMS)

    "exit()" resets file pointers, which is a problem when called from a child process (created by "fork()") in "BEGIN". A workaround is to use "POSIX::_exit". (Solaris)

        exit unless $Config{archname} =~ /\bsolaris\b/;
        require POSIX and POSIX::_exit(0);

fcntl
    Not implemented. (Win32)

    Some functions available based on the version of VMS. (VMS)

flock
    Not implemented. (VMS, RISC OS, VOS) (A runtime probe is sketched after this stretch of the listing.)

fork
    Not implemented. (AmigaOS, RISC OS, VM/ESA, VMS)

    Emulated using multiple interpreters. See perlfork. (Win32)

    Does not automatically flush output handles on some platforms. (SunOS, Solaris, HP-UX)

getlogin
    Not implemented. (RISC OS)

getpgrp
    Not implemented. (Win32, VMS, RISC OS)

getppid
    Not implemented. (Win32, RISC OS)

getpriority
    Not implemented. (Win32, VMS, RISC OS, VOS, VM/ESA)

getpwnam
    Not implemented. (Win32)

    Not useful. (RISC OS)

getgrnam
    Not implemented. (Win32, VMS, RISC OS)

getnetbyname
    Not implemented. (Win32, Plan 9)

getpwuid
    Not implemented. (Win32)

    Not useful. (RISC OS)

getgrgid
    Not implemented. (Win32, VMS, RISC OS)

getnetbyaddr
    Not implemented. (Win32, Plan 9)

getprotobynumber
getservbyport

getpwent
    Not implemented. (Win32, VM/ESA)

getgrent
    Not implemented. (Win32, VMS, VM/ESA)

gethostbyname
    "gethostbyname('localhost')" does not work everywhere: you may have to use "gethostbyname('127.0.0.1')". (Irix 5)

gethostent
    Not implemented. (Win32)

getnetent
    Not implemented. (Win32, Plan 9)

getprotoent
    Not implemented. (Win32, Plan 9)

getservent
    Not implemented. (Win32, Plan 9)

sethostent
    Not implemented. (Win32, Plan 9, RISC OS)

setnetent
    Not implemented. (Win32, Plan 9, RISC OS)

setprotoent
    Not implemented. (Win32, Plan 9, RISC OS)

setservent
    Not implemented. (Plan 9, Win32, RISC OS)

endpwent
    Not implemented. (MPE/iX, VM/ESA, Win32)

endgrent
    Not implemented. (MPE/iX, RISC OS, VM/ESA, VMS, Win32)
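As promised in the "flock" entry above: calling an unimplemented builtin is fatal at run time, so a portable script can probe for it once with "eval". An illustrative sketch:

    use Fcntl qw(:flock);

    open my $fh, '<', $0 or die "can't open $0: $!";
    # eval returns the trailing 1 only if flock did not die as unimplemented.
    my $have_flock = eval { flock($fh, LOCK_SH | LOCK_NB); 1 };
    flock($fh, LOCK_UN) if $have_flock;
    print $have_flock ? "flock available\n" : "no flock here\n";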
endhostent
    Not implemented. (Win32)

endnetent
    Not implemented. (Win32, Plan 9)

endprotoent
    Not implemented. (Win32, Plan 9)

endservent
    Not implemented. (Plan 9, Win32)

getsockopt SOCKET,LEVEL,OPTNAME
    Not implemented. (Plan 9)

glob
    This operator is implemented via the File::Glob extension on most platforms. See File::Glob for portability information.

gmtime
    In theory, gmtime() is reliable from -2**63 to 2**63-1. However, because workarounds in the implementation use floating point numbers, it will become inaccurate as the time gets larger. This is a bug and will be fixed in the future.

    On VOS, time values are 32-bit quantities.

ioctl FILEHANDLE,FUNCTION,SCALAR
    Not implemented. (VMS)

    Available only for socket handles, and it does what the ioctlsocket() call in the Winsock API does. (Win32)

    Available only for socket handles. (RISC OS)

kill
    Not implemented, hence not useful for taint checking. (RISC OS)

    "kill()" doesn't have the semantics of "raise()", i.e. it doesn't send a signal to the identified process like it does on Unix platforms. Instead "kill($sig, $pid)" terminates the process identified by $pid and makes it exit immediately with exit status $sig. As in Unix, if $sig is 0 and the specified process exists, it returns true without actually terminating it. (Win32) (A sketch of this zero-signal probe follows this stretch of the listing.)

    "kill(-9, $pid)" will terminate the process specified by $pid and recursively all child processes owned by it. This is different from the Unix semantics, where the signal will be delivered to all processes in the same process group as the process specified by $pid. (Win32)

    Is not supported for process identification numbers of 0 or negative numbers. (VMS)

link
    Not implemented. (MPE/iX, RISC OS, VOS)

    Link count not updated because hard links are not quite that hard (they are sort of half-way between hard and soft links). (AmigaOS)

    Hard links are implemented on Win32 under NTFS only. They are natively supported on Windows 2000 and later. On Windows NT they are implemented using the Windows POSIX subsystem support, and the Perl process will need Administrator or Backup Operator privileges to create hard links.

    Available on 64-bit OpenVMS 8.2 and later. (VMS)

localtime
    localtime() has the same range as "gmtime", but because time zone rules change, its accuracy for historical and future times may degrade, though usually by no more than an hour.

lstat
    Not implemented. (RISC OS)

    Return values (especially for device and inode) may be bogus. (Win32)

msgctl
msgget
msgsnd
msgrcv
    Not implemented. (Win32, VMS, Plan 9, RISC OS, VOS)

open
    open to "|-" and "-|" are unsupported. (Win32, RISC OS)

    Opening a process does not automatically flush output handles on some platforms. (SunOS, Solaris, HP-UX)

readlink
    Not implemented. (Win32, VMS, RISC OS)

rename
    Can't move directories between directories on different logical volumes. (Win32)

rewinddir
    Will not cause readdir() to re-read the directory stream. The entries already read before the rewinddir() call will just be returned again from a cache buffer. (Win32)

select
    Only implemented on sockets. (Win32, VMS)

    Only reliable on sockets. (RISC OS)

    Note that the "select FILEHANDLE" form is generally portable.

semctl
semget
semop
    Not implemented. (Win32, VMS, RISC OS)

setgrent
    Not implemented. (MPE/iX, VMS, Win32, RISC OS)

setpgrp
    Not implemented. (Win32, VMS, RISC OS, VOS)

setpriority
    Not implemented. (Win32, VMS, RISC OS, VOS)

setpwent
    Not implemented. (MPE/iX, Win32, RISC OS)

setsockopt
    Not implemented. (Plan 9)

shmctl
shmget
shmread
shmwrite
    Not implemented. (Win32, VMS, RISC OS, VOS)
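As promised in the "kill" entry above, the zero-signal form is the one portable piece of its behaviour: it reports whether a process exists without signalling it. A minimal sketch (the pid is made up):

    my $pid = 1234;    # hypothetical process id
    if (kill 0, $pid) {
        print "process $pid exists\n";
    }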
sockatmark
    A relatively recent addition to socket functions; may not be implemented even on Unix platforms.

socketpair
    Not implemented. (RISC OS, VM/ESA)

    Available on OpenVOS Release 17.0 or later. (VOS)

    Available on 64-bit OpenVMS 8.2 and later. (VMS)

stat
    Platforms that do not have rdev, blksize, or blocks will return these as '', so numeric comparison or manipulation of these fields may cause 'not numeric' warnings.

    ctime not supported on UFS. (Mac OS X)

    ctime is creation time instead of inode change time. (Win32)

    device and inode are not meaningful. (Win32)

    device and inode are not necessarily reliable. (VMS)

    mtime, atime and ctime all return the last modification time. Device and inode are not necessarily reliable. (RISC OS)

    dev, rdev, blksize, and blocks are not available. inode is not meaningful and will differ between stat calls on the same file. (os2)

    Some versions of Cygwin, when doing a stat("foo") and not finding it, may then attempt to stat("foo.exe"). (Cygwin)

    On Win32 stat() needs to open the file to determine the link count and update attributes that may have been changed through hard links. Setting ${^WIN32_SLOPPY_STAT} to a true value speeds up stat() by not performing this operation. (Win32)

symlink
    Not implemented. (Win32, RISC OS)

    Implemented on 64-bit VMS 8.3. VMS requires the symbolic link to be in Unix syntax if it is intended to resolve to a valid path.

syscall
    Not implemented. (Win32, VMS, RISC OS, VOS, VM/ESA)

sysopen
    The traditional "0", "1", and "2" MODEs are implemented with different numeric values on some systems. The flags exported by "Fcntl" (O_RDONLY, O_WRONLY, O_RDWR) should work everywhere, though. (Mac OS, OS/390, VM/ESA)

system
    As an optimization, may not call the command shell specified in $ENV{PERL5SHELL}. "system(1, @args)" spawns an external process and immediately returns its process designator, without waiting for it to terminate. The return value may be used subsequently in "wait" or "waitpid" (a sketch follows the "times" entry below). Failure to spawn() a subprocess is indicated by setting $? to "255 << 8". $? is set in a way compatible with Unix (i.e. the exit status of the subprocess is obtained by "$? >> 8", as described in the documentation). (Win32)

    There is no shell to process metacharacters, and the native standard is to pass a command line terminated by "\n", "\r", or "\0" to the spawned program. Redirection such as "> foo" is performed (if at all) by the run time library of the spawned program. "system LIST" will call the Unix emulation library's "exec" emulation, which attempts to provide emulation of the stdin, stdout, and stderr in force in the parent, provided the child program uses a compatible version of the emulation library. "system SCALAR" will call the native command line directly, and no such emulation of a child Unix program will exist. Mileage will vary. (RISC OS)

    Does not automatically flush output handles on some platforms. (SunOS, Solaris, HP-UX)

    The return value is POSIX-like (shifted up by 8 bits), which only allows room for a made-up value derived from the severity bits of the native 32-bit condition code (unless overridden by "use vmsish 'status'"). If the native condition code is one that has a POSIX value encoded, the POSIX value will be decoded to extract the expected exit value. For more details see "$?" in perlvms. (VMS)

times
    "cumulative" times will be bogus. On anything other than Windows NT or Windows 2000, "system" time will be bogus, and "user" time is actually the time returned by the clock() function in the C runtime library. (Win32)

    Not useful. (RISC OS)
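The "system(1, @args)" spawn-and-reap pattern promised above fits in a few lines. A Win32-only sketch; the program name is made up:

    # Win32 only: spawn without waiting, reap later with waitpid.
    my $pid = system(1, 'notepad.exe');
    waitpid($pid, 0);
    my $exit = $? >> 8;    # exit status, Unix-compatible encoding
    print "child exited with $exit\n";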
truncate
    Not implemented. (Older versions of VMS)

    Truncation to same-or-shorter lengths only. (VOS)

    If a FILEHANDLE is supplied, it must be writable and opened in append mode (i.e., use "open(FH, '>>filename')" or "sysopen(FH, ..., O_APPEND|O_RDWR)"). If a filename is supplied, it should not be held open elsewhere. (Win32)

umask
    Returns undef where unavailable, as of version 5.005.

    "umask" works, but the correct permissions are set only when the file is finally closed. (AmigaOS)

utime
    Only the modification time is updated. (BeOS, VMS, RISC OS)

    May not behave as expected. Behavior depends on the C runtime library's implementation of utime() and the filesystem being used. The FAT filesystem typically does not support an "access time" field, and it may limit timestamps to a granularity of two seconds. (Win32)

wait
waitpid
    Can only be applied to process handles returned for processes spawned using "system(1, ...)" or pseudo processes created with "fork()". (Win32)

    Not useful. (RISC OS)

Supported Platforms

The following platforms are known to build Perl 5.12 (as of April 2010, its release date) from the standard source code distribution available at :

    Linux (x86, ARM, IA64)
    HP-UX
    AIX
    Win32
        Windows 2000
        Windows XP
        Windows Server 2003
        Windows Vista
        Windows Server 2008
        Windows 7
    Cygwin
    Solaris (x86, SPARC)
    OpenVMS
        Alpha (7.2 and later)
        I64 (8.2 and later)
    Symbian
    NetBSD
    FreeBSD
    Debian GNU/kFreeBSD
    Haiku
    Irix (6.5. What else?)
    OpenBSD
    Dragonfly BSD
    QNX Neutrino RTOS (6.5.0)
    MirOS BSD
        Caveats: time_t issues that may or may not be fixed
    Symbian (Series 60 v3, 3.2 and 5 - what else?)
    Stratus VOS / OpenVOS
    AIX

EOL Platforms (Perl 5.14)

The following platforms were supported by a previous version of Perl but have been officially removed from Perl's source code as of 5.12:

    Atari MiNT
    Apollo Domain/OS
    Apple Mac OS 8/9
    Tenon Machten

The following platforms were supported up to 5.10. They may still have worked in 5.12, but supporting code has been removed for 5.14:

    Windows 95
    Windows 98
    Windows ME
    Windows NT4

Supported Platforms (Perl 5.8)

As of July 2002 (the Perl release 5.8.0), the following platforms were able to build Perl from the standard source code distribution available at :

    AIX
    BeOS
    BSD/OS (BSDi)
    Cygwin
    DG/UX
    DOS DJGPP 1)
    DYNIX/ptx
    EPOC R5
    FreeBSD
    HI-UXMPP (Hitachi) (5.8.0 worked but we didn't know it)
    HP-UX
    IRIX
    Linux
    Mac OS Classic
    Mac OS X (Darwin)
    MPE/iX
    NetBSD
    NetWare
    NonStop-UX
    ReliantUNIX (formerly SINIX)
    OpenBSD
    OpenVMS (formerly VMS)
    Open UNIX (Unixware) (since Perl 5.8.1/5.9.0)
    OS/2
    OS/400 (using the PASE) (since Perl 5.8.1/5.9.0)
    PowerUX
    POSIX-BC (formerly BS2000)
    QNX
    Solaris
    SunOS 4
    SUPER-UX (NEC)
    Tru64 UNIX (formerly DEC OSF/1, Digital UNIX)
    UNICOS
    UNICOS/mk
    UTS
    VOS
    Win95/98/ME/2K/XP 2)
    WinCE
    z/OS (formerly OS/390)
    VM/ESA

    1) in DOS mode either the DOS or OS/2 ports can be used
    2) compilers: Borland, MinGW (GCC), VC6
The following platforms worked with the previous releases (5.6 and 5.7), but we did not manage either to fix or to test these in time for the 5.8.0 release. There is a very good chance that many of these will work fine with 5.8.0:

    BSD/OS
    DomainOS
    Hurd
    LynxOS
    MachTen
    PowerMAX
    SCO SV
    SVR4
    Unixware
    Windows 3.1

Known to be broken for 5.8.0 (but 5.6.1 and 5.7.2 can be used):

    AmigaOS

The following platforms have been known to build Perl from source in the past (5.005_03 and earlier), but we haven't been able to verify their status for the current release, either because the hardware/software platforms are rare or because we don't have an active champion on these platforms--or both. They used to work, though, so go ahead and try compiling them, and let perlbug@perl.org know of any trouble:

    3b1  A/UX  ConvexOS  CX/UX  DC/OSx  DDE SMES  DOS EMX  Dynix  EP/IX  ESIX  FPS  GENIX  Greenhills  ISC  MachTen 68k  MPC  NEWS-OS  NextSTEP  OpenSTEP  Opus  Plan 9  RISC/os  SCO ODT/OSR  Stellar  SVR2  TI1500  TitanOS  Ultrix  Unisys Dynix

The following platforms have their own source code distributions and binaries available via :

                       Perl release

    OS/400 (ILE)       5.005_02
    Tandem Guardian    5.004

The following platforms have only binaries available via :

                       Perl release

    Acorn RISCOS       5.005_02
    AOS                5.002
    LynxOS             5.004_02

Although we do suggest that you always build your own Perl from the source code, both for maximal configurability and for security, in case you are in a hurry you can check for binary distributions.

SEE ALSO

perlaix, perlamiga, perlbeos, perlbs2000, perlce, perlcygwin, perldgux, perldos, perlepoc, perlebcdic, perlfreebsd, perlhurd, perlhpux, perlirix, perlmacos, perlmacosx, perlmpeix, perlnetware, perlos2, perlos390, perlos400, perlplan9, perlqnx, perlsolaris, perltru64, perlunicode, perlvmesa, perlvms, perlvos, perlwin32, and Win32.

AUTHORS / CONTRIBUTORS

Abigail, Charles Bailey, Graham Barr, Tom Christiansen, Nicholas Clark, Thomas Dorner, Andy Dougherty, Dominic Dunlop, Neale Ferguson, David J. Fiander, Paul Green, M.J.T. Guy, Jarkko Hietaniemi, Luther Huffman, Nick Ing-Simmons, Andreas J. Koenig, Markus Laker, Andrew M. Langmead, Larry Moore, Paul Moore, Chris Nandor, Matthias Neeracher, Philip Newton, Gary Ng <71564.1743@CompuServe.COM>, Tom Phoenix, Andre Pirard, Peter Prymmer, Hugo van der Sanden, Gurusamy Sarathy, Paul J. Schinder, Michael G Schwern, Dan Sugalski, Nathan Torkington, John Malmberg

perl v5.16.3                    2013-03-04                      PERLPORT(1)
2018-02-18 21:31:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4926181435585022, "perplexity": 9537.535931906079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812259.30/warc/CC-MAIN-20180218212626-20180218232626-00606.warc.gz"}
http://mathhelpforum.com/calculus/50154-average-rate-change-calc.html
# Thread: Average Rate of Change Calc!!

1. ## Average Rate of Change Calc!!

OK, how do I solve this? I am completely lost even on how to approach it. An explanation/step-by-step method is desired. I am extremely grateful for any help!

y = 2x(sin 2x + x cos 2x) on the interval 0 ≤ x ≤ π/2.

What is the average rate of change of y with respect to x on this interval?

2. Originally Posted by painterchica16
OK, how do I solve this? I am completely lost even on how to approach it. An explanation/step-by-step method is desired. I am extremely grateful for any help!

y = 2x(sin 2x + x cos 2x) on the interval 0 ≤ x ≤ π/2.

What is the average rate of change of y with respect to x on this interval?

Let $y = f(x)$, so $f(x) = 2x( \sin 2x + x \cos 2x)$. The average rate of change between $x = 0$ and $x = \frac {\pi}2$ is given by $$\text{Average rate of change} = \frac {f \bigg( \frac {\pi}2 \bigg) - f(0)}{\frac {\pi}2 - 0}.$$ That is a general formula that you should know; look it up in your text. If they ask for "average rate of change", that's the formula they want you to use.
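Carrying that formula through for this particular function (a worked completion, not part of the original thread): since $\sin\pi = 0$ and $\cos\pi = -1$,

$$f(0) = 0, \qquad f\!\left(\frac{\pi}{2}\right) = 2\cdot\frac{\pi}{2}\left(\sin\pi + \frac{\pi}{2}\cos\pi\right) = -\frac{\pi^2}{2},$$

so the average rate of change is $\dfrac{-\pi^2/2 - 0}{\pi/2} = -\pi$.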
2017-01-23 14:04:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669532895088196, "perplexity": 222.7522733240966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00028-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/converting-implicit-function-to-explicit.693161/
# Converting implicit function to explicit

1. May 22, 2013

### JulieK

I have an implicit function, e.g. $x+\frac{\ln(y)}{y^2}=0$. Is there any mathematical trick/technique that can convert this to an explicit function (i.e. $y=f(x)$), even within certain restrictions and conditions?

2. May 22, 2013

### Staff: Mentor

I don't believe so. It would be simple to write it as x = g(y), though. What's wrong with that?
2018-02-21 22:05:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3688182532787323, "perplexity": 2391.9179999358466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813803.28/warc/CC-MAIN-20180221202619-20180221222619-00592.warc.gz"}
http://math.stackexchange.com/questions/245766/differentiable-mapping-between-surfaces
# differentiable mapping between surfaces

Let $M$ and $N$ be surfaces, and $f: M\rightarrow N$ a differentiable and bijective map.

(a) Can you ensure that $f$ is a diffeomorphism?

(b) Suppose further that the map $T_{p}f:T_{p}M\rightarrow T_{f(p)}N$ is an isomorphism for each $p\in M$; can you ensure that $f$ is a diffeomorphism?

For (a) I think the statement is false, but when I try to construct an example, the differentiability of the inverse is always hard to rule out.

- (a) is false for curves, just take $M$ and $N$ to be the real line. The product of this map with the identity on the real line gives you an example for surfaces. For (b) look at the inverse function theorem. – Ryan Budney Nov 27 '12 at 16:33

For (a) let $M = N = \mathbb{R}^2$ with standard coordinates $(x,y)$ for $M$ and $(u,v)$ for $N$. Let $f$ be: $$f(x,y) = (x^3,y)$$ Clearly $f$ is bijective and differentiable. But the inverse map $$g(u,v) = (\sqrt[3]{u},v)$$ is not differentiable when $u = 0$. For (b), by the inverse function theorem $f$ is a local diffeomorphism. Then since it is bijective, it is a diffeomorphism.
2015-11-28 18:31:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569007158279419, "perplexity": 109.91595915547133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453656.76/warc/CC-MAIN-20151124205413-00064-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.danieroux.com/two-tree-facts/
# Two tree facts Two tree facts, one embarrassing and one surprising: What gives a tree mass, what is it made of? It is not the nutrients from the ground. Otherwise, there would be a hole under every tree. This was my first answer. I should and did know the answer to this, but could not get there. Then: There is no one theory on how water travels from the roots to the leaves. There are many, and disagreement on which one is right. This one I assumed I had an answer for but did not. Since no one fully does. In two tree facts, I learned how much I assume.
2023-02-05 10:03:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8443657755851746, "perplexity": 671.1960086374064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00128.warc.gz"}
https://ggplot2.tidyverse.org/reference/geom_contour.html
ggplot2 cannot draw true 3d surfaces, but you can use geom_contour() and geom_tile() to visualise 3d surfaces in 2d. To be a valid surface, the data must contain only a single row for each unique combination of the variables mapped to the x and y aesthetics. Contouring tends to work best when x and y form a (roughly) evenly spaced grid. If your data is not evenly spaced, you may want to interpolate to a grid before visualising.

geom_contour(mapping = NULL, data = NULL, stat = "contour",
  position = "identity", ..., lineend = "butt", linejoin = "round",
  linemitre = 10, na.rm = FALSE, show.legend = NA, inherit.aes = TRUE)

stat_contour(mapping = NULL, data = NULL, geom = "contour",
  position = "identity", ..., na.rm = FALSE, show.legend = NA,
  inherit.aes = TRUE)

## Arguments

mapping: Set of aesthetic mappings created by aes() or aes_(). If specified and inherit.aes = TRUE (the default), it is combined with the default mapping at the top level of the plot. You must supply mapping if there is no plot mapping.

data: The data to be displayed in this layer. There are three options: If NULL, the default, the data is inherited from the plot data as specified in the call to ggplot(). A data.frame, or other object, will override the plot data. All objects will be fortified to produce a data frame. See fortify() for which variables will be created. A function will be called with a single argument, the plot data. The return value must be a data.frame, and will be used as the layer data. A function can be created from a formula (e.g. ~ head(.x, 10)).

stat: The statistical transformation to use on the data for this layer, as a string.

position: Position adjustment, either as a string, or the result of a call to a position adjustment function.

...: Other arguments passed on to layer(). These are often aesthetics, used to set an aesthetic to a fixed value, like colour = "red" or size = 3. They may also be parameters to the paired geom/stat.

lineend: Line end style (round, butt, square).

linejoin: Line join style (round, mitre, bevel).

linemitre: Line mitre limit (number greater than 1).

na.rm: If FALSE, the default, missing values are removed with a warning. If TRUE, missing values are silently removed.

show.legend: logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes. It can also be a named logical vector to finely select the aesthetics to display.

inherit.aes: If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn't inherit behaviour from the default plot specification, e.g. borders().

geom: The geometric object to use to display the data.

## Aesthetics

geom_contour() understands the following aesthetics (required aesthetics are in bold):

• x
• y
• alpha
• colour
• group
• linetype
• size
• weight

Learn more about setting these aesthetics in vignette("ggplot2-specs").

stat_contour() understands the following aesthetics (required aesthetics are in bold):

• x
• y
• z
• group
• order

Learn more about setting these aesthetics in vignette("ggplot2-specs").
## Computed variables

level: height of contour
nlevel: height of contour, scaled to maximum of 1
piece: contour piece (an integer)

geom_density_2d(): 2d density contours

## Examples

# Basic plot
v <- ggplot(faithfuld, aes(waiting, eruptions, z = density))
v + geom_contour()

# Or compute from raw data
ggplot(faithful, aes(waiting, eruptions)) + geom_density_2d()

# Setting bins creates evenly spaced contours in the range of the data
v + geom_contour(bins = 2)
v + geom_contour(bins = 10)

# Setting binwidth does the same thing, parameterised by the distance
# between contours
v + geom_contour(binwidth = 0.01)
v + geom_contour(binwidth = 0.001)

# Other parameters
v + geom_contour(aes(colour = stat(level)))
v + geom_contour(colour = "red")
v + geom_raster(aes(fill = density)) + geom_contour(colour = "white")
2020-01-27 07:40:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2082665115594864, "perplexity": 5470.715224196419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694908.82/warc/CC-MAIN-20200127051112-20200127081112-00499.warc.gz"}
http://ufoafterblank.wikia.com/wiki/AL:Time
Times for strategic activities are measured in man-days on this wiki. The higher level a scientist/technician is, the more man-days he or she contributes per Mars day. Difficulty and trainings will further affect the contribution. The given numbers assume Normal difficulty with no relevant trainings.

    Level              1     2     3     4     5     6     7     8     9     10    11    12    13    14    15    16    17    18    19    20
    Man-days per day   1.33  1.43  1.53  1.64  1.75  1.87  2.00  2.15  2.30  2.46  2.63  2.81  3.01  3.22  3.45  3.68  3.94  4.22  4.51  4.83

Actual game data either presents the time in days (research and production) or in hours (events and buildings). A Mars day has 24 hours and 39 minutes (1479 minutes per day). Thus, event times and building times rarely translate to exact days the way production and research times do. For example, an event that takes 48 hours to happen takes 1 Mars day plus 23 hours, rather than 2 days. For buildings, the time displayed in game is inaccurate: it calculates the required work using 24 hours instead of 24.65 hours, so buildings usually finish a little earlier than the initial estimate. This wiki uses the real time, rather than the displayed time.

## Calculation

The numbers in the above table are averaged from raw data and rounded to two digits (with slight adjustment). The exact formula, for the moment, is $1.2481\,e^{0.0677L}$, where $L$ = technician/scientist level. This is deduced from game data; the actual in-game formula is unknown. The data is acquired from the minutes needed to set up a production line, and the result is cross-checked against the total production time estimation and base building time, and tested against station building time and research time.

    Minutes required by level
    Man-days    2     3     4     5     6     7     8     9     10    11    12    13    14    15    16
    0.75        776   725   678   633   592   553   517   483   451   422   394   368   344   322   301
    1.5         1552  1451  1356  1267  1184  1107  1034  967   903   844   789   737   689   644   602
    2.25        2329  2176  2034  1901  1776  1660  1552  1450  1355  1266  1184  1106  1034  966   903
    3           3105  2902  2712  2535  2369  2214  2069  1934  1807  1689  1578  1475  1378  1288  1204
    3.75        3882  3628  3390  3168  2961  2767  2586  2417  2259  2111  1973  1844  1723  1610  1505
    4.5         4658  4353  4068  3802  3553  3321  3104  2901  2711  2533  2368  2213  2068  1933  1806

$$Man~days~per~day = \frac{1479~(Minutes~per~day)}{\frac{Minutes~required}{Man~days~done}}$$
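As a quick worked check of the two tables against each other (our own arithmetic, not from the wiki): at level 2 the formula gives $1.2481\,e^{0.0677\cdot 2} \approx 1.43$ man-days per day, so a 0.75 man-day job needs

$$\frac{0.75}{1.43} \times 1479 \approx 776 \text{ minutes},$$

which matches the first entry of the "Minutes required" table.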
2017-07-25 20:58:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38318052887916565, "perplexity": 330.5086785554724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425381.3/warc/CC-MAIN-20170725202416-20170725222416-00289.warc.gz"}
https://www.conftool.com/eosam2022/index.php?page=browseSessions&form_session=412
# Conference Agenda

Topical Meetings and Sessions:

TOM 1 - Silicon Photonics and Guided-Wave Optics
TOM 2 - Computational, Adaptive and Freeform Optics
TOM 3 - Optical System Design, Tolerancing and Manufacturing
TOM 4 - Bio-Medical Optics
TOM 5 - Resonant Nanophotonics
TOM 6 - Optical Materials: crystals, thin films, organic molecules & polymers, syntheses, characterization and applications
TOM 7 - Thermal radiation and energy management
TOM 8 - Non-linear and Quantum Optics
TOM 9 - Opto-electronic Nanotechnologies and Complex Systems
TOM 10 - Frontiers in Optical Metrology
TOM 11 - Tapered optical fibers, from fundamental to applications
TOM 12 - Optofluidics
TOM 13 - Advances and Applications of Optics and Photonics
EU Project Session
Early Stage Researcher Session

The rest of the TOM sessions, EU project session, tutorials, and Early Stage Researcher session will be updated soon. Please note that all times are shown in the time zone of the conference.

Session Overview

Session TOM13 S05: Advances and Applications of Optics and Photonics
Time: Wednesday, 14/Sept/2022: 2:30pm - 4:00pm
Location: Room 1

Presentations

2:30pm - 2:45pm, ID: 141 / TOM13 S05: 1, TOM 13 Advances and Applications of Optics and Photonics

Spectral scaling transformations of nonstationary light

Jyrki Laatikainen (1), Matias Koivurova (2), Jari Turunen (1), Tero Setälä (1), Ari T. Friberg (1)
(1) University of Eastern Finland, Finland; (2) Tampere University, Finland

We present optical systems, which transform isodiffracting nonstationary beams into fields obeying either cross-spectral purity or spectral invariance. The designs are hybrid refractive-diffractive imaging systems, which are able to perform the desired transformations over a broad spectral bandwidth and irrespective of the state of spatial coherence of the input beam.

2:45pm - 3:00pm, ID: 206 / TOM13 S05: 2, TOM 13 Advances and Applications of Optics and Photonics

Cross-spectral purity for nonstationary optical fields

Meilan Luo (1,2), Jyrki Laatikainen (1), Atri Halder (1), Matias Koivurova (3), Tero Setälä (1), Jari Turunen (1), Ari T. Friberg (1)
(1) University of Eastern Finland, Finland; (2) Hunan Normal University, China; (3) Tampere University, Finland

We derive an extended reduction formula for the time-integrated coherence function starting from the cross-spectral purity conditions for nonstationary optical fields. Two types of separable cross-spectral density functions that ensure cross-spectral purity are introduced and their implications are discussed.

3:00pm - 3:15pm, ID: 137 / TOM13 S05: 3, TOM 13 Advances and Applications of Optics and Photonics

Elliptical cladding elements in negative curvature hollow-core fiber for low-loss terahertz guidance

Asfandyar Khan, Mustafa Ordu
UNAM-Institute of Materials Science and Nanotechnology, Bilkent University, Turkey

Optical losses of ellipse-nested elliptical negative curvature hollow-core fibers are numerically presented for the terahertz region. Primary design parameters of all fibers were iteratively optimized, and confinement and total losses as low as 2.79×10⁻⁵ dB/m and 0.015 dB/m were achieved at 0.55 THz, respectively. Single-mode light guidance of the proposed design was investigated and a higher-order mode extinction ratio of around 3000 was calculated.
3:15pm - 3:30pm, ID: 177 / TOM13 S05: 4, TOM 13 Advances and Applications of Optics and Photonics

3D printed hollow-core polymer optical fiber with six-pointed star cladding for light guidance in the near-IR regime

Mahmudur Rahman (1), Ceren Dilsiz (1,2), Mustafa Ordu (1)
(1) UNAM-Institute of Materials Science and Nanotechnology, Bilkent University, Turkey; (2) Department of Electrical and Electronics Engineering, Ankara University, Gölbaşı, Ankara 06830, Turkey

Polymer optical fibers have great significance due to their wide range of applications, such as data transmission, sensing, and illumination. In this study, we proposed a novel hollow-core polymer optical fiber fabricated by a commercially available 3D printer with guiding properties in the near-infrared region. The fiber was drawn conventionally using a thermal drawing tower from a 3D printed preform. Light is guided through the air core surrounded by six-pointed star cladding tubes. Measured transmission losses as low as 0.33 dB/cm were achieved in the near-infrared region.

3:30pm - 3:45pm, ID: 182 / TOM13 S05: 5, TOM 13 Advances and Applications of Optics and Photonics

Hybrid cladding designs in chalcogenide negative curvature hollow-core fibers

Asfandyar Khan, Mustafa Ordu
UNAM-Institute of Materials Science and Nanotechnology, Bilkent University, Cankaya, Ankara 06800, Turkey

Ellipse-nested tubular As2Se3 chalcogenide negative curvature hollow-core fibers are numerically presented for the mid-infrared region. Confinement and total losses are calculated as low as 0.38 dB/km and 4.66 dB/km at 10.6 µm, respectively. A further investigation of the fundamental and higher-order modes showed that the proposed fiber favors single-mode guidance.

3:45pm - 4:00pm, ID: 219 / TOM13 S05: 6, TOM 13 Advances and Applications of Optics and Photonics

A $\phi$-Shaped Bending-Optical Fiber Sensor for the Measurement of Radial Variation in Cylindrical Structures

Victor Henrique Rodrigues Cardoso (1,4), Paulo Caldas (4,5), M. Thereza R. Giraldi (2), Orlando Frazão (3,4), João Weyl Costa (1), José L. Santos (3,4)
(1) Federal University of Pará, Applied Electromagnetism Laboratory, Rua Augusto Corrêa, 01, 66075-110, Belém, Pará, Brazil; (2) Military Institute of Engineering, Laboratory of Photonics, Praça Gen. Tibúrcio, 80, 22290-270, Rio de Janeiro, Brazil; (3) Department of Physics and Astronomy, Faculty of Sciences of University of Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal; (4) Institute for Systems and Computer Engineering, Technology and Science, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal; (5) Polytechnic Institute of Viana do Castelo, Rua Escola Industrial e Comercial de Nun'Álvares, 4900-347, Viana do Castelo, Portugal

This work presents preliminary results of the $\phi$-shaped sensor mounted on a support designed by additive manufacturing (AM). This sensor is proposed and experimentally demonstrated to measure the radial variation of cylindrical structures. The sensor is easy to fabricate. The support was developed to work using the principle of leverage. The sensing head is curled between two points so that the dimension associated with the macro bend is changed when there is a radial variation. The results indicate that the proposed sensor structure can monitor radial variation in applications such as pipelines and trees.
2022-08-18 17:34:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21638867259025574, "perplexity": 14518.393930418519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573242.55/warc/CC-MAIN-20220818154820-20220818184820-00348.warc.gz"}
https://techwhiff.com/learn/cutter-enterprises-purchased-equipment-for-60000/353904
##### A system is given by y(n+1) = a·y(n) + b·c(n), where a and b are unknown. When y(0) = 1, c(n) = 0 for n...

A system is given by $y(n+1) = a\,y(n) + b\,c(n)$, where $a$ and $b$ are unknown. When $y(0) = 1$, $c(n) = 0$ for $n < 0$ and $c(n) = 1$ for $n \ge 0$, it is known that $y(1) = 0.5$, $y(2) = 1.25$, $y(3) = 1.625$, $y(4) = 1.8125$, and $y(5) = 1.90625$. Use direct adaptive control to generate $c(3)$, $c(4)$, and $c(5)$ for tracking the reference $r(n$...

##### On January 1, 2019, Concord issued 10-year, $300,000 face value, 6% bonds at par. Each $1,000...

On January 1, 2019, Concord issued 10-year, $300,000 face value, 6% bonds at par. Each $1,000 bond is convertible into 30 shares of Concord $2 par value common stock. The company has had 10,000 shares of common stock (and no preferred stock) outstanding throughout its life. None of the bonds have be...
2022-11-30 20:25:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5208823680877686, "perplexity": 6644.472149244672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00550.warc.gz"}
https://math.stackexchange.com/questions/3625955/conditional-expectation-of-a-function-of-two-random-variables
Conditional expectation of a function of two random variables.

I have that $(X, Y)$ are uniformly distributed in the triangle defined by the vertices $(0, 0)$, $(1, 0)$, and $(0, 1)$. I am trying to find the following expectation value: $$E\left( (X-Y)^2 \mid X \right)$$ I am able to get the joint density, marginal densities, and conditional densities fairly easily. However, I am a little confused about how to go about solving this.

• As an alternative, use the second approach from here. If $(X, Y) = (U, (1 - U)V)$, then $$f_{U, V}(u, v) = 2 (1 - u) [0 < u < 1 \land 0 < v < 1], \\ \mathbb E(Y^p \mid X) = (1 - U)^p \, \mathbb E(V^p) = \frac {(1 - X)^p} {p + 1}.$$ – Maxim Apr 17 at 0:16

$$\mathbb E\left( (X-Y)^2 \mid X \right)= \mathbb E\left( X^2+Y^2 -2XY \mid X \right)=X^2+E(Y^2\mid X)-2X\,E(Y\mid X),$$ since $E(Yf(X)\mid X)=f(X)E(Y\mid X)$ (a basic property of conditional expectation).

• For $E(Y|X)$, since $0 \leq y \leq 1 - x$, does the integration remain $\int_{0}^{1-x}{y^2 f_{Y|X}(y|x)\, dy}$? Or must the bounds be changed? I know that if we let $Z$ be a random variable such that $Z = Y^2$, then the integration would be over $0 \leq Z \leq (1-x)^2$, but the density would be different. Would you be able to clarify this? – Eoin S Apr 15 at 23:18
• $E(Y^k|X=x)=\int_0^{1-x} y^k f(y|x)\, dy$. You should consider that $f(y|x)$ is calculated from $f(x,y)$, so $y<1-x$. – Masoud Apr 16 at 0:56
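For completeness, here is a worked finish (added here; it is consistent with Maxim's comment, which gives $E(Y^p \mid X) = (1-X)^p/(p+1)$): given $X = x$, the conditional law of $Y$ is uniform on $[0, 1-x]$, so

$$E(Y \mid X) = \frac{1-X}{2}, \qquad E(Y^2 \mid X) = \frac{(1-X)^2}{3},$$

and therefore

$$E\left((X-Y)^2 \mid X\right) = X^2 - X(1-X) + \frac{(1-X)^2}{3}.$$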
2020-08-12 13:24:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9247033596038818, "perplexity": 134.75472716248635}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738892.21/warc/CC-MAIN-20200812112531-20200812142531-00036.warc.gz"}
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/Dangerous.20instance.20vs.20elaboration.20problems.html
## Stream: new members ### Topic: Dangerous instance vs elaboration problems #### Frédéric Dupuis (Sep 19 2020 at 03:28): In #4057 (which defines inner products based on is_R_or_C), the main outstanding issue is that the linter reports a dangerous instance in the definition: class inner_product_space (𝕜 : Type*) (α : Type*) [nondiscrete_normed_field 𝕜] [normed_algebra ℝ 𝕜] [is_R_or_C 𝕜] extends normed_group α, normed_space 𝕜 α, has_inner 𝕜 α := sorry which presumably comes from the fact that normed_group α doesn't have 𝕜 as a parameter. I gather that the usual way to fix this is to move normed_group α into the assumptions, as in class inner_product_space (𝕜 : Type*) (α : Type*) [nondiscrete_normed_field 𝕜] [normed_algebra ℝ 𝕜] [is_R_or_C 𝕜] [normed_group α] extends normed_space 𝕜 α, has_inner 𝕜 α := sorry However, when I do that, I run into some very annoying elaboration issues where I have to constantly spoonfeed it 𝕜 and/or α in lemmas that rewrite norms in terms of inner products, even though the relevant instance is directly present in the context. So: just how dangerous is an instance like this? Would it still send the search algorithm into a wild goose chase, even though the only possibilities for 𝕜 should always remain only ℂ and ℝ? #### Heather Macbeth (Sep 19 2020 at 15:58): I think you could have asked this in #maths! :) @Yury G. Kudryashov @Sebastien Gouezel ? Last updated: May 13 2021 at 06:15 UTC
2021-05-13 07:02:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2901901304721832, "perplexity": 6005.024881213285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00633.warc.gz"}
https://eprint.iacr.org/2020/1341
## Cryptology ePrint Archive: Report 2020/1341

Zero-Communication Reductions

Varun Narayanan and Manoj Prabhakaran and Vinod M. Prabhakaran

Abstract: We introduce a new primitive in information-theoretic cryptography, namely zero-communication reductions (ZCR), with different levels of security. We relate ZCR to several other important primitives, and obtain new results on upper and lower bounds. In particular, we obtain new upper bounds for the PSM, CDS and OT complexity of functions, which are exponential in the information complexity of the functions. These upper bounds complement the results of Beimel et al. (2014), which broke the circuit-complexity barrier for "high complexity" functions; our results break the barrier of input size for "low complexity" functions. We also show that lower bounds on secure ZCR can be used to establish lower bounds for OT-complexity. We recover the known (linear) lower bounds on OT-complexity by Beimel and Malkin (2004) via this new route. We also formulate the lower bound problem for secure ZCR in purely linear-algebraic terms, by defining the invertible rank of a matrix. We present an Invertible Rank Conjecture, proving which will establish super-linear lower bounds for OT-complexity (and, if accompanied by an explicit construction, will provide explicit functions with super-linear circuit lower bounds).

Category / Keywords: foundations / secure multiparty computation, OT complexity, PSM, CDS, zero-communication protocols, oblivious transfer, zero-communication, invertible rank, MPC lower bounds

Original Publication (with minor differences): TCC 2020

Date: received 25 Oct 2020, last revised 13 Nov 2020

Contact author: varun narayanan at tifr res in,vinodmp@tifr res in,mp@cse iitb ac in

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2020/1341

[ Cryptology ePrint archive ]
2020-11-23 23:01:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6340000629425049, "perplexity": 5922.687913691327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141168074.3/warc/CC-MAIN-20201123211528-20201124001528-00438.warc.gz"}
https://nodus.ligo.caltech.edu:8081/40m/?&sort=ID&new_entries=1
40m Log, Page 330 of 330

16595 | Wed Jan 19 12:50:10 2022 | Anchal | Summary | BHD | Part IV of BHR upgrade - SR2 OSEM tuning progress.

It was indeed the issue of the top OSEM plate not being in the right place horizontally, but the issue was non-trivial. I believe that because of the wedge in thick optics, there is a YAW offset in the optic in the free-hanging position. I had to readjust the OSEM plate 4 times to be able to get the full dark-to-bright range in both upper OSEMs. After doing that, I tuned the four OSEMs somewhat near the halfway point, and once I was sure I was inside the sensitive region in all face OSEMs, I switched on POS, PIT, and YAW damping. Then I was able to finely tune the positions of both upper OSEMs. However, on reaching the lower right OSEM, I found the same issue again. I had to stop to go to the 40m meeting; I'll continue this work in the afternoon. OSEM plate adjustment in the horizontal direction, particularly for thick optics, is required before transporting them. I achieved the best position by turning the OSEM 90 degrees and using the OSEM LED/PD plates to determine the position. This was the final successful trial I did in adjusting the plate position horizontally.

16596 | Wed Jan 19 12:56:52 2022 | Paco | Summary | BHD | AS4 OSEMs installation

[Paco, Tega, Anchal] Today, we started work on the AS4 SOS by checking the OSEM and cable. Swapping the connection preserved the failure (no counts), so we swapped the long OSEM for a short one that we knew was working instead, and this solved the issue. We proceeded to swap in a "yellow label" long OSEM and then noticed the top plate had issues with the OSEM threads. We took out the bolt and inspected its thread, and even borrowed the screw from the PR2 plate, but saw the same effect. Even using a silver-plated setscrew such as the SD OSEM one resulted in trouble... Then, we decided to stop trying weird things, and took our sweet time to remove the UL, UR OSEMs, top earthquake stops, and top plate carefully in-situ. Then, we continued the surgery by installing a new top plate which we borrowed from the clean room (the only difference is that the OSEM aperture barrels are teflon (?) rather than stainless). The operation was a success, and we moved on to OSEM installation. After reaching a good place with the OSEM installation, where most sensors were at the 50% brightness level and we were happy with the damping action (!!), we fixed all EQ stops and proceeded to push the SOS to its nominal placement. Then, upon releasing the EQ stops, we found out that the sensor readings were shifted.

16597 | Wed Jan 19 14:41:23 2022 | Koji | Update | BHD | Suspension Status

Is this the correct status? Please directly update this entry.
LO1   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS4   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2   [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]

Last updated: Thu Jan 20 17:16:33 2022

16598 | Wed Jan 19 16:22:48 2022 | Anchal | Update | BHD | PR2 transmission calculation updated

I have further updated my calculation. Please find the results in the attached pdf. The following is a description of the calculations done:

### Arm cavity reflection:

Reflection from the arm cavity is calculated with the simple FP cavity reflection formula, absorbing all round-trip cavity scattering losses (between 50 ppm and 200 ppm) into the ETM transmission loss. So the effective reflection of the ETM is calculated as $r_{\rm ETMeff} = \sqrt{1 - T_{\rm ETM} - L_{\rm RT}}$ $r_{\rm arm} = \frac{-r_{\rm ITM} + r_{\rm ETMeff}e^{2i \omega L/c}}{1 - r_{\rm ITM} r_{\rm ETMeff}e^{2 i \omega L/c}}$ The magnitude and phase of this reflection are plotted on page 1 with respect to different round-trip losses and the deviation of the cavity length from resonance. Note that the arm round-trip loss does not affect the sign of the reflection from the cavity, at least in the range of values taken here.

### PRC Gain

The Michelson in PRFPMI is assumed to be perfectly aligned, so that one end of the PRC cavity is taken as the arm cavity reflection calculated above at resonance. The other end of the cavity is modeled as a single mirror with the effective transmission of the PRM plus 2 times PR2 and 2 times PR3. The effective reflectivity of the PRM is calculated as: $r_{\rm PRMeff} = \sqrt{1 - T_{\rm PRM} - 2T_{\rm PR2} - 2T_{\rm PR3}}$ $t_{\rm PRM} = \sqrt{T_{\rm PRM}}$ Note that the field transmission of the PRM is calculated with the original PRM power transmission value, so that the PR2, PR3 transmission losses do not increase the field transmission of the PRM in our calculations. Then the field gain inside the PRC is calculated using: $g = \frac{t_{\rm PRM}}{1 - r_{\rm PRMeff} r_{\rm arm}e^{2 i \omega L/c}}$ From this, the power recycling cavity gain is calculated as: $G_{\rm PRC} = |g|^2$ The variation of the PRC gain is shown on page 2 with respect to arm cavity round-trip losses and PR2 transmission. Note that a gain value of 40 is calculated for any PR2 transmission below 1000 ppm. The black vertical lines show the optics whose transmission was measured. If V6-704 is used, the PRC gain would vary between 15 and 10 depending on the arm cavity losses. With the pre-2010 ITM, the PRC gain would vary between 30 and 15.

### LO Power

LO power when PRFPMI is locked is calculated by assuming 1 W of input power to the IMC. The IMC is assumed to let pass 10% of the power ($L_{\rm IMC}=0.1$). This power is then multiplied by the PRC gain and transmitted through PR2 to calculate the LO power. $P_{\rm LO, PRFPMI} = P_{\rm in} L_{\rm IMC}G_{\rm PRC}T_{\rm PR2}$ Page 3 shows the result of this calculation (a numerical sketch of the same chain of formulas follows below). Note that for V6-704, the LO power would be between 35 mW and 15 mW; for the pre-2010 ITM, it would be between 15 mW and 5 mW, depending on the arm cavity losses.
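Not part of the original elog: a minimal numeric sketch of the locked-state formula chain above, on resonance (e^{2iωL/c} = 1). The transmission and loss values below are placeholders chosen for illustration (only 1/T_PRM = 17.7 is quoted later in this thread); swap in the measured numbers to reproduce the plots.

import numpy as np

# Placeholder parameters (assumed, not from the elog):
T_ITM = 0.0138       # ITM power transmission
T_ETM = 13.7e-6      # ETM power transmission
L_RT  = 100e-6       # arm round-trip loss, folded into the ETM
T_PRM = 1 / 17.7     # from "1/T_PRM = 17.7" quoted below
T_PR2 = 600e-6       # candidate PR2 transmission
T_PR3 = 600e-6

r_ITM    = np.sqrt(1 - T_ITM)
r_ETMeff = np.sqrt(1 - T_ETM - L_RT)                     # effective ETM reflectivity
r_arm    = (-r_ITM + r_ETMeff) / (1 - r_ITM * r_ETMeff)  # arm reflection on resonance

r_PRMeff = np.sqrt(1 - T_PRM - 2*T_PR2 - 2*T_PR3)        # PR2/PR3 losses folded into PRM
t_PRM    = np.sqrt(T_PRM)
G_PRC    = abs(t_PRM / (1 - r_PRMeff * r_arm))**2        # power recycling gain |g|^2

P_in, L_IMC = 1.0, 0.1                                   # 1 W in, IMC passes 10%
P_LO = P_in * L_IMC * G_PRC * T_PR2                      # LO power when locked
print(f"G_PRC = {G_PRC:.1f}, P_LO = {P_LO*1e3:.2f} mW")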
The power available during alignment is simply given by: $P_{\rm LO, align, PRM} = P_{\rm in} L_{\rm IMC} T_{\rm PRM} T_{\rm PR2}$ $P_{\rm LO, align, no PRM} = P_{\rm in} L_{\rm IMC} T_{\rm PR2}$ If we remove the PRM from the input path, we would have sufficient light to work with for both relevant optics. I have attached the notebook used to do these calculations. Please let me know if you find any mistake in this calculation.

Attachment 1: PR2transmissionSelectionAnalysis.pdf
Attachment 2: PR2_Trans_Calc.ipynb.zip

16599 | Wed Jan 19 18:15:34 2022 | Yehonathan | Update | BHD | AS1 resurrection

Today I suspended AS1. Anchal helped me with the initial hanging of the optic. Attachments 1 and 2 show the roll balance and side magnet height. Attachment 3 shows the motion spectra. The major peaks are at 668 mHz, 821 mHz, and 985 mHz. For some reason, I was not able to balance the pitch with 2 counterweights as I did with the rest of the thin optics (and AS1 before). Inserting the weights all the way was not enough to bring the reflection up to the iris aperture that was used for preliminary balancing. I was able to do so with a single counterweight (attachment 4). I'm afraid something is wrong here but couldn't find anything obvious. It is also worth noting that the yaw resonance of 668 mHz is different from the 755 mHz we got in all the other optics. Maybe one or more of the wires are not clamped correctly on the side blocks? The OSEMs were pushed into the OSEM plate, and the plates were adjusted such that the magnets are at the center of the face OSEMs. The wires were clamped and cut from the winches. The SOS is ready for installation. I uploaded some pictures of the PEEK EQ stops, both on the thick and thin optics, to the Google Photos account.

Attachment 1: AS1_roll_balance2.png
Attachment 2: AS1_magnet_heigh2.png
Attachment 3: FreeSwingingSpectra.pdf
Attachment 4: IMG_6324.JPG

16600 | Wed Jan 19 21:39:22 2022 | Tega | Update | SUS | Temporary watchdog

After some work on the reference database file, we now have a template for a temporary watchdog implementation for LO1, located at "/cvs/cds/caltech/target/c1susaux/C1_SUS-AUX_LO1.db". Basically, what I have done is swap the EPICS asyn analog input readout for the COIL and OSEM to accessible medm channels, then write out watchdog enable/disable to the coil filter SW2 switch. Everything else in the file remains the same. I am worried about some of the conversions, but the only way to know more is to see the output on the medm screen. To test, I restarted c1su2, but this did not make the LO1 database available, so I am guessing that we also need to restart c1sus, which can be done tomorrow.

16601 | Thu Jan 20 00:26:50 2022 | Koji | Update | SUS | Temporary watchdog

As the new db is made for c1susaux, 1) it needs to be configured to be read by c1susaux, 2) it requires restarting c1susaux, 3) it needs to be recorded by FB, 4) and restarting FB. (^- Maybe not a super exact procedure, but conceptually like this.)

16602 | Thu Jan 20 01:48:02 2022 | Koji | Update | BHD | PR2 transmission calculation updated

The IMC is not that lossy. The IMC output is supposed to be ~1 W. The critical coupling condition is G_PRC = 1/T_PRM = 17.7. If we really have L_arm = 50 ppm, we will be very close to critical coupling. Maybe we are OK if we have such a condition, as our testing time would be much longer in PRMI than PRFPMI in the first phase. If the arm loss turned out to be higher, we'll be saved by falling to undercoupling. When the PRC is close to critical coupling (like the 50 ppm case), we roughly have T_prc × 2 and T_arm almost equal.
So each beam will have 1/3 of the input power, i.e. ~300 mW. That's probably too much even for the two OMCs (i.e. 4 DCPDs). That's OK. We can reduce the input power by a factor of 3~5.

Quote (### LO Power): LO power when PRFPMI is locked is calculated by assuming 1 W of input power to the IMC. The IMC is assumed to let pass 10% of the power ($L_{\rm IMC}=0.1$).

16603 | Thu Jan 20 12:10:51 2022 | Yehonathan | Update | BHD | AS1 resurrection

I was wondering whether I should take AS1 down to redo the wire clamping on the side blocks. I decided to take the OpLev spectrum again to be more certain. Attachments 1, 2, and 3 show 3 spectra taken at different times. They all show the same peaks: 744 mHz, 810 mHz, 1 Hz. So I think something went wrong with yesterday's measurement. I will not take AS1 down for now. We still need to apply some glue to the counterweight.

Attachment 1: FreeSwingingSpectra.pdf
Attachment 2: FreeSwingingSpectra_div_20mV.pdf
Attachment 3: FreeSwingingSpectra_div_50mV.pdf

16604 | Thu Jan 20 16:42:55 2022 | Paco | Update | BHD | AS4 OSEMs installation - part 2

[Paco] It turns out the shifting was likely due to the table level. Because I didn't take care the first time to "zero" the level of the table as I tuned the OSEMs, the installation was b o g u s. So today I took time to:

a) Shift AS4 close to the center of the table.
b) Use the clean level tool to pick a plane of reference. To do this, I iteratively placed two counterweights (from the ETMX flow bench) in two locations on the breadboard such that I nominally balanced the table under this configuration to some reference plane z0. The counterweight placement is of course temporary, and as soon as we make further changes, such as final placement of the AS4 SOS or installation of AS1, their positions will need to change to recover z = z0.
c) Install OSEMs until I was happy with the damping.

** Here, I noticed the new suspension screens had been misconfigured (probably c1sus2 rebooted and we don't have any BURT), so I quickly restored the input and output matrices. SUSPENSION STATUS UPDATED HERE

16605 | Thu Jan 20 17:03:36 2022 | Anchal | Summary | BHD | Part IV of BHR upgrade - SR2 OSEM tuned.

The main issue with the SR2 OSEMs, now that I think of it, was that the BS table was very inclined due to the multiple things we removed (including counterweights). The first thing I did today was level the BS table by placing some counterweights in the correct positions. I placed the level in two directions right next to SR2 (clamped in its planned place) and made the bubble center. While doing so, at one point I was trying to reach the far South-West end of the table with the 3x heavy 6" cylindrical counterweight in my hand. The counterweight slipped out of my hand and fell from the table. See the photo in attachment 1. It went to the bottommost place and is resting on its curved surface. This counterweight needs to be removed, but one cannot reach it from over the table. So to remove it, we'll have to open one of the blank flanges on the South-West end of the BS chamber and remove the counterweight from there. We'll ask Chub to help us with this. I'm sorry for the mistake; I'll be more careful with counterweights in the future. Moving on, I tuned all the SR2 OSEMs. It was fairly simple today since the table was leveled. I closed the chamber with the optic free to move and damped in all degrees of freedom. SUSPENSION STATUS UPDATED HERE

Attachment 1: DJI_0144.JPG

16606 | Thu Jan 20 17:21:21 2022 | Tega | Update | SUS | Temporary watchdog

The temporary software watchdog is now operational for LO1 and the remaining optics!
Koji helped me understand how to write to switches, and we tried for a while to only turn off the output switch of the filters instead of writing a zero that resets everything in the filter. Eventually, I was able to move this effort forward by realising that I can pass the control trigger along multiple records using the forwarding option 'FLNK'. When I added this field to the trigger block, record(dfanout,"C1:SUS-LO1_PUSH_ALL"), and the subsequent calculation blocks, record(calcout,"C1:SUS-LO1_COILSWa") to record(calcout,"C1:SUS-LO1_COILSWd"), everything started working right.

Quote: "After some work on the reference database file, we now have a template for a temporary watchdog implementation for LO1 [...]" (see 16600 above).

16607 | Thu Jan 20 17:34:07 2022 | Koji | Update | BHD | V6-704/705 Mirror now @Downs

The PR2 candidate V6-704/705 mirrors (Qty 2) are now @Downs. Camille picked them up for the measurements. To identify the mirrors, I labeled them (on the box) as M1 and M2. Also, the HR side was checked to be the side pointed to by an arrow mark on the barrel; e.g., Attachment 1 shows the HR side up.

Attachment 1: PXL_20220120_225248265_2.jpg
Attachment 2: PXL_20220120_225309361_2.jpg

16608 | Thu Jan 20 18:16:29 2022 | Anchal | Update | BHD | AS4 set to trigger free swing test

AS4 is set to go through a free-swinging test at 10 pm tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong. To access the test, on allegra, type:

tmux a -t AS4

Then you can kill the script if required by Ctrl-C; it will restore all changes while exiting.

16609 | Thu Jan 20 18:41:55 2022 | Anchal | Summary | BHD | SR2 set to trigger free swing test

SR2 is set to go through a free-swinging test at 3 am tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong. To access the test, on allegra, type:

tmux a -t SR2

Then you can kill the script if required by Ctrl-C; it will restore all changes while exiting.

16610 | Fri Jan 21 11:24:42 2022 | Anchal | Summary | BHD | SR2 Input Matrix Diagonalization performed.

The free-swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the SR2 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/SR2 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.
Free Swinging Resonances Peak Fits
        Resonant Frequency [Hz]   Q     A
POS     0.982                     340   3584
PIT     0.727                     186   1522
YAW     0.798                     252   912
SIDE    1.005                     134   3365

SR2 New Input Matrix
        UL       UR       LR       LL       SIDE
POS     1.09     0.914    0.622    0.798    -0.977
PIT     1.249    0.143    -1.465   -0.359   0.378
YAW     0.552    -1.314   -0.605   1.261    0.118
SIDE    0.72     0.403    0.217    0.534    3.871

The new matrix was loaded into the SR2 input matrix, and this resulted in no control loop oscillations, at least. I'll compare the performance of the loops soon.

Attachment 1: SR2_SUS_InpMat_Diagnolization_20220121.pdf
Attachment 2: SR2_FreeSwingData_PeakFitting_20220121.pdf

16611 | Fri Jan 21 12:46:31 2022 | Tega | Update | CDS | SUS screen debugging

All done (almost)! I still have not sorted the issue of the pitch and yaw gains growing together when modified using ramping time. An image of the custom ADC and DAC panel is attached.

Quote: Seen. Thanks.

Quote: Indicated by the red arrow: even when the side damping servo is off, the number appears at the input of the output matrix. Indicated by the green arrows: the face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC. Indicated by the blue arrows: this button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gains grew at the same time even though only the pitch gain was modified. Indicated by the orange circle: the numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.

16612 | Fri Jan 21 14:51:00 2022 | Koji | Update | BHD | V6-704/705 Mirror now @Downs

Camille @Downs measured the surfaces of M1 and M2 using Zygo. Result of the ROC measurements: M1: ROC = 2076 m (convex); M2: ROC = 2118 m (convex). Here are screenshots. One file shows the entire surface and the other shows the central 30 mm.

Attachment 1: M1.PNG
Attachment 2: M1_30mm.PNG
Attachment 3: M2.PNG
Attachment 4: M2_30mm.PNG

16613 | Fri Jan 21 16:40:10 2022 | Anchal | Update | BHD | AS4 Input Matrix Diagonalization performed.

The free-swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the AS4 free-swinging data collected last night. The logfile and results are stored in the /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/AS4 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.

Free Swinging Resonances Peak Fits
        Resonant Frequency [Hz]   Q     A
POS     1.025                     337   3647
PIT     0.792                     112   3637
YAW     0.682                     227   1228
SIDE    0.993                     164   3094

AS4 New Input Matrix
        UL       UR       LR       LL       SIDE
POS     0.844    0.707    0.115    0.253    -1.646
PIT     0.122    0.262    -1.319   -1.459   0.214
YAW     1.087    -0.901   -0.874   1.114    0.016
SIDE    0.622    0.934    0.357    0.045    3.822

The new matrix was loaded into the AS4 input matrix, and this resulted in no control loop oscillations, at least. I'll compare the performance of the loops soon.

Attachment 1: AS4_SUS_InpMat_Diagnolization_20220121.pdf
Attachment 2: AS4_FreeSwingData_PeakFitting_20220121.pdf
2022-01-22 12:10:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5714975595474243, "perplexity": 4168.516691873822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303845.33/warc/CC-MAIN-20220122103819-20220122133819-00624.warc.gz"}
https://physics.stackexchange.com/questions/211206/effect-of-a-dielectric-medium-on-electric-field-and-electric-flux/250030
# Effect of a dielectric medium on electric field and electric flux

What is the effect of a dielectric medium, in comparison to vacuum, on the electric field and the electric flux?

permittivity relates to a material's ability to resist an electric field (while, unfortunately, the word stem "permit" suggests the inverse quantity). Source: Permittivity, Wikipedia

Therefore, a higher dielectric constant means a higher relative permittivity, which means more resistance to the electric field. But a higher dielectric constant supposedly means an increase in electric flux, which leads to a higher capacitance! Please explain how the field can be resisted while the flux increases.

Dielectric constant $K$ is actually the same thing as relative permittivity, and it increases the overall permittivity $\epsilon$. So in general, whenever you see the permittivity of free space $\epsilon_0$ in an equation, if you're dealing with a dielectric, you can multiply it by the dielectric constant and see how the equation changes. For example, since $C = \epsilon_0 \frac{A}{d}$, multiplying $\epsilon_0$ by $K$ increases the capacitance by a factor of $K$. Since $\oint \mathbf E \cdot \mathrm d \mathbf A = \frac{Q}{\epsilon_0}$ by Gauss's law, multiplying $\epsilon_0$ by $K$ decreases the electric field magnitude by a factor of $K$.

There is no contradiction because there is no increase in electric flux. The dielectric decreases the electric field magnitude, which decreases the electric flux and decreases the voltage across a capacitor as well. $C = \frac{Q}{V}$, and the capacitance increases because the voltage decreases while the charge remains the same.

• Excuse me, could you give a justification or source for this hint (I mean, replacing $\varepsilon_0$ with $K\varepsilon_0$)? – Ragnar1204 Apr 8 '18 at 9:24
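A quick numeric check of the answer above (with an arbitrary illustrative value $K = 4$ and the charge $Q$ held fixed): the field, the flux, and the voltage all drop by the factor $K$, $$E = \frac{E_0}{4}, \qquad \Phi_E = \frac{\Phi_0}{4}, \qquad V = \frac{V_0}{4},$$ while $C = Q/V$ rises to $4C_0$. So "resisting the field" and "higher capacitance" are two faces of the same substitution $\epsilon_0 \to K\epsilon_0$.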
2020-04-05 21:20:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8504447937011719, "perplexity": 155.77616488390487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00067.warc.gz"}
https://brilliant.org/problems/a-crazy-quartic/
# A Crazy Quartic

Let $$a,b,c,d$$ be the roots of the following equation. $$x^4+10x^3+kx^2+100x-1001=0$$ If $$ab=77$$, the value of $$k$$ can be expressed as a mixed number $$p\frac{q}{r}$$, where $$p$$ is a positive integer, and $$q$$ and $$r$$ are coprime positive integers. Find the value of $$p+q+r$$.
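The original page gives no solution; here is one possible sketch via Vieta's formulas. From $abcd = -1001$ and $ab = 77$, we get $cd = -13$. Writing $s_1 = a+b$ and $s_2 = c+d$, Vieta gives $s_1 + s_2 = -10$ and $ab\,s_2 + cd\,s_1 = -100$, i.e. $77 s_2 - 13 s_1 = -100$. Solving the pair yields $s_2 = -\tfrac{23}{9}$ and $s_1 = -\tfrac{67}{9}$, so $$k = ab + cd + s_1 s_2 = 77 - 13 + \frac{67 \cdot 23}{81} = 64 + \frac{1541}{81} = \frac{6725}{81} = 83\tfrac{2}{81},$$ giving $p+q+r = 83+2+81 = 166$.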
2018-01-23 10:02:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8963460326194763, "perplexity": 111.06368245636627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891886.70/warc/CC-MAIN-20180123091931-20180123111931-00487.warc.gz"}
https://jaewonchung.me/study/algorithm/Coding-tips-for-algorithm-PS/
# C++ tips for algorithm problem-solving

Tools, tricks, and snippets for implementing algorithms. Some may be inappropriate in terms of good coding practice, because these snippets are optimized for efficiency during competitive problem-solving.

# Arrays

Arrays are an important tool for storing sequential data. Also, a lot of problems are solved with dynamic programming, where arrays are a core tool.

## Initializing an array with a single value

1. For loop

    for (int i=0; i<n; ++i) {
        d[i] = 0;
    }

A standard $$O(n)$$ algorithm that fills array d with 0.

2. Using memset(void* ptr, int value, size_t num)

    memset(d, 0, sizeof(d)); // #include <cstring>

Note that the value parameter should be either 0 or -1. This is because memset was originally designed for filling every byte of a string with the same value. Specifically, since value is interpreted as an unsigned char of size 1 byte, setting value to 1 fills every byte of the array with 0x01; accessing a 4-byte integer then gives 0x01010101 == 16843009. On the contrary, setting value to -1 fills every byte with 0xff, and grouping these bytes into 4-byte integers retains the 2's complement representation of -1, which is 0xffffffff.

3. Declaring as a global variable

    #include <cstdio>
    int d[10];
    int main() {
        printf("%d", d[0]);
        return 0;
    }

Variables declared as global variables are initialized to zero automatically.

4. Instead using std::vector

    vector<int> v(n, -1); // #include <vector>

Using vectors is preferable in many cases, especially when the size of the array varies. For instance, it is a good idea to initialize graphs with vector<vector<int>> graph(n); as an adjacency list. This done, you can easily traverse adjacent nodes with the following range-based for loop:

    for (int adj_node : graph[curr_node])

## Array Indexing

1. Single dimensional array

    int a[50];
    for (int i=0; i<50; ++i) {
        scanf("%d", a+i);
    }

The pointer to an array element can be written more simply this way.

    for (int i=0; i<n; ++i) {
        printf("%c%d%c", " ["[i==0], a[i], ",]"[i==n-1]);
    }

This snippet prints array a python-style: [0, 1, 2, 3, 4]

2. Two dimensional array

    printf("%d\n", *max_element(d[0], d[0]+n)); // #include <algorithm>

Each row of a two-dimensional array decays to a pointer to a one-dimensional array, so the same trick can be applied.

# Standard Input and Output

## Avoid using cin and cout

For algorithm problems, small time savings can make a difference. Especially if you're reading something like $$n^2$$ integers from standard input, using cin can give you a TLE (Time Limit Exceeded). Use scanf for inputs and printf for outputs.

## Use text files for standard input

Typing in the same sample inputs every time you run your code is obviously inefficient.

    g++ -O2 -std=c++14 main.cpp -o main && ./main < input.txt

When compiling and running your code with the shell command above, "input.txt" is provided as standard input, and you can read the data with your ordinary standard i/o functions.
2020-11-24 05:43:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20144589245319366, "perplexity": 4331.1226849688965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141171126.6/warc/CC-MAIN-20201124053841-20201124083841-00040.warc.gz"}
http://mathhelpforum.com/pre-calculus/51931-find-all-numbers-x.html
# Thread: Find All Numbers x

1. ## Find All Numbers x

If P = (-3, 1) and Q = (x, 4), find all numbers x such that the vector represented by line PQ has length 5.

2. Originally Posted by magentarita (the problem above).

1. Calculate the vector $\overrightarrow{PQ} = \overrightarrow{OQ} - \overrightarrow{OP} = ((x-(-3)), 3)$

2. Calculate the magnitude of the vector $\overrightarrow{PQ}$, which has to be 5: $\sqrt{(x+3)^2+3^2} = 5$ Square both sides: $(x^2+6x+9) +9 = 25~\implies~x^2+6x-7=0~\implies~x=-7~\vee~x=1$

3. Therefore $Q_1 (-7,4)$ or $Q_2(1,4)$

3. ## great work.......

Another great job here. Thanks.

4. Originally Posted by magentarita (the problem above).

Note that $Q\in\mathcal{C}$, where $\mathcal{C}$ is the circle centered at $P$ with radius $r=5$. Just write the equation of $\mathcal{C}$, substitute $Q$ into this equation, and solve for $x$ as earboth did.
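Completing the circle approach from the last post (same numbers as above): the circle is $(x+3)^2 + (y-1)^2 = 25$; substituting $y = 4$ gives $(x+3)^2 + 9 = 25$, so $(x+3)^2 = 16$ and $x = 1$ or $x = -7$, matching earboth's result.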
2016-10-21 17:09:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9163339138031006, "perplexity": 543.3739338706428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718285.69/warc/CC-MAIN-20161020183838-00401-ip-10-171-6-4.ec2.internal.warc.gz"}
https://grindskills.com/how-is-causation-defined-mathematically/
# How is causation defined mathematically?

What is the mathematical definition of a causal relationship between two random variables? Given a sample from the joint distribution of two random variables $$X$$ and $$Y$$, when would we say $$X$$ causes $$Y$$?

What is the mathematical definition of a causal relationship between two random variables?

Mathematically, a causal model consists of functional relationships between variables. For instance, consider the system of structural equations below: $$x = f_x(\epsilon_{x})\\ y = f_y(x, \epsilon_{y})$$ This means that $$x$$ functionally determines the value of $$y$$ (if you intervene on $$x$$, this changes the values of $$y$$) but not the other way around. Graphically, this is usually represented by $$x \rightarrow y$$, which means that $$x$$ enters the structural equation of $$y$$. As an addendum, you can also express a causal model in terms of joint distributions of counterfactual variables, which is mathematically equivalent to functional models.

Given a sample from the joint distribution of X and Y, when would we say X causes Y?

Sometimes (or most of the time) you do not have knowledge about the shape of the structural equations $$f_{x}$$, $$f_y$$, nor even whether $$x\rightarrow y$$ or $$y \rightarrow x$$. The only information you have is the joint probability distribution $$p(y,x)$$ (or samples from this distribution). This leads to your question: when can I recover the direction of causality just from the data? Or, more precisely, when can I recover whether $$x$$ enters the structural equation of $$y$$ or vice versa, just from the data?

Of course, without any fundamentally untestable assumptions about the causal model, this is impossible. The problem is that several different causal models can entail the same joint probability distribution of observed variables. The most common example is a causal linear system with Gaussian noise.

But under some causal assumptions, this might be possible, and this is what the causal discovery literature works on. If you have no prior exposure to this topic, you might want to start from Elements of Causal Inference by Peters, Janzing and Schölkopf, as well as chapter 2 of Causality by Judea Pearl. We have a topic here on CV for references on causal discovery, but we don't have that many references listed there yet.

Therefore, there isn't just one answer to your question, since it depends on the assumptions one makes. The paper you mention cites some examples, such as assuming a linear model with non-Gaussian noise. This case is known as LiNGAM (short for Linear Non-Gaussian Acyclic Model); here is an example in R:

library(pcalg)
set.seed(1234)
n <- 500
eps1 <- sign(rnorm(n)) * sqrt(abs(rnorm(n)))
eps2 <- runif(n) - 0.5
x2 <- 3 + eps2
x1 <- 0.9*x2 + 7 + eps1
# runs lingam
X <- cbind(x1, x2)
res <- lingam(X)
as(res, "amat")
# Adjacency Matrix 'amat' (2 x 2) of type 'pag':
#      [,1] [,2]
# [1,] .    .
# [2,] TRUE .

Notice here we have a linear causal model with non-Gaussian noise where $$x_2$$ causes $$x_1$$, and lingam correctly recovers the causal direction. However, notice this depends critically on the LiNGAM assumptions. For the case of the paper you cite, they make this specific assumption (see their "postulate"):

If $$x\rightarrow y$$, the minimal description length of the mechanism mapping X to Y is independent of the value of X, whereas the minimal description length of the mechanism mapping Y to X is dependent on the value of Y.

Note this is an assumption.
This is what we would call their "identification condition". Essentially, the postulate imposes restrictions on the joint distribution $$p(x,y)$$. That is, the postulate says that if $$x \rightarrow y$$, certain restrictions hold in the data, and if $$y \rightarrow x$$, other restrictions hold. These types of restrictions, which have testable implications (they impose constraints on $$p(y,x)$$), are what allow one to recover directionality from observational data. As a final remark, causal discovery results are still very limited and depend on strong assumptions; be careful when applying these in real-world contexts.
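Not from the original answer: a minimal numeric sketch of the kind of asymmetry such identification conditions exploit, assuming a linear model with uniform (hence non-Gaussian) noise. This is a crude residual-dependence check, not the LiNGAM algorithm itself; in the causal direction the regression residual is independent of the regressor, while in the anti-causal direction it is not.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(-1, 1, n)          # non-Gaussian cause
y = x + rng.uniform(-1, 1, n)      # linear effect with bounded uniform noise

def residual(a, b):
    # OLS residual of b regressed on a (slope plus intercept)
    slope = np.cov(a, b)[0, 1] / np.var(a)
    return b - b.mean() - slope * (a - a.mean())

# Crude dependence proxy: correlation between |regressor| and |residual|.
# Here it comes out near zero in the causal direction and clearly nonzero
# in the anti-causal one, because the reverse-fit residual still depends
# on the regressor through the bounded noise.
for a, b, name in [(x, y, "x -> y"), (y, x, "y -> x")]:
    r = residual(a, b)
    print(name, round(float(np.corrcoef(np.abs(a), np.abs(r))[0, 1]), 3))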
2022-11-29 04:59:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221882581710815, "perplexity": 536.4876648975675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00754.warc.gz"}
http://www.thesis123.com/gate-2016-ee-set-1-q1/
# Gate 2016 EE (set 1) Q1

1. The maximum value attained by the function $$f(x)=x(x-1)(x-2)$$ in the interval [1, 2] is: ——–

Solution: $$f(x)=x(x-1)(x-2)$$ $$\Rightarrow f(x)=x^3-3x^2+2x$$ $$\Rightarrow f'(x)=3x^2-6x+2=0$$ $$\Rightarrow x=1\pm \frac{1}{\sqrt 3}$$ Only $$x=1+\frac{1}{\sqrt 3}$$ lies in the interval [1, 2]. At $$x=1+\frac{1}{\sqrt 3}$$, $$f''(x)=6x-6=6(1+\frac{1}{\sqrt 3})-6>0$$, so $$x=1+\frac{1}{\sqrt 3}$$ is a point of minimum. Hence the maximum is attained at an endpoint, and $$f(x)=x(x-1)(x-2)=0$$ at both ends, $$x=1$$ and $$x=2$$. Maximum value = 0
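A throwaway numerical sanity check of the endpoint argument (not part of the original solution):

import numpy as np

# Grid check on [1, 2]: the interior critical point is a minimum,
# so the maximum sits at the endpoints where f vanishes.
x = np.linspace(1, 2, 100001)
f = x * (x - 1) * (x - 2)
print(f.max())   # 0.0, attained at x = 1 and x = 2
print(f.min())   # about -0.385, near x = 1 + 1/sqrt(3)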
2020-01-23 13:53:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8661016225814819, "perplexity": 3052.7003194240597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610919.33/warc/CC-MAIN-20200123131001-20200123160001-00465.warc.gz"}
https://dsp.stackexchange.com/questions/54068/is-there-an-example-of-an-infinite-signal
# Is there an example of an infinite signal? [closed]

In the real world, is there an example of an infinite signal?

• You mean a signal that extends from $-\infty$ to $\infty$ in time? Hmmm, think about it for a moment. – Matt L. Dec 11 '18 at 17:03
• Imagine you could show a signal has existed from $t=-\infty$ (or from $t=0$ if you prefer)... How could you prove it will exist until $t=\infty$? – MBaz Dec 11 '18 at 17:11
• @MattL. No, a signal that extends from $0$ to $\infty$. But I don't know. :( – niloofar jamshidi Dec 11 '18 at 17:14
• We can arbitrarily invent signals. For example, $s(t)=\sin(t)$ is infinitely long. Is that a real-world signal? I don't know – but if we say "the signal is the electric field strength of a plane electromagnetic wavefront over time, shot from earth into the vastness of space", that's a pretty real signal, and it's pretty infinite... – Marcus Müller Dec 11 '18 at 17:25
• @MBaz In physics, energy has always existed. Isn't the same possible for a signal? – niloofar jamshidi Dec 11 '18 at 17:25
2021-03-09 01:41:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5577340126037598, "perplexity": 742.2582518388092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00308.warc.gz"}
https://zbmath.org/?q=an:1015.93071
## Minimizing shortfall risk and applications to finance and insurance problems. (English) Zbl 1015.93071

The author studies the problem of minimizing the shortfall risk, defined as the expectation of the shortfall $$(B-X_{T}^{x,\theta})_+$$ (weighted by some loss function), with the following controlled process: $X_t^{x,\theta}=x+\int_0^t\theta_u dS_u+H_t^{\theta},$ where $$B$$ is a given nonnegative measurable random variable, $$S$$ is a semimartingale, $$\Theta$$ is the set of control processes, a convex subset of $$L(S)$$, and $$(H^{\theta}:\theta\in \Theta)$$ is a concave family of adapted processes with finite variation, $$t\in [0,T]$$. An existence result for this optimization problem is stated, and some qualitative properties of the associated value function are shown. A verification theorem in terms of a dual control problem is established. Some applications to hedging problems with constrained portfolios, large investor and reinsurance models are given. The approach to solving these control problems uses probabilistic methods rather than PDE methods via the Bellman equation. This allows relaxing the assumption of a Markov state process required in the PDE approach.

### MSC:

93E20 Optimal stochastic control
91B30 Risk theory, insurance (MSC2010)
60G44 Martingales with continuous parameter
60H05 Stochastic integrals
49K45 Optimality conditions for problems involving randomness
91G80 Financial applications of other theories
2022-09-28 05:56:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6144754886627197, "perplexity": 3669.15900252337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00192.warc.gz"}
https://massibec.com/lvvquz/fun-calculus-problems-64dead
# fun calculus problems

## Description

Learn fifth grade math aligned to the Eureka Math/EngageNY curriculum—arithmetic with fractions and decimals, volume problems, unit conversion, graphing points, and more (Module 1: place value and decimal fractions). Calculus is a subject which you cannot understand without an instructor, so be attentive in the classroom; practice is the key to success, so practice every example problem and worksheet. Calculus was developed independently by Newton and Leibniz in the 17th century. Like evolution, calculus expands your understanding of how Nature works. Algebra and calculus are a problem-solving duo: calculus finds new equations, and algebra solves them. Integral calculus is concerned with areas and volumes. Math is a crucial subject for learning success, which students will continue through each grade of their school journey. Everyone should learn problem solving, as it is important in both our personal and professional lives: problems occur all around us, and many people react with spontaneous emotion. (One forum poster, who has studied calculus A and B on and off over the last ten years, even asks whether multivariable calculus is as fun as A and B have been so far.)

Resources mentioned on the page: Magic Numbers - fun activities that display patterns in numbers; Math Tests - created by grade level and aligned to the Common Core Math Curriculum; In Class Labs - students work through a range of problem solving strategies; Math Worksheets - a full index of all math worksheets on this site; a Money Terms worksheet, in which children learn to identify dollars, cents, and other currency terminology - a fun way for a second grader to practice money math; the Calculus-Help.com Interactive Cheat Sheet - a flash movie that contains all the formulas you need; "Maximum fun with Calculus" (P. Fox, Texas Instruments) - a collection of interesting optimisation problems and investigations that include a significant calculus component; a set of practice problems to accompany the Functions section of the Review chapter of the notes for Paul Dawkins' Calculus I course at Lamar University; optimization problems and practice questions for Calculus 1 with detailed solutions; and hundreds of free online math games that teach multiplication, fractions, addition, problem solving and more (e.g. Prodigy Math Game). Mixed word problems (stories) for skills working on subtraction, addition, fractions and more help children hone their reading and analytical skills, understand the real-life application of math operations, and practice applying math concepts to real life situations. For example, try multiplication facts with graph paper by making rectangles, coloring them, and finding the answers by counting. The coordinate geometry activity is a little more involved, but should be fun for students who have some experience with coordinate geometry.

Fun and challenging math problems for the young, and young at heart. These problems are aimed at junior high and high school students with a flair for mathematics and logic; they are "real world" problems, rather than abstract riddles about sets or trigonometry, and the questions should be accessible to a wide audience, even if the answers are not.

- Riddle 1: Two fathers and two sons sat down to eat eggs for breakfast.
- Riddle 2: How can you add eight 8's to get the number 1,000, using only addition? Answer: 888 + 88 + 8 + 8 + 8 = 1,000. You probably just guessed, which is fine, but you can also work it out algebraically - the key is realizing that the ones place must be zero.
- The answer to this math riddle is 21.
- Checkerboard puzzle: How many squares are there in an 8 × 8 checkerboard?
- (Precalculus) Find the equation of the circle, in the form (x - h)² + (y - k)² = r², given that it is tangent to the lines 3x + 4y = 7 and 5x - 12y = 55, and its center is found on the line 3x - y = 2.
- (Precalculus) Determine the area of the triangle bounded by the graphs of x - y + 7 = 0, 2x + y - 13 = 0, and x + 3y - 9 = 0.
- (Precalculus) Find a logarithmic expression for the value of x that solves 3ˣ⁺¹ = 2²ˣ⁺¹.
- (Algebra) Determine the center, area, and x-intercepts of the ellipse 9x² + 4y² - 36x + 32y = 1989.
- Fun calculus problem: this dude named Wombat has a ball of gold (a perfect sphere, actually).
- Suppose we know the equation for circumference (2πr) and want to find area. Realize that a filled-in disc is like a set of Russian dolls (a worked integral follows this list).
- Minimum distance problem: the first derivative is used to minimize distance traveled.
- Linear least squares fitting: use partial derivatives to find a linear fit for given experimental data.
- Beginning differential calculus: problems on the limit of a function as x approaches a fixed constant, as x approaches plus or minus infinity, using the precise epsilon/delta definition of limit, and using l'Hopital's rule; also problems on the continuity of a function of one variable.
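One way to finish that circumference-to-area idea (standard single-variable calculus; the worked step is added here and is not spelled out on the page): a filled disc is a nest of thin rings, each contributing its circumference times its thickness.

% "Russian dolls" integral: area of a disc of radius R from its circumference
A \;=\; \int_0^R 2\pi r \, dr \;=\; \pi R^2 .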
2021-07-24 11:21:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27214574813842773, "perplexity": 2742.4518676571556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00042.warc.gz"}
https://tex.stackexchange.com/questions/200601/how-to-track-changes-that-need-to-be-retranslated
# How to track changes that need to be retranslated?

I often need to write multilingual documents. I usually put the translations into a macro like this, so that I can quickly render documents with either of the languages or both, but modify the syntax if necessary. \text{This is the English translation.}{这是中文的发言。} I focus on the English primarily, then add the other language later and get help with challenging parts. The problem is, sometimes I will go back through the document, making many small corrections, and it is difficult to remember which parts were changed and thus require an updated translation. Is there any tool for keeping track of the changes occurring to the English and the corresponding updates to the other language?

• For tracking changes I am a big fan of latexdiff. – Andrew Sep 11 '14 at 6:32
• There are several questions regarding this topic. Search for 'track changes' at this site. See for example this answer – Sveinung Sep 11 '14 at 11:48
• Standard revision-tracking tools just provide a history of changes, right? Do they provide an indication of which things need to be updated to match changes? – Village Sep 11 '14 at 12:30
• Is adding an inline "to-do note" of one kind or another what you seek? For example, tex.stackexchange.com/questions/142242/… – Steven B. Segletes Dec 31 '14 at 6:19
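One lightweight do-it-yourself approach (a sketch only; the macro name \bitext and the starred-variant convention are invented for illustration, not an existing package): give the bilingual macro a starred variant that you switch to whenever you edit the English, so stale translations are highlighted in the PDF and listed in the compile log.

% \bitext{<english>}{<translation>} renders both languages as before;
% \bitext*{...}{...} marks the pair as "English edited, translation stale".
\usepackage{xcolor}
\makeatletter
\newcommand{\bitext}{\@ifstar{\bitext@stale}{\bitext@ok}}
\newcommand{\bitext@ok}[2]{#1 #2}
\newcommand{\bitext@stale}[2]{%
  #1 \textcolor{red}{#2}% stale translation shown in red
  \PackageWarning{bitext}{Translation needs update}}
\makeatother
% While revising the English, change \bitext to \bitext*; the compile log
% then lists every location still awaiting retranslation.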
2019-10-18 19:35:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.480392724275589, "perplexity": 1167.716799683449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00115.warc.gz"}
https://iim-cat-questions-answers.2iim.com/quant/arithmetic/percents-profits/percents-profits_23.shtml
# CAT Practice : Percents, Profits

## Number of Balls

Q.23: A bucket has 'a' number of small and large balls, of which b% are black. Of the small balls, c% are black, and of the large balls, d% are black. Find the number of small balls in the bucket.

1. $a * \frac{(b-d)}{c+b-d}$
2. $a * \frac{(b-d)}{c+b-2d}$
3. $a * \frac{(c-b)}{b-d}$
4. $a * \frac{(b-d)}{c-d}$

Choice D. $a * \frac{(b-d)}{c-d}$

## Detailed Solution

Let the number of small and large balls be S and L respectively.
$\Rightarrow a = S + L$ .....(1)
Also, b% of a = c% of S + d% of L
$\Rightarrow ba = cS + dL$
$\Rightarrow ba = cS + d(a - S)$ (putting L = a - S from (1))
$\Rightarrow (b - d) * a = (c - d) * S$
$\Rightarrow S = a * \frac{(b-d)}{c-d}$

Correct Answer: $a * \frac{(b-d)}{c-d}$
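A quick numeric sanity check of the result (the numbers below are made up for illustration and are not from the question itself):

```r
# Take S = 30 small and L = 70 large balls, with c = 40% of the small
# and d = 10% of the large balls black; b then follows by counting.
S <- 30; L <- 70
a <- S + L
cPct <- 40; dPct <- 10
bPct <- (cPct * S + dPct * L) / a        # overall % black = 19
a * (bPct - dPct) / (cPct - dPct)        # = 30, recovering S
```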
2019-01-19 06:17:46
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6235359907150269, "perplexity": 4140.132843128705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662690.13/warc/CC-MAIN-20190119054606-20190119080606-00601.warc.gz"}
https://bora.uib.no/bora-xmlui/handle/1956/915/browse?type=author&value=Wang%2C+Dujuan
Showing results 1-2 of 2
• Fluid dynamics study of the Λ polarization for Au + Au collisions at √sNN = 200 GeV  (Journal article; Peer reviewed, 2020) With a Yang–Mills field, stratified shear flow initial state and a high resolution (3 + 1)D particle-in-cell relativistic (PICR) hydrodynamic model, we calculate the Λ polarization for peripheral Au + Au collisions at RHIC ...
• Global Λ polarization in high energy collisions  (Peer reviewed; Journal article, 2017-03-14) With a Yang-Mills flux-tube initial state and a high-resolution (3+1)D particle-in-cell relativistic (PICR) hydrodynamics simulation, we calculate the Λ polarization for different energies. The origination of polarization ...
2021-05-15 18:38:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.852657675743103, "perplexity": 14280.72861924922}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00096.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2021194?viewType=html
# American Institute of Mathematical Sciences

doi: 10.3934/dcdsb.2021194 (Online First)

## Invasive speed for a competition-diffusion system with three species

1 School of Mathematics and Physics, University of South China, Hengyang 421001, China
2 Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, Newfoundland, A1C 5S7, Canada

* Corresponding author: Chunhua Ou

Received December 2020. Revised May 2021. Early access July 2021.

Fund Project: The first author is supported by the National Natural Science Foundation of China grants (11626129 and 11801263) and the Natural Science Foundation of Hunan Province grant (2018JJ3418); the second author is supported by the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 20B512); the third author is supported by the Canada NSERC discovery grant (RGPIN-2016-04709).

Competition stems from the fact that resources are limited. When multiple competitive species are involved with spatial diffusion, the dynamics become even more complex and challenging. In this paper, we investigate the invasive speed for a diffusive three-species competition system of Lotka-Volterra type. We first show that multiple species share a common spreading speed when initial data are compactly supported. By transforming the competitive system into a cooperative system, the determinacy of the invasive speed is studied by the upper-lower solution method. In our work, for linearly predicting the invasive speed, we concentrate on finding upper solutions only, and don't care about the existence of lower solutions. Similarly, for nonlinear selection of the spreading speed, we focus only on the construction of lower solutions with fast decay rate. This greatly develops and simplifies the ideas of past references on this topic.

Citation: Chaohong Pan, Hongyong Wang, Chunhua Ou. Invasive speed for a competition-diffusion system with three species. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021194

Figure: The solution $u(x, t)$ at $t = 16, 36, 56$ for two sets of parameters
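For readers who want the shape of the model, here is a generic three-species Lotka-Volterra competition-diffusion system (illustrative only; the abstract does not print the paper's exact equations or coefficients):

\partial_t u_i \;=\; d_i\, \partial_{xx} u_i
  \;+\; r_i\, u_i \Big( 1 - \sum_{j=1}^{3} a_{ij}\, u_j \Big),
\qquad i = 1, 2, 3, \quad x \in \mathbb{R}, \; t > 0 .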
2022-01-26 11:00:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1859917938709259, "perplexity": 1162.8989170332802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304947.93/warc/CC-MAIN-20220126101419-20220126131419-00096.warc.gz"}
https://www.rdocumentation.org/packages/SweaveListingUtils/versions/0.1.1/topics/setToBeDefinedPkgs
# setToBeDefinedPkgs

##### setToBeDefinedPkgs

sets up / updates a table of keywordstyles to different packages

Keywords
utilities

##### Usage

setToBeDefinedPkgs(pkgs, keywordstyles)

##### Arguments

pkgs
character; the packages for which keywordstyle information is to be stored

keywordstyles
character or missing; the corresponding keywordstyle format strings; if missing, the corresponding option Keywordstyle is read off by using getSweaveListingOption("Keywordstyle"). Internally, it is being cast to the same length as pkgs.

##### Details

The corresponding table is stored globally in the (non-exported) object .tobeDefinedPkgs, which is hidden in the namespace of this package. It is used afterwards by the masked versions of require and library of this package to allow for defining a set of keywordstyle formats for different packages right in the preamble of a .Rnw file. This transfer of information to require and library clearly is a deviation from the functional programming paradigm, but is necessary at this place, as otherwise (although this is still allowed) require and library would have to be called with non-standard (i.e. package base-) arguments, which is not the goal of including R code snippets by Sweave.

##### Value

• invisible()

##### Aliases

• setToBeDefinedPkgs

##### Examples

setToBeDefinedPkgs(pkgs = c("distr","distrEx"),
                   keywordstyles = paste("\\bfseries\\color{",
                                         c("blue","red"), "}", sep="")) # one style per package
### not to be used:
print(SweaveListingUtils:::.tobeDefinedPkgs)

Documentation reproduced from package SweaveListingUtils, version 0.1.1, License: LGPL-3
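For context, a sketch of the intended workflow in a .Rnw file (the chunk layout here is an assumption for illustration; only the masking behaviour of require/library is documented above):

```r
## Preamble chunk of the .Rnw file: register per-package keyword styles.
setToBeDefinedPkgs(pkgs = c("distr", "distrEx"),
                   keywordstyles = paste("\\bfseries\\color{",
                                         c("blue", "red"), "}", sep = ""))
## Later chunks just load packages; the masked library()/require() consult
## the stored .tobeDefinedPkgs table when setting up the listings styles.
library(distr)
```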
2020-01-19 04:42:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2094009667634964, "perplexity": 6681.6470345849975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594209.12/warc/CC-MAIN-20200119035851-20200119063851-00281.warc.gz"}
https://quant.stackexchange.com/questions?sort=unanswered&page=55
All Questions: 3,151 questions with no upvoted or accepted answers

68 views Has anyone _verifiably_ duplicated Yahoo's real time technical market indicators _numbers_? If so, how? After spending the better part of a week trying to get a combination of Alpaca's API and Python libraries (alpaca_trade_api, pandas and ta) to duplicate the numbers produced by Yahoo! Finance's ...
42 views What models are used to determine credit limits for credit cards? I have an upcoming job interview as a statistician for a firm that sets credit card limits. In preparation, I'm trying to learn how credit limits are determined. I know that there are vast data-sets ...
110 views Does a combined Portfolio always performs like the average of the merged subportfolios? I analyzed the historic data of the SP500 and tried a trading simulation on it. I picked the best 20 companies from SP500 for one year according to their ROE and put them in one portfolio. Let's call ...
41 views 54 views Cross Effect in OLS I am using cross effect in OLS regression for a time series problem for a multivariate regression. I want to quote reference for use of cross effect. Secondly, I want to explain why better to use ...
58 views Developing Markov Transition Matrix I’m working with historical credit performance data and would like to build a transition matrix to predict defaults and delinquencies. I can model the transition between states (ie current - ...
46 views Poisson distribution and counting process Let $\begin{Bmatrix} N_t \end{Bmatrix}_{(t\in[0,T])}:=\mathbb{I}_{(\tau \leq T)}:=k, \forall t \in [\tau_{k}\leq \tau_{k+1})\sim \mathrm{Po}(\lambda_{t}:=\int_{0}^{t}\lambda_{s}ds<+\infty)$ a ...
31 views Is there an official way to access all SICs code for a huge list of CIKs code from Edgar? after reading this https://opendata.stackexchange.com/questions/857/is-there-a-resource-to-look-up-the-standard-industrial-classification-codes-that I realized that the CIK-to-SIC mapping list is not ...
44 views Equivalent of recovery rate I'm trying to understand the functioning of "recovery of face-value" approach. Let $V_t$ the fair-value, that is the price that the holder of a defaultable bond must pay for hedging of default of ...
42 views Extended Hours Percent Change I have created a personal screener application. I would like to add extended hours percent change to my screen. I have been searching all over for premarket and after hours data to help with this. I ...
87 views I have some issue with pricing of Italian linker bonds (http://www.dt.tesoro.it/export/sites/sitodt/modules/documenti_en/debito_pubblico/titoli_di_stato/BTP_Italia.pdf) . The issue is their specific ...
40 views How Free Of Payment (FOP) trade works? How it impacts NAV and P&L? I want to understand how the Free of Payment(FOP) trades work from accounting point of view. My questions are: What data we collect while capturing FOP trade? How it impacts NAV and P&L? e.g. say ...
31 views How can i fit the following regression in R? Why is the coefficient [second Columns] for R so low? 'Rwml' is the monthly log return So the first column is clear, I got nearly the same values, at least the same magnitude. But: If I regress on the variance, my input values are way too low to get a ...
60 views 2 ways to calculate profits, which both seem legit, but produce different results - what am I missing? I'm trying to calculate this simple example with 2 ways which both seem legit, and getting different results.
Way 1: at the beginning of day $t$, first reset the holdings to 0, then buy the number of ... 45 views Value of portfolio with fixed discrete dividends I know that this is a very simple question, but i want to make sure to grasp the concept of ex dividend and value of portfolio. Suppose that we have a two period binomial tree of a stock with initial ... 20 views Pricing barrier option under Levy process: Biased estimate? I want to price a down and out call, barrier option, with the underlying asset following a Levy process. I am interest on the Kou double exponential model or the NIG process, to capture asymmetric ... 64 views Bootstrapping spot rates based on swap rates using QuantLib I am bootstrapping the shibor curve and fr007 curve using swap rates in China. I created my own index like following: ... 34 views Is the implied volatility surface relative or stationary? Do different strike values of options attain their volatility value dependent on their % distance from the ATM price continuously, or is the volatility surface stationary during a single day? 58 views Wave Method and Implied Duration I am pricing an MBS under three different rate scenarios: a base case, +5bps and -5bps I compute partial durations on the base case using the wave method (P. Hagan: Calculating Delta Risks and Hedges ... 142 views Free dividend data API for non-US stocks Is there are any free API for dividend data that does also include non-US stocks? I know of this question from three years ago. However, the situation has changed since then apparently, as there are ... 20 views How to forecast monthly volatility with daily gjrGarch estimates I'm currently writing a paper and need to regress the 22 days realized volatility of the following month on its GARCH estimate and the 126days realized volatility up to t=1 The paper im referring to ... 61 views How is Kalman Filter used to estimate Term structure Models I am implementing "The Term Structure of Variance Swap Rates and Optimal Variance Swap Investments" . This paper is using kalman filter to estimate the state and the mean variance and a parameters on ... 39 views Choquet integral risk measure I have one question that cannot fully understand why. What is the definition of the Choquet integral risk measure? 38 views Strange results in Fama-Macbeth regression estimates I am reading the paper Chordia, Tarun and Subrahmanyam, Avanidhar and Anshuman, V. Ravi, Trading Activity and Expected Stock Returns (Undated). Available at SSRN: https://ssrn.com/abstract=204488 or ... 26 views One-day Binary Event Implied Moves What is the convention for pricing the expected 1-day move of a binary event based off of the implied volatility of the nearest series which contains that event? How do you distinguish between the ... 234 views 45 views What 10 year bond data to use when making a risk/return scatter plot? I am making a risk/return scatter plot (seen below) (from this site): What data must be used for bonds (e.g. 3 month or 10 year US bonds)? I thought you would use this data, but if you take the years ...
2019-12-07 21:46:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7394489645957947, "perplexity": 3079.536469221814}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540502120.37/warc/CC-MAIN-20191207210620-20191207234620-00335.warc.gz"}
https://www.scienceopen.com/hosted-document?doi=10.14293/S2199-1006.1.SOR-.PPS25DJ.v2
# The Shallow Gibbs Network, Double Backpropagation and Differential Machine Learning

Preprint (research-article, in review, Open Access), ScienceOpen Preprints

### Revision notes

Author order is changed (setting Professor Alejandro Murua as first author).

### Abstract

We have built a Shallow Gibbs Network model as a Random Gibbs Network Forest to reach the performance of the Multilayer feedforward Neural Network with a small number of parameters and fewer backpropagation iterations. To make this happen, we propose a novel optimization framework for our Bayesian Shallow Network, called the Double Backpropagation Scheme (DBS), that can also fit the data perfectly with an appropriate learning rate, and which is convergent and universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates all the advantages of the Potts Model, which is a very rich random partitions model that we have also modified to propose its Complete Shrinkage version using agglomerative clustering techniques. The model also takes advantage of Gibbs Fields for its weights precision matrix structure, mainly through Markov Random Fields, and in the end has five (5) variant structures: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) model. The Full-Gibbs is mainly meant to recall fully-connected models, and the other structures are useful to show how the model can be reduced in terms of complexity with sparsity and parsimony. All those models have been experimented with on the Mulan project multivariate regression dataset, and the results arouse interest in those structures, in the sense that different structures help to reach different results in terms of Mean Squared Error (MSE) and Relative Root Mean Squared Error (RRMSE). For the Shallow Gibbs Network model, we have found the perfect learning framework: it is the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})$-DBS configuration, which is a combination of the Universal Approximation Theorem and the DBS optimization, coupled with the (dist)-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model [which in turn is a combination of the search for the nearest neighborhood for a good train-test association, the Taylor Approximation Theorem, and finally the Multivariate Interpolation Method]. It indicates that, with an appropriate number $l_1$ of neurons on the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance $dist_{opt}$ in the search for the nearest neighbor in the training dataset for each test data point $x_i^{\text{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model once the DBS has overfitted the training dataset, the train and the test error converge to zero (0).
### Author and article information

###### Journal

ScienceOpen Preprints, ScienceOpen, 13 April 2021

###### Affiliations

[1] Department of Mathematics and Statistics, Université de Montréal, 2920, chemin de la Tour, H3T 1J4, Montreal, Québec, Canada

###### Article

10.14293/S2199-1006.1.SOR-.PPS25DJ.v2

This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Computer science, Statistics, Mathematics
2022-08-13 08:19:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4062381088733673, "perplexity": 1912.9221275436937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00221.warc.gz"}
https://zbmath.org/?q=an:1160.82316
# zbMATH — the first resource for mathematics

Limits of Gaudin systems: classical and quantum cases. (English) Zbl 1160.82316

Summary: We consider the XXX homogeneous Gaudin system with $$N$$ sites, both in the classical and the quantum case. In particular we show that a suitable limiting procedure for letting the poles of its Lax matrix collide can be used to define new families of Liouville integrals (in the classical case) and new "Gaudin" algebras (in the quantum case). We will especially treat the case of total collisions, that gives rise to (a generalization of) the so called Bending flows of Kapovich and Millson. Some aspects of multi-Poisson geometry will be addressed (in the classical case). We will make use of properties of "Manin matrices" to provide explicit generators of the Gaudin Algebras in the quantum case.

##### MSC:

82B23 Exactly solvable models; Bethe ansatz
81R12 Groups and algebras in quantum theory and relations with integrable systems
17B80 Applications of Lie algebras and superalgebras to integrable systems
81R50 Quantum groups and related algebraic methods applied to problems in quantum theory

##### Keywords:

Gaudin models; Hamiltonian structures; Gaudin algebras
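For orientation, here are the objects being degenerated: the standard quadratic Hamiltonians of the $N$-site XXX Gaudin model (generic textbook notation, not copied from the review itself):

% Quadratic Gaudin Hamiltonians with pole parameters z_1, ..., z_N;
% the limits studied in the paper let some of the z_i collide:
H_i \;=\; \sum_{j \neq i} \frac{\vec S^{(i)} \cdot \vec S^{(j)}}{z_i - z_j},
\qquad i = 1, \dots, N, \qquad [H_i, H_j] = 0 .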
2021-08-01 18:06:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37933436036109924, "perplexity": 1192.553994303431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00021.warc.gz"}
https://economics.stackexchange.com/questions/32374/relationship-between-expected-utility-and-independence-axiom
# Relationship between expected utility and independence axiom

Jonathan Levin in "Choice under Uncertainty" wrote in Theorem 1: "A complete and transitive preference relation on a set of lotteries P satisfies continuity and independence if and only if it admits an expected utility representation". So it seems that if we have an expected utility function, preferences will definitely be independent and continuous, and vice versa. Am I correct?

But I am unsure: if preferences don't have the expected utility form, will they necessarily not be continuous and independent? I.e., if preferences are inconsistent with expected utility, does that necessarily imply that the independence axiom is violated?

• Would be helpful to show us what you know and how far you have gone in your thinking so far. – Art Oct 22, 2019 at 2:09
• I have edited the question. hope it clarifies. An intuitive detailed explanation would be helpful Oct 22, 2019 at 2:32
• Let $p$ be "$P$ satisfies continuity and independence" and $q$ be "$P$ admits expected utility representation." The link you provided shows $p \iff q$, which necessarily means $p \to q$. By modus tollens, this implies $\neg q \to \neg p$. – Art Oct 22, 2019 at 2:56

If the preferences do not have an expected utility representation, then either the preferences are not continuous, or they do not satisfy the axiom of independence. For example, in prospect theory, consumers display loss aversion, which means that their preferences are not linear in probabilities. This is a case where preferences do not have the expected utility form, but the utility function is still continuous. These preferences violate the independence axiom. Similarly, you could have preferences that do not admit the expected utility form and satisfy the independence axiom, but not continuity.
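A concrete illustration of the answer's point (the classic Allais paradox; standard textbook material, not taken from the thread): choices that are perfectly continuous in probabilities but violate independence, and so admit no expected-utility representation.

% Allais paradox (payoffs in millions, probabilities in parentheses):
A_1:\ 1\ (1.00)
  \quad\text{vs}\quad
A_2:\ 5\ (0.10),\ 1\ (0.89),\ 0\ (0.01)
B_1:\ 1\ (0.11),\ 0\ (0.89)
  \quad\text{vs}\quad
B_2:\ 5\ (0.10),\ 0\ (0.90)
% The modal pattern A_1 \succ A_2 together with B_2 \succ B_1 violates
% independence: each pair differs only in a common 0.89-probability branch
% (1 million in the A pair, 0 in the B pair), which independence says
% should not affect the ranking.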
2023-01-27 11:43:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679996371269226, "perplexity": 586.99675146495}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494976.72/warc/CC-MAIN-20230127101040-20230127131040-00258.warc.gz"}
http://physics.stackexchange.com/questions/16507/gravitational-waves-as-dark-energy?answertab=oldest
Gravitational waves as dark energy?

Is the energy carried by gravitational radiation a viable candidate for $\Lambda$ / dark energy?

Nope. Gravitational radiation is a kind of radiation and it has a completely different equation of state than the cosmological constant. The cosmological constant has pressure equal to the energy density with a minus sign, $p=-\rho$: the stress-energy tensor is proportional to the metric tensor, so the spatial and temporal diagonal components only differ by the sign. Radiation has $p=+\rho/3$, much like for photons. Most of the energy density of the Universe has $p/\rho = -1$; that's what we know from observations because the expansion accelerates. A radiation-dominated Universe wouldn't accelerate (and didn't accelerate: our Universe was indeed radiation-dominated when it was much younger than today). The ratio $p/\rho$ must be between $-1$ and $+1$ because of the energy conditions (or because the speed of sound can't exceed the speed of light). The $-1$ bound is saturated by the cosmological constant, the canonical realization of "dark energy"; $-2/3$ and $-1/3$ come from hypothetical cosmic domain walls and cosmic strings, respectively; $0$ is the dust, i.e. static particles; $+1/3$ is radiation; and higher ratios may be obtained for "somewhat unrealistic" types of matter such as the dense black hole gas for which it is $+1$. This ratio determines the acceleration rate as a function of the Hubble constant.

- Thanks for the answer. How can it be shown that gravitational waves (grav. radiation), i.e. ripples in space-time, have the same equation of state as photons? – mtrencseni Nov 3 '11 at 10:22
- Hi! The same derivation holds for any particles or waves moving at the speed of light. Take a graviton of momentum $\vec p$ in a box $L^3$. The round trip between the two $x$-walls takes $2L/v_x$ of time, and each collision gives a wall momentum $2 p_x$, so the pair of $x$-walls receives $2 p_x v_x/L$ of momentum per unit time. Sum over $x,y,z$ to get momentum per time $2\,\vec p\cdot \vec v/L$. Divide by the total area of the cube, $6L^2$, to get $\text{pressure}=\text{Force}/\text{Area} = \vec p\cdot \vec v/3L^3 = E/3L^3 = \rho/3$, since $\vec p\cdot \vec v = pc = E$ for any particles/waves moving at the speed $c$. – Luboš Motl Nov 3 '11 at 11:05
- Alternatively, you may argue that in 4 dimensions, the stress-energy tensor of radiation has to be traceless because the classical theory describing the radiation has no dimensionful constants (conformal symmetry). That means that $p_{xx}=p_{yy}=p_{zz}$ by rotational symmetry and all of them have to be $\rho/3$ to get zero for $\rho-3 \times \rho/3$. – Luboš Motl Nov 3 '11 at 11:10
- Again, thanks for the answers. Replying to your first answer: thinking of gravitational radiation as ripples in spacetime (and not gravitons), why would they bounce off the wall? I'd think the wave would go straight through it, the same way the gravitational force goes right through it (i.e. no shielding). – mtrencseni Nov 3 '11 at 11:23
- Gravitational waves do in fact not bounce off any wall, but the argument is just about scaling. Lubos is right that the equation of state is the same for all relativistic stuff, but you don't need that to see that gravitational waves make no dark energy candidate: their density decreases as does the density of all normal stuff (relativistic or not). You can't make anything constant that way. That's another way of saying it doesn't have the right ratio $p/\rho$. – WIMP Nov 3 '11 at 14:33
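To make the link between the ratio $p/\rho$ and acceleration explicit (standard FRW cosmology; this equation is background knowledge, not quoted from the thread):

% Acceleration (second Friedmann) equation for a perfect fluid, with c = 1:
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\,\left(\rho + 3p\right)
% With p = w\rho:  w = +1/3 (radiation)  =>  \ddot a < 0  (deceleration)
%                  w = -1   (Lambda)     =>  \ddot a > 0  (acceleration)
% Acceleration requires w < -1/3, which radiation can never satisfy.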
2014-04-25 08:22:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544546961784363, "perplexity": 453.10307906032284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
https://openforecast.org/adam/diagnosticsOutliers.html
## 14.4 Model specification: Outliers

As we discussed in Section ??, one of the important assumptions in forecasting and analytics is the correct specification of the model, which also includes the "no outliers in the model" element. Outliers might appear for many different reasons:

1. We missed some important information (e.g. promotion) and did not include a respective variable in the model;
2. There was an error in recordings of the data, i.e. a value of 2000 was recorded as 200;
3. We did not miss anything predictable, we just deal with a distribution with fat tails.

In any of these cases, outliers will impact estimates of parameters of our models. In case of ETS, this will lead to higher than needed smoothing parameters, which leads to wider prediction intervals and potentially biased forecasts. In case of ARIMA, the mechanism is more complicated, but it also leads to widened intervals and biased forecasts. So, it is important to identify outliers and deal with them.

### 14.4.1 Outliers detection

One of the simplest ways of identifying outliers is based on distributional assumptions. For example, if we assume that our data follows the normal distribution, then we would expect 95% of observations to lie inside the bounds of approximately $$\pm 1.96\sigma$$ and 99.8% of them to lie inside $$\pm3.09 \sigma$$. Sometimes these values are substituted by the heuristic "values lying inside 2 / 3 sigmas," which is not precise and works only for the Normal distribution. Still, based on this, we could flag the values outside these bounds and investigate them in order to see if any of them are indeed outliers. Given that the ADAM framework supports different distributions, the heuristic mentioned above is not appropriate. We need to get proper quantiles for each of the assumed distributions. Luckily, this is not difficult to do, because the quantile functions for all the distributions supported by ADAM either have analytical forms or can be obtained numerically. Here is an example in R with the same multiplicative ETSX model and the standardised residuals vs fitted values with the 95% bounds:

plot(adamModelSeat05, 2, level=0.95)

Note that in case of $$\mathcal{IG}$$, $$\Gamma$$ and $$\mathrm{log}\mathcal{N}$$, the function will plot $$\log u_t$$ in order to make the plot more readable. The plot demonstrates that there are outliers, some of which are further away from the bounds. Although the number of outliers is not big, it would make sense to investigate why they happened. Well, we know why - we constructed an incorrect model. Given that we deal with time series, plotting residuals vs time is also sometimes helpful:

plot(adamModelSeat05, 8)

We see that there is no specific pattern in the outliers, they happen randomly, so they appear not because of omitted variables or wrong transformations. We have 9 observations lying outside the bounds, which, given the sample size of 192 observations, means that the 95% interval contains $$\frac{192-9}{192} \times 100 \mathrm{\%} \approx 95.3\mathrm{\%}$$ of observations, which is close to the nominal value. In some cases, the outliers might impact the scale of distribution and lead to wrong standardised residuals, distorting the picture. This is where studentised residuals come into play.
They are calculated similarly to the standardised ones, but the scale of distribution is recalculated for each observation by considering errors on all but the current observation. So, in the general case, this is an iterative procedure which involves looking through $$t=\{1,\dots,T\}$$ and which should in theory guarantee that the real outliers do not impact the scale of distribution. Here is how they can be analysed in R:

par(mfcol=c(1,2))
plot(adamModelSeat05, c(3,9))

In many cases (ours included) the standardised and studentised residuals will look very similar, but in some cases of extreme outliers they might differ, and the latter might show outliers better than the former. Given the situation with outliers in our case, we could investigate when they happen in the original data to better understand whether they need to be taken care of. But instead of manually recording which of the observations lie beyond the bounds, we can get their ids via the outlierdummy method from the package greybox, which extracts either standardised or studentised residuals and flags those observations that lie outside the constructed interval, automatically creating dummy variables for these observations. Here is how it works:

adamModelSeat05Outliers <- outlierdummy(adamModelSeat05,
                                        level=0.95, type="rstandard")

The method returns several objects (see documentation for details), including the ids of outliers:

adamModelSeat05Outliers$id
## [1] 25 66 74 81 85 104 143 156 170

These ids can be used to produce additional plots. For example:

plot(actuals(adamModelSeat05))
points(time(Seatbelts[,"drivers"])[adamModelSeat05Outliers$id],
       Seatbelts[adamModelSeat05Outliers$id,"drivers"], col="red", pch=16)
text(time(Seatbelts[,"drivers"])[adamModelSeat05Outliers$id],
     Seatbelts[adamModelSeat05Outliers$id,"drivers"],
     adamModelSeat05Outliers$id, col="red", pos=2)

Among all these points, there is one special point that happens on observation 170. This is when the law for seatbelts was introduced, and the model cannot capture the change in injuries and deaths correctly.

Remark. As a side note, in R, there are several methods for extracting residuals:

• resid() or residuals() will extract either $$e_t$$ or $$1+e_t$$, depending on the distributional assumptions of the model;
• rstandard() will extract the standardised residuals $$u_t$$;
• rstudent() will do the same for the studentised ones.

The smooth package also introduces rmultistep, which extracts multiple steps ahead in-sample forecast errors. We do not discuss this method here, but we might come back to it later in this textbook.

### 14.4.2 Dealing with outliers

Based on the output of the outlierdummy() method from the previous example, we can construct a model with explanatory variables to interpolate the outliers and neglect their impact on the model (note that the adam() call below is reconstructed from context, as the original line was truncated):

SeatbeltsWithOutliers <- cbind(as.data.frame(Seatbelts[,-c(1,7)]),
                               adamModelSeat05Outliers$outliers)
SeatbeltsWithOutliers$drivers <- ts(SeatbeltsWithOutliers$drivers,
                                    start=start(Seatbelts),
                                    frequency=frequency(Seatbelts))
adamModelSeat06 <- adam(SeatbeltsWithOutliers, "MNM", lags=12,
                        formula=drivers~.)

In order to decide whether the dummy variables help or not, we can use information criteria, comparing the two models:

setNames(c(AICc(adamModelSeat05), AICc(adamModelSeat06)),
         c("ETSX", "ETSXOutliers"))
##         ETSX ETSXOutliers
##     2237.081     2209.273

Comparing the two values above, I would conclude that adding dummies improves the model.
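Before moving on, here is a minimal hand-rolled sketch of what outlierdummy() automated above, written for the Normal case (an assumption for illustration; other distributions need their own quantile functions, as discussed earlier):

# Flag standardised residuals outside the central 95% region and build
# one dummy column per flagged observation.
u <- rstandard(adamModelSeat05)
idOutliers <- which(abs(u) > qnorm(0.975))
outliersManual <- sapply(idOutliers,
                         function(i) as.numeric(seq_along(u) == i))
colnames(outliersManual) <- paste0("outlier", seq_along(idOutliers))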
But instead of including all of them, we could try the model with the dummy only for the suspicious observation 170, which corresponds to the ninth outlier in the list above:

```r
adamModelSeat07 <- adam(SeatbeltsWithOutliers, "MNM", lags=12,
                        formula=drivers~PetrolPrice+kms+
                                front+rear+law+outlier9)
plot(adamModelSeat07, 2)
AICc(adamModelSeat07)
## [1] 2234.612
```

This model is better than the one without any dummies, but slightly worse than the one with all the outliers in terms of AICc, so there are some other dummy variables that improve the fit and might be considered as well, along with the outlier for observation 170. We could continue the exploration, introducing other dummy variables, but in general we should not do that unless we have a good reason for it (e.g. we know that something happened that was not captured by the model).

### 14.4.3 An automatic mechanism

A similar automated mechanism is implemented in the adam() function, which has the outliers parameter, defining what to do with outliers if there are any, with the following three options:

1. "ignore": do nothing;
2. "use": create the model with explanatory variables, as shown in the previous subsection, and see if it is better than the simpler model in terms of an information criterion;
3. "select": create lags and leads of the dummies from outlierdummy() and then select the dummies based on the explanatory variables selection mechanism. Lags and leads are needed for cases when the effect of an outlier is carried over to the neighbouring observations.

Here is how this works in our case:

```r
adamModelSeat08 <- adam(Seatbelts, "MNM", lags=12,
                        formula=drivers~PetrolPrice+kms+
                                front+rear+law,
                        outliers="select", level=0.95)
AICc(adamModelSeat08)
## [1] 3037.296
```

This automatic procedure will form a matrix that includes the original variables together with the outliers, their lags and leads, and will then select those of them that minimise AICc in a stepwise procedure (discussed in Section 15.3). In our case, the function throws away some of the important variables and sticks with some of the outliers. This might also happen because it could not converge to the optimum on each iteration, so increasing maxeval might help. Still, given that this is an automated approach, it is prone to potential mistakes and needs to be treated with care, as it might select unnecessary dummy variables and lead to overfitting. I would recommend exploring the outliers manually, when possible, and not relying too much on the automated procedures.

### 14.4.4 Final remarks

Koehler et al. (2012) explored the question of the impact of outliers on ETS performance in terms of forecasting accuracy. They found that if outliers happen at the end of the time series, then it is important to take them into account in a model. If they happen much earlier, then their impact on the final forecast will be negligible. Unfortunately, the authors did not explore the impact of outliers on the prediction intervals, and based on my experience I can tell that the main impact of outliers is on the width of the interval.

### References

• Koehler, A.B., Snyder, R.D., Ord, J.K., Beaumont, A., 2012. A study of outliers in the exponential smoothing approach to forecasting. International Journal of Forecasting, 28, 477–484. https://doi.org/10.1016/j.ijforecast.2011.05.001
2021-10-27 20:23:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6740779876708984, "perplexity": 826.8373459742173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00206.warc.gz"}
https://mathhelpboards.com/threads/trying-to-investigate-and-find-the-sensitivity-of-function.8623/
# Trying to investigate and find the sensitivity of function

#### akerman
##### New member

I have a question about the function F(x) = sqrt(x). I found that it has a kappa value equal to 1/2. I am not too sure what happens when x = 0; is it just a minimum? But now I am trying to investigate and find the sensitivity of F(x) to errors in x when we use x+ϵ, where ϵ is small.

#### Klaas van Aarsen
##### MHB Seeker
Staff member

Re: trying to investigate and find the sensitivity of function

> I have a question about the function F(x) = sqrt(x). I found that it has a kappa value equal to 1/2. I am not too sure what happens when x = 0; is it just a minimum? But now I am trying to investigate and find the sensitivity of F(x) to errors in x when we use x+ϵ, where ϵ is small.

Welcome to MHB, akerman!

I am assuming that with a kappa value you mean the condition number. If so, then we have:
$$\frac{|\Delta y|}{|y|} = \kappa \frac{|\Delta x|}{|x|}$$
where $y=F(x)$, $\Delta x$ is the error in $x$, and $\Delta y$ is the error in $y$. In other words: $\kappa$ gives the amplification of the relative error.

At $x=0$, $x$ has an infinite relative error (we're dividing by zero), so $\kappa$ is not defined there. Since $F(0)=0$, the relative error of $y$ is also infinite.

We can calculate $\kappa$ with:
$$\kappa = \left| \frac{x \cdot F'(x)}{F(x)} \right|$$
This stems from the first-order Taylor approximation:
$$F(x + \Delta x) \approx F(x) + \Delta x \cdot F'(x)$$
Since we are using an approximation, the end result is also an approximation. In particular, for points where $F'(x)$ or $F''(x)$ is not defined, the relationship will break down. This is in particular the case for your function at $x=0$.

#### akerman
##### New member

Re: trying to investigate and find the sensitivity of function

> I am assuming that with a kappa value you mean the condition number. [...] This is in particular the case for your function at $x=0$.

I still don't get it... So what is the sensitivity of f(x) to errors in x? And if we consider the limit x→0, how many digits can one compute √x to when x is known to an error of 10^−16? Can you give a more detailed explanation? Thanks.

#### Klaas van Aarsen
##### MHB Seeker
Staff member

Re: trying to investigate and find the sensitivity of function

> I still don't get it... So what is the sensitivity of f(x) to errors in x?

Since the derivative of $\sqrt x$ is $\frac 1 {2\sqrt x}$, Taylor's approximation gives us:
$$\sqrt{x+ε} \approx \sqrt{x} + ε \cdot \frac{1}{2\sqrt x}$$
So if the error in $x$ is $ε$, then the error in $\sqrt x$ is approximately $ε \cdot \frac{1}{2\sqrt x}$. The so-called absolute sensitivity to errors in x is $\frac{1}{2\sqrt x}$, since an error gets multiplied by this amount.
The relative sensitivity is $\frac 1 2$, since relative errors get multiplied by this amount. A relative error is the error relative to the value measured. For $x$ this is $ε / x$.

> And if we consider the limit x→0, how many digits can one compute √x to when x is known to an error of 10^−16?

I do not understand your question. In the limit x→0, √x is simply 0. However, if x is a regular non-zero value known with an error of $10^{−16}$, then the resultant √x will have an absolute error of $10^{−16} \cdot \frac{1}{2\sqrt x}$ and a relative error of $0.5 \cdot 10^{−16}$.

> Can you give a more detailed explanation? Thanks.

Where would you like more details?

#### akerman
##### New member

The latest answer is just what I was looking for. So, having a question such as "how many digits can one compute √x to when x is known to an error of 10^−16?", can I simply say that the absolute error is $10^{-16} \cdot \frac{1}{2\sqrt x}$ and the relative error is $0.5 \cdot 10^{-16}$?

Also, if we have x = x + ε, can I specify exactly what ε is for y = √x? Or is it just an assumption that it is a small number?

#### Klaas van Aarsen
##### MHB Seeker
Staff member

> So, having a question such as "how many digits can one compute √x to when x is known to an error of 10^−16?", can I simply say that the absolute error is $10^{-16} \cdot \frac{1}{2\sqrt x}$ and the relative error is $0.5 \cdot 10^{-16}$?

The term "how many digits" is somewhat confusing. It can typically mean either how many digits behind the decimal point, or how many significant digits. Can you clarify which one is intended? Similarly, when you say "error", do you mean an absolute error or a relative error?

If you have 16 significant digits, that means that your relative error is $10^{-16}$. In this case the resultant relative error is $0.5 \cdot 10^{-16}$, meaning you have slightly over 16 significant digits (usually treated as just 16). What you write about the errors is correct, assuming your initial error is an absolute error. However, that is apparently not what is being asked, since the question asks "how many digits".

> Also, if we have x = x + ε, can I specify exactly what ε is for y = √x? Or is it just an assumption that it is a small number?

"Exactly" is a strong word. If you want to have the "exact" error in y, you need to calculate $\sqrt{x+ε}-\sqrt x$. If you are satisfied with the approximate error, you can use the formulas I gave.

#### akerman
##### New member

Now I got it. Thanks for the help. I believe you are the only person on a number of forums who could explain and answer it.
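A quick numerical check of the error-propagation formulas from the thread above (this sketch is ours, not part of the original discussion; the values of x and eps are illustrative):

```r
# Compare the exact change in sqrt(x) against the first-order
# Taylor estimate eps / (2 * sqrt(x)) for a small perturbation eps.
x   <- 2
eps <- 1e-8
exactError  <- sqrt(x + eps) - sqrt(x)
taylorError <- eps / (2 * sqrt(x))
print(c(exact = exactError, taylor = taylorError))

# The relative error in sqrt(x) is about half the relative error in x,
# matching the condition number kappa = 1/2.
print(c(relIn = eps / x, relOut = exactError / sqrt(x)))
```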
2022-05-22 01:13:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398219585418701, "perplexity": 592.6203047535357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00414.warc.gz"}
https://classpad.net/user-guide/graphing.html
## 3-1. Basic Graphing Operations

### 3-1-1. To draw a graph and create a numerical table

1. Click anywhere on the Paper.
• This displays the Sticky Note menu.
2. Click .
• This displays a Graph Sticky Note.
3. At the bottom of the Graph Sticky Note, click .
• This displays a Graph Function Sticky Note.
4. Use the soft keyboard to input a function. Example: $y = x^2$
5. On the soft keyboard, click [Execute].
• This draws the graph on the Graph Sticky Note.
6. On the Graph Function Sticky Note, click .
• This displays a Table Sticky Note.

### 3-1-2. To input a numerical table and plot it on a graph

1. At the bottom of the Graph Sticky Note, click to display a Statistical Data Sticky Note.
2. Input the data values you want to plot on the Statistical Data Sticky Note. Example: Input the following for column A, rows 1 through 5: $0.5$, $1.2$, $2.4$, $4.0$, $5.2$. Input the following for column B, rows 1 through 5: $-2.1$, $0.3$, $1.5$, $2.0$, $2.4$.
• For information about how to input data values into a Statistical Data Sticky Note, see "4-1. Basic Statistical Calculation Operations".
3. Drag from cell A1 to cell B5 (the range of data you want to plot).
• This selects the range of cells from cell A1 through cell B5.
4. On the soft keyboard, click [Graph].
5. Click [Scatter Plot].
• This draws a scatter plot.
6. At the bottom of the Graph Sticky Note, click .
• This displays a Graph Function Sticky Note.
7. Input the function $y=0.8x-1.4$ on the Graph Function Sticky Note.
8. On the soft keyboard, click [Execute].
• This draws the graph.

## 3-2. Drawing Multiple Graphs

### 3-2-1. To graph multiple functions

1. Click an existing Graph Function Sticky Note to select it.
2. At the bottom of the Graph Function Sticky Note, click .
• This adds a new Graph Function Sticky Note.
3. Use the soft keyboard to input a function. Example: $y = x + 2$
4. On the soft keyboard, click [Execute].
• This draws the new graph over the existing graph.
• You can overlay up to 20 graphs by repeating steps 2 through 4 of this procedure.

### 3-2-2. To delete a particular function

1. Click the Graph Function Sticky Note to select it.
2. Click on the Graph Function Sticky Note where the function you want to delete is input.
• This deletes the Graph Function Sticky Note, and also deletes the function and its corresponding graph.

#### Note

• If you just want to delete the function expression and graph without deleting the Graph Function Sticky Note, select the function expression input on the Graph Function Sticky Note and then press the [Delete] key.

## 3-3. Graphing a Rectangular Coordinate Equation ($y=$, $x=$)

### 3-3-1. To graph a y= rectangular coordinate equation

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input the y= rectangular coordinate function. Example: $y=x^2$
3. On the soft keyboard, click [Execute].

### 3-3-2. To graph an x= rectangular coordinate equation

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input the x= rectangular coordinate function. Example: $x=3$
3. On the soft keyboard, click [Execute].

## 3-4. Graphing a Polar Coordinate Equation ($r=$)

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input the polar coordinate function. Example: $r=5\sin(3\theta)$
3. On the soft keyboard, click [Execute].

## 3-5. Graphing a Parametric Function
1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. On the soft keyboard, click .
• This displays the input format for a parametric function.
3. Input the parametric function. Example: $x_{t}=\sin(t)$, $y_{t}=\cos(t)$
4. On the soft keyboard, click [Execute].

#### Note

• You can change the range of t and draw the resulting graph.

## 3-6. Graphing an Inequality

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input an inequality. Example: $y>x^2-2x-6$
3. On the soft keyboard, click [Execute].
• Graphing is supported for inequalities expressed using the following forms: $y \gt f(x)$, $y \lt f(x)$, $y \geq f(x)$, $y \leq f(x)$

#### Note

• < and > notation inequalities are graphed using a broken line; <= and >= notation inequalities are graphed using a solid line. A broken line means that values on the line are not included in the solutions, while a solid line means that values on the line are included in the solutions.

### 3-6-1. To specify the shading range

You can use this procedure to specify the shading range when simultaneously graphing multiple inequalities.

1. Click the Menu button () of the Graph Sticky Note.
2. Use the Inequality Plot menu item to select [Union] or [Intersection].
• Union ... Shades all areas where the conditions of each of the graphed inequalities are satisfied.
• Intersection ... Shades only areas where the conditions of all of the graphed inequalities are satisfied.

#### Note

• If the conditions of an inequality are not satisfied and the Inequality Plot menu item setting is Union or Intersection, no shading is performed.

## 3-7. Drawing Circle, Elliptic, and Hyperbolic Graphs

### 3-7-1. To draw a circle graph

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input a circle equation. Example: $(x-1)^2+(y-1)^2=2^2$
3. On the soft keyboard, click [Execute].

### 3-7-2. To draw an elliptic graph

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input an elliptic equation. Example: $$\cfrac{(x-1)^2}{4^2}+\cfrac{(y-2)^2}{2^2}=1$$
3. On the soft keyboard, click [Execute].

### 3-7-3. To draw a hyperbolic graph

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Use the soft keyboard to input a hyperbolic equation. Example: $$\frac{(x-1)^2}{2^2}-\frac{(y-1)^2}{2^2}=1$$
3. On the soft keyboard, click [Execute].

## 3-8. Using a List of Values to Draw Multiple Graphs

You can use a list as a coefficient in a function and simultaneously draw multiple graphs.

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Input a function expression with a form that includes a list in a coefficient. Example: $y=\{1,2,3\}x$
3. On the soft keyboard, click [Execute].

## 3-9. Specifying a Range to Draw a Graph

You can specify a range for drawing a graph. To do so, input a function using the syntax below.

<function>|<inequality specifying draw range>

1. At the bottom of the Graph Sticky Note, click to create a Graph Function Sticky Note.
2. Input a function expression with a draw range specification. Example: $y=x|-5<x<5$
3. On the soft keyboard, click [Execute].

## 3-10. Using Trace

With trace, you can click a graph line to plot a point on it and display the coordinates at that point. You can also drag the point along the graph line.

1. Draw a graph. Example: $y=x$
2. Click the Graph Sticky Note to select it.
3. Click the graph to select it.
• This causes the line of the selected graph to become thicker.
4. Click the graph line.
• This plots a point on it and displays the corresponding coordinates.
5. Drag the point along the graph line.
• You can move the point along the graph.

#### Note

• You can also plot multiple points on the graph line.

### 3-10-1. To delete a point plotted on a graph line

1. Click the point you want to delete.
• This displays in the upper right corner of the point coordinate display box.
2. Click the "×".
• This deletes the point.

### 3-10-2. To use coordinate values in a calculation

1. Draw a graph. Example: $y=x^2$
2. Click the graph line to plot a point on it.
3. Click the point again to select it.
4. At the bottom of the Graph Sticky Note, click .
• This displays the Property menu.
5. Click [Show Label].
• This creates a Coordinates Sticky Note.
6. Create a Calculate Sticky Note.
7. On the Calculate Sticky Note, input the expression using the coordinates: $x_{1}+y_{1}$.
8. On the soft keyboard, click [Execute].
• This displays the expression's calculation result using the coordinate values.

#### Note

• You can repeat steps 2 and 5 above and create multiple Coordinates Sticky Notes, if you like.
• Instead of clicking in step 4, you can also right-click a plotted point to display the Property menu.
• Creating a Coordinates Sticky Note adds a ${\rm P}_n$ ($n$ = serial number) label to the corresponding point. The ${\rm P}_n$ x-coordinate value is stored to variable $x_n$, and the y-coordinate value is stored to variable $y_n$.
• Deleting a Coordinates Sticky Note also deletes the corresponding points.

## 3-11. Basic Graph Analysis Operations

Clicking on a graph you have drawn will automatically perform the appropriate analysis (for example: roots, maximum value, minimum value, directrix, axis of symmetry) depending on the graph type.

1. Draw a graph. Example: $y=x^2-3x-2$
2. Click the Graph Sticky Note to select it.
3. Click the Menu button ().
4. Select the [Conics] check box.
• This enables graph analysis that is characteristic of conics (directrix, axis of symmetry, focus, etc.).
5. Click the Graph Sticky Note, and then click the graph.
• This analyzes the graph and displays dots (●) at the analysis result coordinate points.
• The directrix and axis of symmetry are shown as broken grey lines.
6. Click a dot (●).
• This displays its coordinates.

## 3-12. Creating a Table

### 3-12-1. To create a table from a function

1. Draw a graph. Example: $y=x^2$
2. On the Graph Function Sticky Note, click .
• This creates a Table Sticky Note. The table generated from the function will appear in the Table Sticky Note.

### 3-12-2. To create a table by plotting points on a graph

1. Draw a graph. Example: $y=x^2$
2. Click the Graph Sticky Note to select it.
3. Sequentially click different locations on the graph to plot multiple points.
4. Sequentially click the plotted points to select them.
5. At the bottom of the Graph Sticky Note, click .
• This displays the Property menu.
6. Click [Convert To Table].
• This displays a Table Sticky Note. The coordinate values of the plotted points you selected in step 4 above will be input on the Table Sticky Note.

#### Note

• Instead of clicking in step 5, you can also right-click a plotted point to display the Property menu.

## 3-13. Editing a Table

### 3-13-1. To add a row to a table

1. Click the Table Sticky Note.
2. Input a value into the bottom-most cell of the independent variable column (lower left cell).
• This adds a row at the bottom of the table.
3. On the soft keyboard, click [Execute].

### 3-13-2. To delete a row from a table

1. Click the Table Sticky Note.
2. Right-click the cell of the independent variable column whose row you want to delete.
3. Click [Delete].
• This deletes the row.

### 3-13-3. To change a value in a table

1. On the Table Sticky Note, click the value you want to change.
2. Input the new value.
3. On the soft keyboard, click [Execute].

## 3-14. Using a Slider

You can change the value of a variable in a graph expression (for example, the value of a in $y=a \cdot x^2$) from the graph screen and observe how the change affects the graph.

1. Input a function that includes variables into a Graph Function Sticky Note. Example: $y= a \cdot x^2-b \cdot x \hspace{10mm} y=a \cdot x+b$
• This will display sliders for changing the values assigned to the variables.
2. Click one of the arrow buttons (< or >) on either end of the slider.
• This will change the corresponding value and re-draw the graph accordingly.

#### Note

• You can use a slider to change the values (variable value, lower limit value, upper limit value, step value) displayed above the slider. Click the value you want to change.
• Clicking the arrow () below a slider starts automatic change of the variable value and re-drawing of the graph with the new values (animation). To stop the animation, click .
• and indicate the animation playback style. indicates that playback will go left to right, and then repeat left to right. indicates that playback will go left to right, and then right to left. Click or to toggle between playback types.

## 3-15. Configuring Graph Display Settings

1. Click the Graph Sticky Note.
2. Click the Menu button ().
3. Configure display settings and adjust the display range.
• Axes: Select this check box to show the coordinate axes in the draw area.
• Numbered: Select this check box to show the tick marks in the draw area. To be able to select this check box, you first need to select the "Axes" check box.
• Grid: Select this check box to show a grid in the draw area.
• Labels: Select this check box to show coordinate axis names on the graph. You can change an axis name, if you want.
• Window: Select this check box to specify a display range that is optimized for statistical data.
• Stat-Auto: Selecting this check box automatically optimizes the graph display range and draws the graph.
• π: Selecting this check box changes the scale of the x-axis to π.
• X: Specifies the display range of the x-axis.
• X Scale: Specifies the interval between tick marks on the x-axis. If the π check box is selected, clicking this field displays a menu that can be used to change the display range setting.
• Y: Specifies the display range of the y-axis.
• Y Scale: Specifies the interval between tick marks on the y-axis.
• Inequality Plot: Use this setting to specify the shading range when simultaneously graphing multiple inequalities. Intersection: Shades only areas where the conditions of all of the graphed inequalities are satisfied. Union: Shades all areas where the conditions of each of the graphed inequalities are satisfied.
• Coordinates: Configures the coordinate value display setting. Decimal: Displays coordinate values using decimal fractions. Standard: Displays coordinate values using expressions.
• t: Select this check box to display parametric graph coordinate values as t values.
• ($r$, $\theta$): Select this check box to display polar graph coordinate values as $r$ and $\theta$ values.
• Conics: Select this check box to display analysis results that are characteristic of conics (directrix, axis of symmetry, focus, etc.).

## 3-16. Zooming a Graph

### 3-16-1. To zoom a graph

1. Click the Graph Sticky Note.
2. Move the mouse pointer to the location you want to zoom.
3. Rotate the scroll wheel of your mouse to zoom the graph.

#### Note

• If you are using a smartphone or tablet, you can zoom by pinching in or pinching out.
• To return to the default display, click in the lower-left corner of the Graph Sticky Note. On the menu that appears, select [Default Zoom].

### 3-16-2. To configure the graph display range setting (Zoom Options)

1. Click a Graph Sticky Note.
2. In the lower-left corner of the Graph Sticky Note, click .
• This displays a zoom options menu to the right of the Graph Sticky Note.
• Default Zoom: Initial default display zoom according to the size of the Graph Sticky Note.
• Auto Zoom: Automatically sets the range so the features* of the graph fit within the drawing range. (* For example, the x- or y-axis intercepts, the intersection of two graphs, the inflection points, limits, etc.)
• Trig Zoom: Sets a new scale according to the current angle unit setting (degrees, radians, grads).
• Square Zoom: Automatically corrects the value on the y-axis side so the x-axis and y-axis scales have a ratio of 1:1.
3. Select the zoom option you want from the menu.
• The graph display range and the x-axis and y-axis scales are set automatically according to the zoom option you select. (Figure: example display when "Auto Zoom" is selected.)

## 3-17. Panning a Graph

1. Click the Graph Sticky Note.
2. Move the mouse pointer to the location you want to pan.
3. Drag the graph to pan it.

#### Note

• To return to the default display, click in the lower-left corner of the Graph Sticky Note. On the menu that appears, select [Default Zoom].

## 3-18. Changing the Color of a Graph

1. Click the drag handle () of the Graph Function Sticky Note.
2. On the Color Palette, select the desired color.
• This changes the graph color.

## 3-19. Hiding a Graph

1. Draw a graph.
2. Click the drag handle () of the Graph Function Sticky Note.
3. Click [Hide].
• This hides the graph of the selected function.

#### Note

• To re-display the graph, click [Show] in step 3 above.

## 3-20. Displaying a Background Image

1. At the bottom of the Graph Sticky Note, click .
• This creates an Image Sticky Note.
2. Click .
• This displays a dialog box for opening a file.
3. Select the image file you want and then click [Open].
4. On the Image Sticky Note, click [OK].
• This displays the image selected with the Graph Sticky Note.
• You can configure the settings below for an Image Sticky Note.
• CenterX: Specifies the x-axis value of the image center.
• CenterY: Specifies the y-axis value of the image center.
• Angle: Specifies the image rotation angle.
• Width: Specifies the image width.
• Height: Specifies the image height.
• Position: Front ... Displays the image in front of the coordinate axes and the grid. Back ... Displays the image in back of the coordinate axes and the grid.

#### Note

• Instead of performing steps 2 and 3, you can also drop the desired image file onto the Image Sticky Note.
• You can create multiple Image Sticky Notes by clicking the Image Sticky Note's . You can also display multiple images by repeating steps 1 through 4 for the newly created Image Sticky Note.

## 3-21. Plot Function

### 3-21-1. To plot points on a graph
1. At the bottom of the Graph Sticky Note, click .
• This creates a Plot Sticky Note.
2. On the Plot Sticky Note, input the x-coordinate and y-coordinate values of the points you want to plot.
• This plots points at the coordinates you input.

#### Note

• You can configure the settings shown below by clicking on a Plot Sticky Note.
• Plot color: Specifies the color of the plotted points.
• Labels: Specifies the label names.
• Lock: Locks the selected cell(s). If the selected cells are locked, this item unlocks the cells.

### 3-21-2. To display the Coordinates Sticky Note of a point

1. Click a plotted point.
• This displays the coordinates of the point.
2. Click the displayed coordinate values.
3. Click .
• This displays the Property menu.
4. Click [Show Label].
• This displays a Coordinates Sticky Note.

#### Note

• Instead of clicking in step 3, you can also right-click a displayed coordinate to display the Property menu.

## 3-22. Inputting Text

1. At the bottom of the Graph Sticky Note, click .
• This enables text input and displays the text palette.
2. Use the text palette to specify the text color and size.
3. Click the location where you want to input text.
4. Input a text string.

#### Note

• Clicking input text selects it. You can perform the operations below on selected text.
• You can reposition text by dragging it.
• Clicking will display the text palette, which can be used to change the color and size of the text.
• To delete text, click .
• Double-clicking input text selects it for editing.

## 3-23. Determining the Integral Value and Area of a Region

1. Draw a graph.
2. Select the graph and then click .
3. Click "Integral".
• This displays an Integral/Area Sticky Note.
4. Input the lower limit value and upper limit value of the integration.
5. On the soft keyboard, click [Execute].
• This calculates the integral and area, and shades the integration region.

## 3-24. Drawing a Line Tangent or Normal to a Graph

### 3-24-1. To draw a line tangent to a graph

Example: To draw a line that is tangent to the graph $y=0.5x^2$

1. Graph $y=0.5x^2$.
2. Click the Graph Sticky Note to select it.
3. Click any point on the graph.
• This selects the graph and causes the line of the graph to become thicker.
4. Click .
5. Click "Tangent".
• This draws a line tangent to the graph $y=0.5x^2$ and simultaneously creates a Graph Function Sticky Note that corresponds to the tangent.
• The coordinates of the point of tangency (contact point) will be indicated by .
6. Drag the contact point ().
• You can reposition the location of the point of tangency.

#### Note

• If you specify coordinates on a graph for which a tangent line cannot be defined, a not-defined error occurs. If a not-defined error occurs, the tangent and the tangent graph expression are deleted from the Sticky Note. Example: when the coordinates $(0, 0)$ are specified on the graph "$y={\rm abs}(x)$".

### 3-24-2. To draw a line that is normal to a graph

Perform the same steps as those under "3-24-1. To draw a line tangent to a graph", except as noted below.

• In step 5, click "Normal Line" instead of "Tangent".

### 3-24-3. To delete a contact point

1. Click the contact point to select it.
• This displays in the upper right corner of the contact point coordinate display box.
2. Click .
• This will delete the contact point. At this time the tangent line and its Graph Sticky Note will remain on the display, but without a contact point.

## 3-25. Graph Analysis Functions

• Trace: Plots a point on the currently selected graph and displays its coordinates.
You can also drag the point along the graph line.
• x-Intercept: Displays the points where the graph intercepts the x-axis. Click a point to show its coordinates.
• Min: Displays a point at the minimum value of a graph. Click the point to show its coordinates.
• Max: Displays a point at the maximum value of a graph. Click the point to show its coordinates.
• y-Intercept: Displays the point where the graph intercepts the y-axis. Click a point to show its coordinates.
• Intersection: Displays the points of intersection of two graphs. Click a point to show its coordinates.
• Focus: Displays the focus of a parabolic graph, elliptic graph, or hyperbolic curve graph. Click the point to show its coordinates.
• Vertex: Displays the vertex of a parabolic graph, elliptic graph, or hyperbolic curve graph. Click a point to show its coordinates.
• Directrix: Displays the directrix of a parabolic graph. Clicking a directrix displays the directrix expression (x=, y=).
• Symmetry: Displays the axis of symmetry of a parabolic graph. Clicking the axis of symmetry displays the axis of symmetry expression (x=, y=).
• Latus Rectum Length: Displays the length of the latus rectum of a parabolic graph. (The value is displayed after "LatusRectum=".)
• Center: Displays the center of a circle graph, elliptic graph, or hyperbolic curve graph. Click the point to show its coordinates.
• Radius: Displays the radius of a circle graph. Click the radius to show its length ($r=$).
• Eccentricity: Displays the eccentricity of a parabolic graph, elliptic graph, or hyperbolic curve graph. (The value is displayed after "$e=$".)
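As a worked cross-check of what these analysis functions report (our illustration, not part of the ClassPad guide), take the function $y=x^2-3x-2$ from Section 3-11; completing the square recovers the conic features directly:

```latex
% Worked example (ours): conic features of y = x^2 - 3x - 2.
y = x^2 - 3x - 2 = \left(x - \tfrac{3}{2}\right)^2 - \tfrac{17}{4}
% Vertex: (3/2, -17/4); axis of symmetry: x = 3/2; minimum: y = -17/4.
% Writing (x - 3/2)^2 = 4p(y + 17/4) gives 4p = 1, so p = 1/4, hence
% focus: (3/2, -4); directrix: y = -9/2; latus rectum length: |4p| = 1.
% x-intercepts: x = (3 \pm \sqrt{17})/2; y-intercept: (0, -2).
```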
2022-08-08 13:43:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4041759967803955, "perplexity": 3453.2180872360414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570827.41/warc/CC-MAIN-20220808122331-20220808152331-00517.warc.gz"}
https://jovianlin.io/confusion-matrix/
# Confusion Matrix

Are you confused by the confusion matrix? This should help:

Generally:

• Each row represents an actual class.
• Each column represents a predicted class.
• For the rows and columns:
• Start with 0 coz NO = 0.
• End with 1 coz YES = 1.
• i.e. think in ascending order (zero ➔ one).

Regarding each of the 4 elements in the confusion matrix:

• True Positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
• True Negatives (TN): We predicted no, and they don't have the disease.
• False Positives (FP): We predicted yes, but they don't actually have the disease. (Also known as a Type I error.)
• False Negatives (FN): We predicted no, but they actually do have the disease. (Also known as a Type II error.)

Check out Kevin's Simple Guide to Confusion Matrix for more details.
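To make the four cells concrete, here is a small R sketch (ours, not from the post; the label vectors are made-up illustrations) that tabulates actual vs predicted classes in the same row/column order described above:

```r
# Binary labels coded 0 = NO, 1 = YES (hypothetical example data).
actual    <- c(0, 0, 1, 1, 1, 0, 1, 0)
predicted <- c(0, 1, 1, 0, 1, 0, 1, 1)

# table() puts actual classes in rows and predicted classes in columns,
# in ascending order (0 then 1), matching the layout described above.
cm <- table(actual, predicted)
print(cm)

TN <- cm["0", "0"]  # predicted no,  actually no
FP <- cm["0", "1"]  # predicted yes, actually no  (Type I error)
FN <- cm["1", "0"]  # predicted no,  actually yes (Type II error)
TP <- cm["1", "1"]  # predicted yes, actually yes
```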
2018-06-19 19:35:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5963744521141052, "perplexity": 2714.236928148191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863119.34/warc/CC-MAIN-20180619193031-20180619213031-00144.warc.gz"}
https://www.finedictionary.com/Town.html
# Town

## Definitions

WordNet 3.6:

• n town: the people living in a municipality smaller than a city. "the whole town cheered the team"
• n town: an urban area with a fixed boundary that is smaller than a city. "they drive through town on their way to work"
• n town: an administrative division of a county. "the town is responsible for snow removal"
• n Town: United States architect who was noted for his design and construction of truss bridges (1784-1844).

### Additional illustrations & photos:

• A Pueblo town
• Plan of a part of the ancient town of Kahun
• The Town Hall, Godalming
• The town of Cuffies
• Old Market Square, Upper Town
• There'll Be a Hot Time in the Old Town To-night
• No. 141, Bayham Street, Camden Town, where the Dickens family lived in 1823
• The High St., Town Malling
• The Sportsman, Tonbridge
• At Rochester
• Town Hall, High St., Tonbridge

Webster's Revised Unabridged Dictionary:

• Interesting fact: There is a town in Norway called "Hell".
• Town: A farm or farmstead; also, a court or farmyard.
• Town: A township; the whole territory within certain limits, less than those of a county.
• Town: Any collection of houses larger than a village, and not incorporated as a city; also, loosely, any large, closely populated place, whether incorporated or not, in distinction from the country, or from rural communities. "God made the country, and man made the town."
• Town: Any number or collection of houses to which belongs a regular market, and which is not a city or the see of a bishop.
• Town: Formerly: an inclosure which surrounded the mere homestead or dwelling of the lord of the manor [Obs.]; the whole of the land which constituted the domain [Obs.]; a collection of houses inclosed by fences or walls [Obs.].
• Town: The body of inhabitants resident in a town; as, the town voted to send two representatives to the legislature; the town voted to lay a tax for repairing the highways.
• Town: The court end of London; commonly with "the".
• Town: The metropolis or its inhabitants; as, in winter the gentleman lives in town; in summer, in the country. "Always hankering after the diversions of the town." "Stunned with his giddy larum half the town."

Century Dictionary and Cyclopedia:

• Interesting fact: There is a town named Dildo in the province of Newfoundland, Canada.
• n town: An inclosure; a collection of houses inclosed by a hedge, palisade, or wall for safety; a walled or fortified place.
• n town: Any collection of houses larger than a village; in a general sense, a city or borough: as, London town; within a mile of Edinburgh town: often opposed to country, in which use it is usually preceded by the definite article. It is frequently applied absolutely, and without the proper name of the place, to a metropolis or county town, or to the particular city in which or in the vicinity of which the speaker or writer is; as, to go to town; to be in town, London being in many cases implied by English writers.
• n town: A large assemblage of adjoining or nearly adjoining houses, to which a market is usually incident, and which is not a city or bishop's see.
• n town: A tithing; a vill; a subdivision of a county, as a parish is a subdivision of a diocese.
• n town: The body of persons resident in a town or city; the townspeople: with "the".
• n town: In legal usage in the United States:
• n town: In many of the States, one of the several subdivisions into which each county is divided, more accurately called, in the New England States and some others, township.
• n town: In most of the States, the corporation, or quasi corporation, composed of the inhabitants of one of such subdivisions, in some States designated by law as a township or incorporated township or township organization.
• n town: In a few of the States, a municipal corporation (not formed of one of the subdivisions of a county, but having its own boundaries like a city) with less elaborate organization and powers than a city. The word town is popularly used both in those senses, and also in the sense of 'a collection of dwellings,' which is characteristic of most towns. Thus, the name of a town, such as Farmington, serves to indicate, according to the context, either the geographical area, as in the phrase "the boundaries of the town" (indicated on maps by a light or dotted line), or the body politic, as in speaking of the town and county highways respectively, or the central settlement from which distances are usually measured, as on the sign-boards. When used in the general sense of a densely populated community, the boundaries are usually not identical with those of any primary division of the county, but include only the space occupied by agglomerated houses.
• n town: A farm or farmstead; a farm-house with its connected buildings.
• n town: An officer of a parish who collects moneys from the parents of illegitimate children for the maintenance of the latter.
• n town: Synonyms: Hamlet, Village, Town, City. A hamlet is a group of houses smaller than a village. The use of the other words in the United Kingdom is generally more precise than it is in the United States, but all are used more or less loosely. A village may have a church, but has generally no market; a town has both, and is frequently incorporated; a city is a corporate town, and is or has formerly been the see of a bishop, with a cathedral. In the United States a village is smaller than a town, and a town usually smaller than a city; there are incorporated villages as well as cities. Some places incorporated as cities are smaller than many that have only a town organization.
• town: Of, pertaining to, or characteristic of a town; urban: as, town life; town manners.
• town: The town prison; a bridewell.
• town: A poorhouse.
• town: A house or mansion in town, as distinguished from a country residence.

Chambers's Twentieth Century Dictionary:

• Interesting fact: Budweiser beer is named after a town in Czechoslovakia.
• n Town: a place larger than a village, not a city; the inhabitants of a town.

## Quotations

• Warwick Deeping: "I spent a year in that town, one Sunday."
• Sam Walton: "There's a lot more business out there in small town America than I ever dreamed of."
• Edward M. Forster: "Towns are excrescences, gray fluxions, where men, hurrying to find one another, have lost themselves."
• Will Rogers: "So live that you wouldn't be ashamed to sell the family parrot to the town gossip."
• Sharon Stone: "If you have a vagina and an attitude in this town, then that's a lethal combination."
• Source Unknown: "It's a mining town in lotus land."

## Idioms

• Ghost town: A ghost town is a town that has been abandoned or is in decline and has very little activity.
• New sheriff in town: This is used when a new authority figure takes charge.
• Paint the town red: If you go out for a night out with lots of fun and drinking, you paint the town red.
• Talk of the town: When everybody is talking about particular people and events, they are the talk of the town.
## Etymology

Webster's Revised Unabridged Dictionary: OE. toun, tun, AS. tun, inclosure, fence, village, town; akin to D. tuin, a garden, G. zaun, a hedge, fence, OHG. zun, Icel. tun, an inclosure, homestead, house, Ir. & Gael. dun, a fortress, W. din. Cf. Down (adv. & prep.), Dune; tine, to inclose.

Chambers's Twentieth Century Dictionary: A.S. tún, an enclosure, town; Ice. tún, an enclosure, Ger. zaun, a hedge.

## Usage

### In literature:

• It was within the limits of the present town of Middleborough. "King Philip" by John S. C. (John Stevens Cabot) Abbott
• We had murthers in the town an' all round the town. "Ireland as It Is" by Robert John Buckley (AKA R.J.B.)
• Strolled through town of Sukhur. "The Last Voyage" by Lady (Annie Allnutt) Brassey
• The Dean did exactly as he had said with reference to the house in town. "Is He Popenjoy?" by Anthony Trollope
• We haven't any theatre in this 'ere town, and don't have much dancing. "Daughters of the Revolution and Their Times" by Charles Carleton Coffin
• The town has been pinched between the steep hills, and forced to straggle back for miles along the harbour inlet. "Westward with the Prince of Wales" by W. Douglas Newton
• Go from town to town, from house to house. "Friars and Filipinos" by Jose Rizal
• We were principally in town, living in very good style. "The Complete Project Gutenberg Works of Jane Austen" by Jane Austen
• The Zane family home was here long after Wheeling became a town. "Boys' Book of Frontier Fighters" by Edwin L. Sabin
• Town after town in Plymouth Colony of southeastern Massachusetts was laid in ashes by fierce surprise attacks. "Boys' Book of Indian Warriors" by Edwin L. Sabin

### In poetry:

• My only Love is always near, / In country or in town / I see her twinkling feet, I hear / The whisper of her gown. ("The Unrealised Ideal" by Frederick Locker-Lampson)
• There was a jolly Cobler / Who lived in Boston Town / He work'd the Sun into the Sky / And then he work'd it down. ("Song (Number 1)" by Royall Tyler)
• Concealing unrepentantly / And trimming you in white, / How often he has brought you home / Into the town at night! ("First Snow" by Boris Pasternak)
• When you come to London Town, / (Grieving-grieving!) / With the others grieving. ("London Stone" by Rudyard Kipling)
• King of a Realm of Magic, / He was the fool of the town, / Hiding the ache of the tragic / Under the grin of the clown. ("The Poet's Town" by John Gneisenau Neihardt)
• So up they walked through Boston town, / And met a maiden fair, / A little basket on her arm / So snowy-white and bare. ("Kathleen" by John Greenleaf Whittier)

### In news:

• Sylva's Town Board named twenty-five-year-old Jackson County native Paige Roberson as the new town manager last Thursday.
• To hope-dealer Chi Town, to factory-life Chi Town.
• The Camillus Town Board voted 7-0 Tuesday to hold a public hearing on whether to abolish the receiver of taxes position and give its duties to the town clerk.
• It's a football town in the fall, a baseball town in the spring and a very slow town in the middle of the summer.
• The Trenton Town Board gathered Wednesday for their regular meeting, but without Town Supervisor Mark Scheidelman following the child sex abuse allegations.
• I attended the Old Town Chinatown Neighborhood Association's town meeting at Portland Development Commission headquarters in Old Town.
• Town of Anthony trustees held a special meeting to discuss the future of their town clerk Friday afternoon, but were unable to conduct any business due to overcrowding.
• The St Armand town council has named a new town clerk.
• Ida's town clerk has been arrested for stealing from the town's coffers.
• The Thorsby Town Council approved Crystal Smith as the new town clerk during Monday's meeting.
• Harrietstown town clerk Patricia Gillmett, center, at a recent meeting of the town board.
• Brett Carlsen/The New York Times: In this September photo, Rose Marie Belforti, the elected town clerk, is shown working in her office at the Ledyard town hall.
• ROTTERDAM - Eunice Esposito has resigned from her position as Town Clerk for Rotterdam, effective July 31, after residents had publicly questioned her absence from Town Hall.
• Councilor Nathalie Stroup and Town Clerk Sarah Luckie terminated their careers with the town that has seen two other officials resign in a year.
• In what's become the traditional start of the town election calendar, the town clerk's office will hold an information night for those considering a run for office in November.

### In science:

• The black circles show the current (2006) position of the main town centres. (Random planar graphs and the London street network)
• While Schawlow and Townes had K = 1, appropriate for a nearly closed cavity, it was later realized (Pet79; Sie89) that an open cavity has an enhancement factor K ≥ 1 called the "Petermann factor". (Applications of random matrix theory to condensed matter and optical physics)
• Schawlow-Townes expectations, while the transition from $g^{(2)}(0) = 2$ to unity is not abrupt and thus does not permit a more precise determination of the threshold than the L − L curve. (Definition of the stimulated emission threshold in high-$\beta$ nanoscale lasers through phase-space reconstruction)
• As then studied by Townes & Melnick (1990), with such levels of atmospheric water vapour the sub-mm bands from 350 μm to 1 mm are open virtually continuously. (Astronomy in Antarctica)
• Townes CH, Melnick G (1990) Atmospheric transmission in the far-infrared at the South Pole and astronomical applications. (Astronomy in Antarctica)
2021-06-20 03:51:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17798149585723877, "perplexity": 11040.367550480323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487655418.58/warc/CC-MAIN-20210620024206-20210620054206-00002.warc.gz"}
https://gmatclub.com/forum/which-of-the-following-is-the-approximation-of-230485.html?sort_by_oldest=true
# Which of the following is the approximation of

Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 8992
GMAT 1: 760 Q51 V42
GPA: 3.82

Which of the following is the approximation of (12 Dec 2016, 00:27)

Difficulty: 45% (medium). Question Stats: 65% (02:12) correct, 35% (02:15) wrong, based on 87 sessions.

Which of the following is the approximation of $$(4^4+8^6)/(4^8+16^8)$$?

A. $$2^{-12}$$
B. $$2^{-14}$$
C. $$2^{-16}$$
D. $$2^{12}$$
E. $$2^{14}$$

Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 10566
Location: United States (CA)

Re: Which of the following is the approximation of (13 Dec 2016, 16:46)

MathRevolution wrote:
> Which of the following is the approximation of $$(4^4+8^6)/(4^8+16^8)$$?
> A. $$2^{-12}$$ B. $$2^{-14}$$ C. $$2^{-16}$$ D. $$2^{12}$$ E. $$2^{14}$$

We can start by prime factorizing each term in the given expression:

4^4 = (2^2)^4 = 2^8
8^6 = (2^3)^6 = 2^18
4^8 = (2^2)^8 = 2^16
16^8 = (2^4)^8 = 2^32

Putting these values back into the given expression, we have:

(2^8 + 2^18)/(2^16 + 2^32)

We can factor out a 2^8 from the numerator and a 2^16 from the denominator, and we have:

[2^8(1 + 2^10)]/[2^16(1 + 2^16)]

(1 + 2^10)/[2^8(1 + 2^16)]

Since the 1s are quite small compared to the other values, we can ignore them, and we have:

2^10/(2^8 x 2^16) = 2^10/2^24 = 2^-14

Answer: B

Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 8992
GMAT 1: 760 Q51 V42
GPA: 3.82

Re: Which of the following is the approximation of (14 Dec 2016, 00:29)

==> Because of the word "approximation", you get $$(4^4+8^6)/(4^8+16^8) \approx 8^6/16^8$$. Also, from $$8^6/16^8 = (2^3)^6/(2^4)^8 = 2^{18}/2^{32} = 2^{18-32} = 2^{-14}$$, the answer is B.

Answer: B
"Only$79 for 1 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" Re: Which of the following is the approximation of   [#permalink] 14 Dec 2016, 00:29
2020-05-27 03:25:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6047643423080444, "perplexity": 10637.43291256818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392057.6/warc/CC-MAIN-20200527013445-20200527043445-00241.warc.gz"}
https://brilliant.org/problems/a-pair-apart/
# A Pair Apart

There is a class consisting of 20 students. In how many different ways can these 20 students be split into 10 pairs?
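One way to sanity-check an answer numerically (our addition, not part of the Brilliant page): divide the $20!$ orderings of the students by the $10!$ orderings of the pairs and by $2$ for the order within each pair, which reduces to the double factorial $19!!$:

```r
# Number of ways to split 2n people into n unordered pairs:
# (2n)! / (2^n * n!), which for n = 10 equals 19!! = 19 * 17 * ... * 1.
n <- 10
print(prod(seq(2 * n - 1, 1, by = -2)))  # 654729075
```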
2017-03-25 17:40:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860213041305542, "perplexity": 431.9119325998535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189031.88/warc/CC-MAIN-20170322212949-00324-ip-10-233-31-227.ec2.internal.warc.gz"}
https://koasas.kaist.ac.kr/handle/10203/28181
#### Growth and viability of Lactobacillus acidophilus in soy milk

Lactobacillus acidophilus, which has long been used for the manufacture of acidophilus milk, was added to soy milk, whose market in Korea is growing explosively. When fresh cells grown in MRS medium were inoculated into the unformulated soy milk at the level of $8.0\times10^6$ cells/ml and incubated at $37\,^\circ\!C$, only slight growth was observed within the initial 4 hours, followed by a gradual decrease in viable counts. However, acid production by these organisms in the soy milk was active and continued throughout the 16-hour incubation period. Supplementing the soy milk with carbohydrates such as glucose, lactose, and sucrose did not increase the cell number. Supplementing with protein sources such as peptone, tryptone, proteose peptone, and amino acids, however, could enhance both growth and acid formation of the organism. But casein hydrolysate stimulated neither the growth nor the acid production. Among the nitrogen sources, proteose peptone at the 4 g/l level showed the highest effect. The poor growth of L. acidophilus in soy milk, therefore, may be explained by the lack of nitrogen compounds that are needed for growth. Acetate, as a lipid precursor, and Tween 80 supported the growth and helped retain the cell viability, but the degree of the effect was slight. The cells added to the soy milk could survive long when the product is stored at $4\,^\circ\!C$, whereas storage at $15\,^\circ\!C$ and at room temperature resulted in rapid loss of viable cells, especially after 4 days of storage. The results suggest the possibility of consuming living cells if they are prepared separately and added to the soy milk before consumption.

Pack, Moo-Young (박무영)
Description: KAIST (Korea Advanced Institute of Science and Technology), Department of Biological Engineering
Publisher: KAIST
Issue Date: 1984
Identifier: 64061/325007 / 000821270
Language: eng
Description: Master's thesis - KAIST, Department of Biological Engineering, 1984.2, [vii, 51 p.]
URI: http://hdl.handle.net/10203/28181
2021-03-06 03:15:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30231234431266785, "perplexity": 5889.708497729744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00272.warc.gz"}
http://www.oalib.com/relative/5285321
Mathematics , 2015, Abstract: Cooper and Long generalised Epstein and Penner's Euclidean cell decomposition of cusped hyperbolic manifolds of finite volume to non-compact strictly convex projective manifolds of finite volume. We show that Weeks' algorithm to compute this decomposition for a hyperbolic surface generalises to strictly convex projective surfaces. Mathematics , 2009, Abstract: This paper deals with the existence of optimal transport maps for some optimal transport problems with a convex but non strictly convex cost. We give a decomposition strategy to address this issue. As part of our strategy, we have to treat some transport problems, of independent interest, with a convex constraint on the displacement. As an illustration of our strategy, we prove existence of optimal transport maps in the case where the source measure is absolutely continuous with respect to the Lebesgue measure and the transportation cost is of the form $h(\|x-y\|)$ with $h$ strictly convex increasing and $\|\cdot\|$ an arbitrary norm in $\mathbb{R}^2$. Anatoly P. Kopylov Mathematics , 2015, Abstract: The article contains the results of the author's recent investigations of rigidity problems of domains in Euclidean spaces carried out for developing a new approach to the classical problem of the unique determination of bounded closed convex surfaces [A.V.Pogorelov, Extrinsic Geometry of Convex Surfaces, AMS, Providence (1973)], rather completely presented in [A.P.Kopylov, On the unique determination of domains in Euclidean spaces, J. of Math. Sciences, 153, no.6, 869-898 (2008)]. We give a complete characterization of a plane domain $U$ with smooth boundary (i.e., the Euclidean boundary $\mathop{\rm fr}U$ of $U$ is a one-dimensional manifold of class $C^1$ without boundary) that is uniquely determined in the class of domains in $\mathbb R^2$ with smooth boundaries by the condition of the local isometry of the boundaries in the relative metrics. If $U$ is bounded then the convexity of $U$ is a necessary and sufficient condition for the unique determination of this kind in the class of all bounded plane domains with smooth boundaries. If $U$ is unbounded then its unique determination in the class of all plane domains with smooth boundaries by the condition of the local isometry of the boundaries in the relative metrics is equivalent to its strict convexity. In the last section, we consider the case of space domains. We prove a theorem on the unique determination of a strictly convex domain in $\mathbb R^n$, where $n \ge 2$, in the class of all $n$-dimensional domains by the condition of the local isometry of the Hausdorff boundaries in the relative metrics, which is a generalization of A.D.Aleksandrov's theorem on the unique determination of a strictly convex domain by the condition of the (global) isometry of the boundaries in the relative metrics. Acta Mathematica Scientia (Series A) , 2001, Abstract: Using weight factors, we obtain the Koppelman-Leray formula with weight factors of (p,q) differential forms for a strictly pseudoconvex domain with not necessarily smooth boundaries on a complex manifold, and give an integral representation for the solution with weight factors of the $\bar{\partial}$-equation on this domain which does not involve an integral on the boundary, so we can avoid complex estimates of boundary integrals.
Furthermore, with the introduction of weight factors, the integral formulas with weight factors have much freedom in application. Giampiero Esposito Physics , 1995, Abstract: This paper describes recent progress in the analysis of relativistic gauge conditions for Euclidean Maxwell theory in the presence of boundaries. The corresponding quantum amplitudes are studied by using Faddeev-Popov formalism and zeta-function regularization, after expanding the electromagnetic potential in harmonics on the boundary 3-geometry. This leads to a semiclassical analysis of quantum amplitudes, involving transverse modes, ghost modes, coupled normal and longitudinal modes, and the decoupled normal mode of Maxwell theory. On imposing magnetic or electric boundary conditions, flat Euclidean space bounded by two concentric 3-spheres is found to give rise to gauge-invariant one-loop amplitudes, at least in the cases considered so far. However, when flat Euclidean 4-space is bounded by only one 3-sphere, one-loop amplitudes are gauge-dependent, and the agreement with the covariant formalism is only achieved on studying the Lorentz gauge. Moreover, the effects of gauge modes and ghost modes do not cancel each other exactly for problems with boundaries. Remarkably, when combined with the contribution of physical (i.e. transverse) degrees of freedom, this lack of cancellation is exactly what one needs to achieve agreement with the results of the Schwinger-DeWitt technique. The most general form of coupled eigenvalue equations resulting from arbitrary gauge-averaging functions is now under investigation. Mathematics , 2010, DOI: 10.1016/j.jmaa.2010.09.007 Abstract: We consider polyhedral approximations of strictly convex compacta in finite dimensional Euclidean spaces (such compacta are also uniformly convex). We obtain the best possible estimates for errors of considered approximations in the Hausdorff metric. We also obtain new estimates of an approximate algorithm for finding the convex hulls. Mathematics , 2014, Abstract: Two frameworks that have been used to characterize reflected diffusions include stochastic differential equations with reflection and the so-called submartingale problem. We introduce a general formulation of the submartingale problem for (obliquely) reflected diffusions in domains with piecewise C^2 boundaries and piecewise continuous reflection vector fields. Under suitable assumptions, we show that well-posedness of the submartingale problem is equivalent to existence and uniqueness in law of weak solutions to the corresponding stochastic differential equation with reflection. Our result generalizes to the case of reflecting diffusions a classical result due to Stroock and Varadhan on the equivalence of well-posedness of martingale problems and well-posedness of weak solutions of stochastic differential equations in d-dimensional Euclidean space. The analysis in the case of reflected diffusions in domains with non-smooth boundaries is considerably more subtle and requires a careful analysis of the behavior of the reflected diffusion on the boundary of the domain. In particular, the equivalence can fail to hold when our assumptions are not satisfied. The equivalence we establish allows one to transfer results on reflected diffusions characterized by one approach to reflected diffusions analyzed by the other approach. As an application, we provide a characterization of stationary distributions of a large class of reflected diffusions in convex polyhedral domains. 
Physics , 1995, DOI: 10.1088/0264-9381/11/12/009 Abstract: Zeta-function regularization is applied to complete a recent analysis of the quantized electromagnetic field in the presence of boundaries. The quantum theory is studied by setting to zero on the boundary the magnetic field, the gauge-averaging functional and hence the Faddeev-Popov ghost field. Electric boundary conditions are also studied. On considering two gauge functionals which involve covariant derivatives of the 4-vector potential, a series of detailed calculations shows that, in the case of flat Euclidean 4-space bounded by two concentric 3-spheres, one-loop quantum amplitudes are gauge independent and their mode-by-mode evaluation agrees with the covariant formulae for such amplitudes and coincides for magnetic or electric boundary conditions. By contrast, if a single 3-sphere boundary is studied, one finds some inconsistencies, i.e. gauge dependence of the amplitudes. Mathematics , 2014, Abstract: Extending results of Hershberger and Suri for the Euclidean plane, we show that ball hulls and ball intersections of sets of $n$ points in strictly convex normed planes can be constructed in $O(n \log n)$ time. In addition, we confirm that, like in the Euclidean subcase, the $2$-center problem with constrained circles can be solved also for strictly convex normed planes in $O(n^2)$ time. Some ideas for extending these results to more general types of normed planes are also presented. Victor Bible Mathematics , 2013, DOI: 10.4064/sm224-2-5 Abstract: The aim of this paper is to present a tool used to show that certain Banach spaces can be endowed with $C^k$ smooth equivalent norms. The hypothesis uses particular countable decompositions of certain subsets of $B_{X^*}$, namely boundaries. Of interest is that the main result unifies two quite well known results. In the final section, some new corollaries are given.
2019-12-05 22:35:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8740887641906738, "perplexity": 386.421041457319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00198.warc.gz"}
https://mitpress.mit.edu/books/phantasmal-media
Hardcover | $42.00 Short | £28.95 | ISBN: 9780262019330 | 440 pp. | 7 x 9 in | 91 color illus. | November 2013
Ebook | $30.00 Short | ISBN: 9780262317658 | 440 pp. | 7 x 9 in | 91 color illus. | November 2013

# Phantasmal Media: An Approach to Imagination, Computation, and Expression

## Overview

In Phantasmal Media, D. Fox Harrell considers the expressive power of computational media. He argues, forcefully and persuasively, that the great expressive potential of computational media comes from the ability to construct and reveal phantasms—blends of cultural ideas and sensory imagination. These ubiquitous and often-unseen phantasms—cognitive phenomena that include sense of self, metaphors, social categories, narrative, and poetic thinking—influence almost all our everyday experiences. Harrell offers an approach for understanding and designing computational systems that have the power to evoke these phantasms, paying special attention to the exposure of oppressive phantasms and the creation of empowering ones. He argues for the importance of cultural content, diverse worldviews, and social values in computing. The expressive power of phantasms is not purely aesthetic, he contends; phantasmal media can express and construct the types of meaning central to the human condition. Harrell discusses, among other topics, the phantasm as an orienting perspective for developers; expressive epistemologies, or data structures based on subjective human worldviews; morphic semiotics (building on the computer scientist Joseph Goguen’s theory of algebraic semiotics); cultural phantasms that influence consensus and reveal other perspectives; computing systems based on cultural models; interaction and expression; and the ways that real-world information is mapped onto, and instantiated by, computational data structures. The concept of phantasmal media, Harrell argues, offers new possibilities for using the computer to understand and improve the human condition through the human capacity to imagine. D. Fox Harrell is Associate Professor of Digital Media at MIT.

## Reviews

“Harrell’s book, Phantasmal Media, published this week by MIT Press, outlines an approach to analyzing many forms of digital media that prompt these images in users, and then building computing systems—seen in video games, social media, e-commerce sites, or computer-based artwork—with enough adaptability to let designers and users express a wide range of cultural preferences, rather than being locked into pre-existing options.”—MIT News

“...profoundly ambitious and wildly eclectic Phantasmal Media will likely find a wide audience among artists and technologists alike.”—John Harwood, Artforum

## Endorsements

“D. Fox Harrell is the leading scientist of the human mind in the digital age. Phantasmal Media is a major advance in the study of the human imagination. Harrell's brilliance, learning, wit, and charm make it a great pleasure to read.” Mark Turner, Institute Professor and Professor of Cognitive Science, Case Western Reserve University

“Fox Harrell’s bold and audacious view of the relationship between computing and the imagination blends a very broad range of multicultural references with perspectives from the sciences, humanities, and arts to present an unprecedented vision of how people and machines can come together to forge not only new software systems, but a new ethics and politics of the human condition. This is what a groundbreaking book looks like.” George E.
Lewis, Columbia University “Deftly operating at the productive intersection of computational design, cognitive science, and expressive media, Phantasmal Media draws attention to the profound involvement of human imagination in the encounter with information technology. In doing so, it provides a new basis for understanding human-computer interaction and artificial intelligence, one that places ideation and ideology at the center and, in doing so, profoundly troubles questions of representation and agency at the heart of computational practice. It is inspiring, intriguing, and, yes, haunting.” Paul Dourish, Professor, Department of Informatics, University of California, Irvine
2015-09-02 09:25:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20893852412700653, "perplexity": 5769.833872680254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645258858.62/warc/CC-MAIN-20150827031418-00350-ip-10-171-96-226.ec2.internal.warc.gz"}
http://electronics.stackexchange.com/questions/30411/capacitor-selection-with-multiple-regulators
# Capacitor selection with multiple regulators

I'm working on a circuit that needs three voltage regulators: +24VDC, +12VDC, and +5VDC. The power input is +28VDC from a bench power supply. The +24VDC regulator is a 78M24. It is supplied from +28VDC and the load is off-board and typically 120mA. The +12VDC regulator is a RECOM R-7812-0.5 switching regulator in a 3-terminal package (pin compatible with a linear 7812). This is also supplied from the +28VDC input, and the load on this regulator is typically 125mA. This includes an off-board load and the load of the +5VDC regulator, which is a 78L05 whose typical load is 20mA. I have specified capacitors according to the 3 regulator data sheets as shown in the circuit. C5, C6, C7, and C8 are ceramic. C9 is aluminum electrolytic, for higher ESR, per the data sheet. My question is, do I need any additional bulk or reserve capacitors at the input, outputs, or in between the +12VDC and +5VDC regulators? Many thanks for your help!

I have several follow-on questions. The first answer mentions that a low-ESR cap should be on the output of each regulator. Does that include the switching regulator (U8)? That data sheet specifically excludes low ESR on the input side, but says nothing about the output. Also, wouldn't changing from a 100nF to a 1uF on the output reduce the high frequencies that the cap bypasses? With regard to the input of the regulator, my understanding is that the ceramic caps on the inputs are required for stability of the linear regulators (U9 and U10). Would bulk caps on the +28VDC net also suffice to provide bulk input capacitance to U9? Or does a bulk input cap for U9 need to be on the +12V net? Finally, since both U8 and U10 are supplying power to loads off-board, do they need bulk caps on the output? Many thanks again.

- If you are not concerned about the EMC and EN certifications, then that switch-mode regulator is overkill due to its price. I don't know, maybe it is also overkill in case of EMC and EN concerns. – abdullah kahraman Apr 21 '12 at 15:30

There should be a low-ESR cap immediately on the output of each regulator. Perhaps 100 nF as you show is the minimum, but I'd put more there unless it was specifically disallowed in the datasheet. If they're supposed to work with 100 nF, then 1 µF ceramic sounds good. As for the input, you can't have too much capacitance on the input of a regulator. Put what you can get in 0805 immediately on the input. That should be more than the skimpy values you are trying to squeak by with.

- Thanks for your answer. I have several follow-on questions which I have added to the original post. – Mike Apr 21 '12 at 14:53

I am going to try to answer the questions the original poster added later on: If you look at the datasheet of the switcher RECOM R-7812-0.5, you will see that without a capacitor at the output, there is an output ripple of 40mVp-p. If you add a 100uF capacitor at the output, then the ripple drops by 5mVp-p. Of course, since the datasheet doesn't say, we do not know what the load was when they tested for these parameters. The output current ripple increases with the load current. This 100uF capacitor mentioned in the datasheet can be a low-ESR electrolytic to have a low ripple, or if you do not care about the ripple, just use a normal electrolytic capacitor. A bypass capacitor is a backup power supply with relatively low lead inductance, and this allows it to source high $\dfrac{dI}{dt}$. Read more about it by Googling it.
Here, the output caps you mention serve as decoupling caps, meaning that they will short noise in some frequency range to ground, and yes, 100nF will decouple higher frequencies than 1uF, but I think the difference is not that big in practice. Put a 100nF and a 1uF in parallel if you have the budget. You need to put a bulk cap on U9's input rather than on U8's input if you think you need it. However, for 100mA (as that is the maximum current specified for the 78L05), I do not think that you need bulk caps for U9. I would not put the bulk caps on this board; instead, I would put bulk caps on the boards that take their power from this board, for U8 and U10. I hope I didn't say anything wrong; if I did, please correct me or edit the answer.

- "100nF will decouple lower frequencies than 1uF" - isn't this backwards? All other things being equal, 100nF will have a higher resonant frequency and thus decouple higher frequencies. – Mike Apr 25 '12 at 1:37

@Mike Oops, sorry, you are right :) – abdullah kahraman Apr 25 '12 at 7:42
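To put rough numbers on the resonance point raised in the comments (illustrative values added editorially, not from the original posts): a bypass capacitor is effective only up to about its self-resonant frequency,

$$f_{SRF}=\frac{1}{2\pi\sqrt{L_{ESL}\,C}}$$

Assuming a typical parasitic inductance of roughly 1 nH for a small ceramic part,

$$f_{SRF}(100\,\text{nF})\approx\frac{1}{2\pi\sqrt{(1\,\text{nH})(100\,\text{nF})}}\approx 15.9\,\text{MHz},\qquad f_{SRF}(1\,\mu\text{F})\approx 5.0\,\text{MHz},$$

which is why the smaller capacitor bypasses higher frequencies, and why paralleling the two covers a wider band.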
2013-05-24 08:17:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5842257142066956, "perplexity": 1514.7036244084882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704368465/warc/CC-MAIN-20130516113928-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
https://forum.rclone.org/t/rclone-copy-sub-directories/355
# Rclone copy sub directories

Hey, trying to do something like this:

```
rclone copy secret:plex/TVSeries/30 Rock/ /mnt/user/TVSeries/30 Rock/
```

Is there a way to specify a subdirectory within an encrypted folder?

What you wrote looks correct, except it is probably missing some quotes, as the dirs you are trying to copy have spaces in them.

```
rclone copy "secret:plex/TVSeries/30 Rock/" "/mnt/user/TVSeries/30 Rock/"
```

You can use `rclone lsd secret:` to see the directories and use that to explore, so then `rclone lsd secret:plex/TVSeries`.

Ah, the quotes fixed it. Thanks!
2022-05-29 11:21:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8246613144874573, "perplexity": 7976.897727606478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00224.warc.gz"}
http://www.physicsforums.com/showthread.php?s=ef88592942db06e2c90462560646a1a0&p=4831296
# Define boundary conditions of a polygon in a unit square cell by tomallan Tags: boundary, conditions, define, polygon, unit
2014-09-17 23:42:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8889748454093933, "perplexity": 7755.228669556794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124771.92/warc/CC-MAIN-20140914011204-00095-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://pjbartlein.github.io/GeogDataAnalysis/lec10.html
NOTE: This page has been revised for Winter 2021, but may undergo further edits.

# 1 Introduction

The general idea that underlies statistical inference is the comparison of particular statistics from an observational data set (e.g. the mean, the standard deviation, the differences among the means of subsets of the data) with an appropriate reference distribution in order to judge the significance of those statistics. When various assumptions are met, and specific hypotheses about the values of those statistics that should arise in practice have been specified, then statistical inference can be a powerful approach for drawing scientific conclusions that efficiently uses existing data or those collected for the specific purpose of testing those hypotheses. Even in a context where a formal experimental design is not possible, or when the objective is to explore the data, significance evaluation can be useful. As a consequence of the central limit theorem, we know that the sample mean is approximately normally distributed, and so we can use the normal distribution to describe the uncertainty of a sample mean.

# 2 Characterization of samples

Once a sample has been obtained, and descriptive statistics calculated, attention may then turn to the significance (representativeness as opposed to unusualness) of the sample or of the statistics. This information may be gained by comparing the specific value of a statistic with an appropriate reference distribution, and by the calculation of additional statistics that describe the level of uncertainty a particular statistic may have. In the case of the sample mean, the appropriate reference distribution is the normal distribution, which is implied by the Central Limit Theorem.

## 2.1 Standard error of the mean and confidence interval for the mean

Uncertainty in the mean can be described by the standard error of the mean or by the confidence interval for the mean. The standard error of the mean can be thought of as the standard deviation of a set of mean values from repeated samples. Definition of the standard error of the mean: for a sample of size n drawn from a population with standard deviation $\sigma$, the standard error is $se_{\bar{x}} = \sigma/\sqrt{n}$.

Here is a demonstration using simulated data and repeated samples of different sizes.

```
# generate 1000 random numbers from the normal distribution
npts <- 1000
demo_mean <- 5; demo_sd <- 2
data_values <- rnorm(npts, demo_mean, demo_sd)
hist(data_values); mean(data_values); sd(data_values)
## [1] 5.034757
## [1] 1.966487
```

Set the number of replications nreps and the (maximum) sample size.

```
nreps <- 1000 # number of replications (samples) for each sample size
max_sample_size <- 100 # number of example sample sizes
```

Create several matrices to hold the individual replication results.

```
# matrix to hold means of each of the nreps samples
mean_samp <- matrix(1:nreps)
# matrices to hold means, sd’s and sample sizes for each n
average_means <- matrix(1:(max_sample_size-1))
sd_means <- matrix(1:(max_sample_size-1))
sample_size <- matrix(1:(max_sample_size-1))
```

Generate means for a range of sample sizes (1:max_sample_size).

```
for (n in seq(1,max_sample_size-1)) {
  # for each sample size generate nreps samples and get their mean
  for (i in seq(1,nreps)) {
    samp <- sample(data_values, n+1, replace=T)
    mean_samp[i] <- mean(samp)
  }
  # get the average and standard deviation of the nreps means
  average_means[n] <- apply(mean_samp,2,mean)
  sd_means[n] <- apply(mean_samp,2,sd)
  sample_size[n] <- n+1
}
```

Take a look at the means and the standard errors.
Note that means remain essentially constant across the range of sample sizes, while the standard errors decrease rapidly (at first) with increasing sample size.

```
plot(sample_size, average_means, ylim=c(4.5, 5.5), pch=16)
plot(sample_size, sd_means, pch=16)
head(cbind(average_means,sd_means,sample_size))
## [,1] [,2] [,3]
## [1,] 5.002194 1.3991343 2
## [2,] 4.992911 1.1747384 3
## [3,] 5.059321 0.9902242 4
## [4,] 5.031235 0.8585874 5
## [5,] 5.031041 0.8160077 6
## [6,] 5.015314 0.7535537 7
tail(cbind(average_means,sd_means,sample_size))
## [,1] [,2] [,3]
## [94,] 5.036694 0.1899069 95
## [95,] 5.041825 0.1985795 96
## [96,] 5.044742 0.2041062 97
## [97,] 5.036726 0.2012360 98
## [98,] 5.040909 0.2003633 99
## [99,] 5.038318 0.2000125 100
```

Verify that the standard error of the mean is $\sigma/\sqrt{n}$:

```
plot(demo_sd/sqrt((2:max_sample_size)), sd_means, pch=16)
```

Generate some data values, this time from a uniform distribution.

```
# data_values from a uniform distribution
data_values <- runif(npts, 0, 1)
hist(data_values); mean(data_values); sd(data_values)
## [1] 0.5087177
## [1] 0.2835326
```

Rescale these values so that they have the same mean (demo_mean) and standard deviation (demo_sd) as in the previous example.

```
# rescale the data_values so they have a mean of demo_mean
# and a standard deviation of demo_sd (standardize, then rescale)
data_values <- (data_values-mean(data_values))/sd(data_values)
mean(data_values); sd(data_values)
## [1] 4.369281e-18
## [1] 1
data_values <- (data_values*demo_sd)+demo_mean
hist(data_values); mean(data_values); sd(data_values)
## [1] 5
## [1] 2
```

Repeat the demonstration.

```
for (n in seq(1,max_sample_size-1)) {
  # for each sample size generate nreps samples and get their mean
  for (i in seq(1,nreps)) {
    samp <- sample(data_values, n+1, replace=T)
    mean_samp[i] <- mean(samp)
  }
  # get the average and standard deviation of the nreps means
  average_means[n] <- apply(mean_samp,2,mean)
  sd_means[n] <- apply(mean_samp,2,sd)
  sample_size[n] <- n+1
}
plot(sample_size, sd_means, pch=16)
head(cbind(average_means,sd_means,sample_size))
## [,1] [,2] [,3]
## [1,] 5.009442 1.4284970 2
## [2,] 5.012839 1.1621002 3
## [3,] 4.918761 0.9917057 4
## [4,] 5.004906 0.9018073 5
## [5,] 4.975955 0.8187180 6
## [6,] 5.022585 0.7554809 7
tail(cbind(average_means,sd_means,sample_size))
## [,1] [,2] [,3]
## [94,] 5.004614 0.2007629 95
## [95,] 5.015275 0.2047870 96
## [96,] 4.999232 0.2094253 97
## [97,] 4.996611 0.2028047 98
## [98,] 5.003051 0.1943208 99
## [99,] 5.003415 0.2052365 100
```

This demonstrates that the standard error of the mean is insensitive to the underlying distribution of the data.

## 2.2 Confidence intervals

The confidence interval provides a verbal or graphical characterization, based on the information in a sample, of the likely range of values within which the “true” or population mean lies. This example uses an artificial data set [cidat.csv]. cidat is a data frame that can be generated as follows:

```
# generate 4000 random values from the Normal Distribution with mean=10, and standard deviation=1
NormDat <- rnorm(mean=10, sd=1, n=4000)
# generate a "grouping variable" that defines 40 groups, each with 100 observations
Group <- sort(rep(1:40,100))
cidat <- data.frame(cbind(NormDat, Group)) # make a data frame
```

Attach and summarize the data set.

```
attach(cidat)
## The following objects are masked _by_ .GlobalEnv:
##
## Group, NormDat
summary(cidat)
## NormDat Group
## Min. : 6.322 Min. : 1.00
## 1st Qu.: 9.357 1st Qu.:10.75
## Median :10.030 Median :20.50
## Mean :10.030 Mean :20.50
## 3rd Qu.:10.703 3rd Qu.:30.25
## Max. :13.627 Max. :40.00
```
The idea here is to imagine that each group of 100 observations represents one possible sample of some underlying process or information set that might occur in practice. These hypothetical samples (which are each equally likely) provide a mechanism for illustrating the range of values of the mean that could occur simply due to natural variability of the data, and the “confidence interval” is that range of values of the mean that encloses 95% of the possible mean values (here taken as the mean plus or minus two standard errors).

Get the means and standard errors of each group.

```
group_means <- tapply(NormDat, Group, mean)
group_sd <- tapply(NormDat, Group, sd)
group_npts <- tapply(NormDat, Group, length)
group_semean <- (group_sd/(sqrt(group_npts)))
mean(group_means)
## [1] 10.02969
sd(group_means)
## [1] 0.09699701
```

Plot the individual samples (top plot) and then the means, and their standard errors (bottom plot). Note the different scales on the plots.

```
# plot means and data
par(mfrow=c(2,1))
plot(Group, NormDat)
points(group_means, col="red", pch=16)
# plot means and standard errors of means
plot(group_means, ylim=c(9, 11), col="red", pch=16, xlab="Group")
points(group_means + 2.0*group_semean , pch="-")
points(group_means - 2.0*group_semean , pch="-")
abline(10,0)
```

The bottom plot shows that out of the 40 mean values (red dots), 2 (0.05, or 5 percent) have intervals (defined to be twice the standard error either side of the mean, black tick marks) that do not enclose the “true” value of the mean (10.0).

Set the graphics window back to normal and detach cidat.

```
par(mfrow=c(1,1))
detach(cidat)
```

# 3 Simple inferences based on the standard error of the mean

The standard error of the mean, along with the knowledge that the sample mean is normally distributed, allows inferences about the mean to be made. For example, questions of the following kind can be answered:

• What is the probability of occurrence of an observation with a particular value?
• What is the probability of occurrence of a sample mean with a particular value?
• What is the “confidence interval” for a sample mean with a particular value?

A short R sketch of these simple inferences is given below, after the introduction to hypothesis tests.

## 3.1 Hypothesis tests

The next step toward statistical inference is the more formal development and testing of specific hypotheses (as opposed to the rather informal inspection of descriptive plots, confidence intervals, etc.). “Hypothesis” is a word used in several contexts in data analysis or statistics:

• the research hypothesis is the general scientific issue that is being explored by a data analysis. It may take the form of quite specific statements, or just general speculations.
• the null hypothesis (Ho) is a specific statement whose truthfulness can be evaluated by a particular statistical test. An example of a null hypothesis is that the means of two groups of observations are identical.
• the alternative hypothesis (Ha) is, as its name suggests, an alternative statement of what situation is true, in the event that the null hypothesis is rejected. An example of an alternative hypothesis to a null hypothesis that the means of two groups of observations are identical is that the means are not identical.

A null hypothesis is never “proven” by a statistical test. Tests may only reject, or fail to reject, a null hypothesis.
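As promised above, here is a minimal R sketch of these simple inferences based on the standard error of the mean (an editorial addition with illustrative numbers, not part of the original page):

```r
# simple inferences from the standard error of the mean
x <- rnorm(100, mean = 5, sd = 2)  # a sample like the simulated data above
xbar <- mean(x)
sem <- sd(x) / sqrt(length(x))     # standard error of the mean

# probability of a sample mean of 5.3 or larger if the true mean were 5
# (one-sided, normal approximation; roughly 0.07 when sem is about 0.2)
1 - pnorm(5.3, mean = 5, sd = sem)

# approximate 95% confidence interval for the mean
xbar + c(-1.96, 1.96) * sem
```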
There are two general approaches toward setting up and testing specific hypotheses: the “classical approach” and the “p-value” approach.

The steps in the classical approach:

1. define or state the null and alternative hypotheses.
2. select a test statistic.
3. select a significance level, or a specific probability level, which, if exceeded, signals that the test statistic is large enough to consider significant.
4. delineate the “rejection region” under the pdf of the appropriate distribution for the test statistic (i.e. determine the specific value of the test statistic that, if exceeded, would be grounds to consider it significant).
5. compute the test statistic.
6. depending on the particular value of the test statistic, either a) reject the null hypothesis (Ho) and accept the alternative hypothesis (Ha), or b) fail to reject the null hypothesis.

The steps in the “p-value” approach are:

1. define or state the null and alternative hypotheses.
2. select and compute the test statistic.
3. refer the test statistic to its appropriate reference distribution.
4. calculate the probability that a value of the test statistic as large as that observed would occur by chance if the null hypothesis were true (this probability, or p-value, is called the significance level).
5. if the significance level is small, the tested hypothesis (Ho) is discredited, and we assert that a “significant result” or “significant difference” has been observed.

# 4 The t-test

An illustration of a hypothesis test that is frequently used in practice is provided by the t-test, one of several “difference-of-means” tests. The t-test (or more particularly Student’s t-test, after the pseudonym of its author, W.S. Gosset) provides a mechanism for the simple task of testing whether there is a significant difference between two groups of observations, as reflected by differences in the means of the two groups. In the t-test, two sample mean values, or a sample mean and a theoretical mean value, are compared as follows:

• the null hypothesis is that the two mean values are equal, while the
• alternative hypothesis is that the means are not equal (or that one is greater than or less than the other)
• the test statistic is the t-statistic
• the significance level or p-value is determined using the t-distribution

The shape of the t distribution can be visualized as follows (for df = 3):

```
x <- seq(-3,3, by=.1)
pdf_t <- dt(x,3)
plot(pdf_t ~ x, type="l")
```

You can read about the origin of Gosset’s pseudonym (and his contributions to brewing) here.

## 4.1 The t-test for assessing differences in group means

[Details of the t-test]

There are two ways the t-test is implemented in practice, depending on the nature of the question being asked and hence on the nature of the null hypothesis:

• one-sample t-test (for testing the hypothesis that a sample mean is equal to a “known” or “theoretical” value), or the
• two-sample t-test (for testing the hypothesis that the means of two groups of observations are identical).

Example data set: ttestdat. Attach the example data, and get a boxplot of the data by group:

```
# t-tests
attach(ttestdat)
boxplot(Set1 ~ Group1)
```

Two-tailed t-test (are the means different in a general way?)
```
# two-tailed tests
t.test(Set1 ~ Group1)
##
## Welch Two Sample t-test
##
## data: Set1 by Group1
## t = -0.2071, df = 55.818, p-value = 0.8367
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.11841233 0.09622446
## sample estimates:
## mean in group 0 mean in group 1
## 7.988305 7.999399
```

The t-statistic is -0.2071 and the p-value = 0.8367, which indicates that the t-statistic is not significant, i.e. that there is little support for rejecting the null hypothesis that there is no difference between the mean of group 0 and the mean of group 1.

Two one-tailed t-tests (each evaluates whether the means are different in a specific way):

```
t.test(Set1 ~ Group1, alternative = "less") # i.e. mean of group 0 is less than the mean of group 1
##
## Welch Two Sample t-test
##
## data: Set1 by Group1
## t = -0.2071, df = 55.818, p-value = 0.4183
## alternative hypothesis: true difference in means is less than 0
## 95 percent confidence interval:
## -Inf 0.07850556
## sample estimates:
## mean in group 0 mean in group 1
## 7.988305 7.999399
t.test(Set1 ~ Group1, alternative = "greater") # i.e. mean of group 0 is greater than the mean of group 1
##
## Welch Two Sample t-test
##
## data: Set1 by Group1
## t = -0.2071, df = 55.818, p-value = 0.5817
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## -0.1006934 Inf
## sample estimates:
## mean in group 0 mean in group 1
## 7.988305 7.999399
```

Notice that for each example, the statistics (t-statistic, means of each group) are identical, while the p-values and confidence intervals for the t-statistic differ. The smallest p-value is obtained for the test of the hypothesis that the mean of group 0 is less than the mean of group 1 (which is the observed difference). But that difference is not significant (the p-value is greater than 0.05).

A second example:

```
boxplot(Set2 ~ Group2)
t.test(Set2 ~ Group2)
##
## Welch Two Sample t-test
##
## data: Set2 by Group2
## t = 6.9733, df = 57.372, p-value = 3.419e-09
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.2772856 0.5006463
## sample estimates:
## mean in group 0 mean in group 1
## 7.988305 7.599339
detach(ttestdat)
```

Here the t-statistic is relatively large and the p-value very small, lending support for rejecting the null hypothesis of no significant difference in the means (and accepting the alternative hypothesis that the means do differ). Remember, we haven’t “proven” that they differ, we’ve only rejected the idea that they are identical.

## 4.2 Differences in group variances

One assumption that underlies the t-test is that the variances (or dispersions) of the two samples are equal.
A modification of the basic test (the Welch correction, which R’s t.test() applies by default, as the output labels below show) handles cases where the variances are only approximately equal, but large differences in variability between the two groups can have an impact on the interpretability of the test results. Example data: [foursamples.csv]

t-tests among groups with different variances:

```
attach(foursamples)
# nice histograms
cutpts <- seq(0.0, 20.0, by=1)
par(mfrow=c(2,2))
hist(Sample1, breaks=cutpts, xlim=c(0,20))
hist(Sample2, breaks=cutpts, xlim=c(0,20))
hist(Sample3, breaks=cutpts, xlim=c(0,20))
hist(Sample4, breaks=cutpts, xlim=c(0,20))
par(mfrow=c(1,1))
boxplot(Sample1, Sample2, Sample3, Sample4)
mean(Sample1)-mean(Sample2)
## [1] -0.2718703
t.test(Sample1, Sample2)
##
## Welch Two Sample t-test
##
## data: Sample1 and Sample2
## t = -1.7294, df = 997.9, p-value = 0.08404
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.58035326 0.03661273
## sample estimates:
## mean of x mean of y
## 10.72631 10.99818
mean(Sample3)-mean(Sample4)
## [1] -0.2676365
t.test(Sample3, Sample4)
##
## Welch Two Sample t-test
##
## data: Sample3 and Sample4
## t = -4.2308, df = 998, p-value = 2.543e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.3917738 -0.1434991
## sample estimates:
## mean of x mean of y
## 10.73264 11.00027
mean(Sample1)-mean(Sample3)
## [1] -0.006325667
t.test(Sample1, Sample3)
##
## Welch Two Sample t-test
##
## data: Sample1 and Sample3
## t = -0.053011, df = 658.3, p-value = 0.9577
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.2406330 0.2279816
## sample estimates:
## mean of x mean of y
## 10.72631 10.73264
mean(Sample2)-mean(Sample4)
## [1] -0.002091883
t.test(Sample2, Sample4)
##
## Welch Two Sample t-test
##
## data: Sample2 and Sample4
## t = -0.017387, df = 654.69, p-value = 0.9861
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.2383369 0.2341532
## sample estimates:
## mean of x mean of y
## 10.99818 11.00027
detach(foursamples)
```

There is a formal test for equality of group variances that will be described with analysis of variance.
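As a pointer ahead (an editorial addition, not part of the original page), base R already provides one such test: var.test() performs an F-test of the ratio of two sample variances.

```r
# F-test for equality of two group variances (base R)
attach(foursamples)
var.test(Sample1, Sample3)  # small p-value => the variances differ
detach(foursamples)
```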
2021-03-02 07:27:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6589615345001221, "perplexity": 2105.035157419884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363782.40/warc/CC-MAIN-20210302065019-20210302095019-00096.warc.gz"}
https://search.r-project.org/CRAN/refmans/energy/html/eigen.html
EVnormal {energy} R Documentation ## Eigenvalues for the energy Test of Univariate Normality ### Description Pre-computed eigenvalues corresponding to the asymptotic sampling distribution of the energy test statistic for univariate normality, under the null hypothesis. Four Cases are computed: 1. Simple hypothesis, known parameters. 2. Estimated mean, known variance. 3. Known mean, estimated variance. 4. Composite hypothesis, estimated parameters. Case 4 eigenvalues are used in the test function normal.test when method=="limit". ### Usage data(EVnormal) ### Format Numeric matrix with 125 rows and 5 columns; column 1 is the index, and columns 2-5 are the eigenvalues of Cases 1-4. Computed ### References Szekely, G. J. and Rizzo, M. L. (2005) A New Test for Multivariate Normality, Journal of Multivariate Analysis, 93/1, 58-80, doi: 10.1016/j.jmva.2003.12.002. [Package energy version 1.7-10 Index]
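A minimal usage sketch (an editorial addition, not part of the help page):

```r
# load and inspect the precomputed eigenvalues
library(energy)
data(EVnormal)
dim(EVnormal)  # 125 x 5: index column plus eigenvalues for Cases 1-4
head(EVnormal)
```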
2022-05-25 18:51:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6458898186683655, "perplexity": 6706.561924491614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00431.warc.gz"}