url       stringlengths  14 – 2.42k
text      stringlengths  100 – 1.02M
date      stringlengths  19 – 19
metadata  stringlengths  1.06k – 1.1k
http://comunidadwindows.org/standard-error/standard-error-and-sample-size-relationship.php
# Standard Error and Sample Size Relationship

## Contents

Standard deviations and standard errors are not quite the same, and it is important that readers (and researchers) know the difference between the two so as to use them appropriately and report them correctly. When we calculate the standard deviation of a sample, we are using it as an estimate of the variability of the population from which the sample was drawn. If we want to indicate the uncertainty around the estimate of the mean, we quote the standard error of the mean; misusing the standard error of the mean (SEM) to report the variability of a sample is a common error.

We can take the sample mean as our best estimate of what is true in the relevant population, but we know that if we collected data on another sample, its mean would differ somewhat. If many independent random samples are selected from a population, the sample statistics provided by those samples will be distributed around the population parameter, and probability theory tells us how: approximately 68% of the sample statistics will be within one standard error (plus or minus) of the population parameter, and approximately 95% will be within two standard errors. So the range determined by m1 ± 1.96 × se provides the range of values that includes the true value of the population with a 95% probability. The standard error is also used to calculate P values in many circumstances, and the principle of a sampling distribution applies to other quantities that we may estimate from a sample besides the mean.

## The Relationship Between Sample Size and Sampling Error

Larger samples tend to be more accurate reflections of the population, hence their sample means are more likely to be closer to the population mean, and hence show less variation. For the mean of a simple random sample, the standard error is proportional to the inverse of the square root of the sample size $n$. At the cost of being very approximate, suppose the standard deviation of an estimate equals 10, that is, $$10 = \dfrac{\sigma}{\sqrt{n_1}}.$$ To halve it, you want to find $n_2$ so that $$5 = \dfrac{\sigma}{\sqrt{n_2}},$$ which requires quadrupling the sample size. The special case of a Monte Carlo estimate behaves the same way.

Sample size matters because larger samples increase the chance of finding a significant difference, but larger samples cost more money, and there is a diminishing return from taking larger and larger samples. For instance, the margin of error does not substantially decrease at sample sizes above 1500 (since it is already below 3%); after that point, it is probably better to spend additional resources on reducing sources of bias that might be on the same order as the margin of error.

## Factors Affecting Power

Several factors affect the power of a significance test; some are under the control of the experimenter, whereas others are not.

- Sample size. Since sample size is typically under an experimenter's control, increasing sample size is one way to increase power. Suppose we work out the mean weight change of an entire sample: a 3 kg change is more likely to be significant when n = 40 than when n = 20, because the sampling distribution is narrower and 3 kg is more extreme in relation to it.
- Difference between the hypothesized and true mean. Naturally, the larger the effect size, the more likely it is that an experiment will find a significant effect.
- Significance level. The stronger the evidence needed to reject the null hypothesis, the lower the chance that the null hypothesis will be rejected; Figure 3 shows that power is lower for the 0.01 level than it is for the 0.05 level.
- Standard deviation. Experimenters can sometimes control the standard deviation by sampling from a homogeneous population of subjects, by reducing random measurement error, and/or by making sure the experimental procedures are applied very consistently. In one study, size may play little role in the differences in outcome between patients, whereas in another, tumor size could be an important factor (a confounding variable).

(Figure: the relationship between sample size and power for H0: μ = 75, real μ = 80, one-tailed α = 0.05, for σ's of 10 and 15.)

Sample-size formulas of this kind use the specific difference of interest and the standard deviation of the population. Note also that the sample standard deviation can be computed with either n or n − 1 in the denominator; although there is little difference between the two, the former underestimates the true standard deviation in the population when the sample is small, and the latter usually is preferred.
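The inverse-square-root relationship above is easy to check by simulation. The sketch below is illustrative only (it assumes NumPy is available; the population mean, standard deviation and sample sizes are arbitrary choices, not values from the text): it draws many repeated samples, computes each sample's mean, and compares the spread of those means with $\sigma/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 15.0      # population standard deviation (arbitrary, for illustration)
n_reps = 10_000   # number of repeated samples drawn for each sample size

for n in (20, 40, 160, 640):
    # draw n_reps independent samples of size n and take each sample's mean
    means = rng.normal(loc=100.0, scale=sigma, size=(n_reps, n)).mean(axis=1)
    empirical_se = means.std(ddof=1)        # spread of the sample means
    theoretical_se = sigma / np.sqrt(n)     # sigma / sqrt(n)
    print(f"n={n:4d}  empirical SE={empirical_se:.3f}  sigma/sqrt(n)={theoretical_se:.3f}")
```

Quadrupling the sample size (for example from 40 to 160) roughly halves the standard error, which is exactly the $n_1$ versus $n_2$ calculation sketched above.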
2018-09-25 11:36:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6677375435829163, "perplexity": 786.8609689035694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161501.96/warc/CC-MAIN-20180925103454-20180925123854-00108.warc.gz"}
http://mathematica.stackexchange.com/questions/13135/examples-of-well-coded-packages-using-custom-notation/13273
# Examples of (well coded) packages using custom notation

Which packages do you know of that have the following properties:

• using custom notation (for input AND output would be preferable)
• well coded & designed (from your subjective perspective!)

- What criteria would you use to assess the second bullet point? – Verbeia Oct 16 '12 at 11:02
- That is a quite subjective question. – Rolf Mertig Oct 16 '12 at 12:17
- For the notation part: well coded means that the custom notation always works as expected for input and output and that it does not interfere with other packages or user notation (or at least minimizes that interference). – NoEscape Oct 16 '12 at 12:53
- Even if "well coded" is subjective, just answer from your subjective mind. – NoEscape Oct 17 '12 at 14:04

I'm not going into the well-coded part of your question (as this is rather subjective), but a package that I've (cursorily) examined and which looks nice is this quantum notation package, which has lots of custom notation and corresponding palettes.

Besides the quantum package already mentioned by @Sjoerd, the package with the most customized notation that I know of is the THEOREMA package. You can freely use the package and admire the complex logicographic notation created, but the code is not available for inspection. Finally, the OP leaves me no escape (pun intended) but to mention my WildCats category theory package, which is perhaps a unique example of a 3rd-party package using the standard Notation package together with some hand-made (MakeExpression, etc.) custom notation. You can inspect my code.

- @NoEscape, I think that's a little unfair. The .m file is autogenerated from a notebook, which may contain extensive comments. There's no real motivation for the developer to put (* *) comments into the code cells just so they appear in the .m file. Given that the package includes 13 tutorials and 135 symbol reference pages, there is hardly a lack of documentation. The source notebook may also be split into sections and subsections, so the fact that a single .m file is produced is no reflection on how well the code is structured. – Simon Woods Oct 18 '12 at 20:05
- @NoEscape - that question exists already on this site. – Verbeia Oct 19 '12 at 2:21
- @Simon has perfectly well expressed my point :-). On the other hand - experimenting with the OP's characteristic diplomatic writing style - it might be said that, if one needs comments to understand my code, then perhaps he/she needs to read Wolfram's book and some category theory first. No offense here :-) Anyway... prior releases of WildCats indeed also included the much more friendly .nb file. – magma Oct 19 '12 at 10:48
- OK. It is well coded and working, but hardly readable. This is a fundamental problem of autogenerated packages! – NoEscape Oct 20 '12 at 5:36
- @NoEscape WildCats is created with Workbench, but I use the internal MMA editor, so I make a heavily commented/sectioned .nb file. The autogenerated .m file is for deployment only. – magma Oct 21 '12 at 9:50

I suggest these two tutorials for writing Mathematica packages; unfortunately the first one is in Spanish, yet I do believe it will be useful for anyone because of the step-by-step images; the second one is in English.
2016-05-27 00:22:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5510163903236389, "perplexity": 1910.6309268764167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00075-ip-10-185-217-139.ec2.internal.warc.gz"}
https://kitchingroup.cheme.cmu.edu/blog/2014/12/03/Selective-auto-capitalization-in-org-buffers/
## Selective auto-capitalization in org-buffers

I have been using auto-capitalize.el for a short time to automatically capitalize the beginning of sentences. I mostly like what it does, but in org-mode I tend to write short code blocks while still in org-mode, and it is pretty irritating for auto-capitalize to "fix" the capitalization of your code. Of course, I can type C-c ' to edit the block in its native mode, but I do not always want to do that. Below, I illustrate an approach to turn off auto-capitalize-mode when the cursor is inside a code block. Basically, we write a function that checks if you are in a src-block and, if auto-capitalize is on, turns it off. If you are not in the code block, we turn auto-capitalize on if it is not already on. Then we hook the function into post-command-hook, which will run it after every Emacs command, including cursor movements. Here is that code:

```emacs-lisp
(defun dwiw-auto-capitalize ()
  ;; inside an org src block: make sure auto-capitalize is off
  (if (org-in-block-p '("src"))
      (when auto-capitalize
        (auto-capitalize-mode -1))
    ;; outside a src block: make sure it is back on
    (unless auto-capitalize
      (auto-capitalize-mode 1))))

;; hook the function into post-command-hook, as described above
(add-hook 'post-command-hook 'dwiw-auto-capitalize)
```
2021-05-13 03:09:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5199543833732605, "perplexity": 3492.273159236069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00537.warc.gz"}
http://viper.unige.ch/_bib/viper_conf/VG:SqP1997
## A Comparison of Human and Machine Assessments of Image Similarity for the Organization of Image Databases

### Bibtex entry:

@inproceedings{VG:SqP1997,
  author    = {David McG. Squire and Thierry Pun},
  title     = {A Comparison of Human and Machine Assessments of Image Similarity for the Organization of Image Databases},
  booktitle = {The 10th Scandinavian Conference on Image Analysis},
  pages     = {51--58},
  year      = {1997},
  editor    = {Michael Frydrych and Jussi Parkkinen and Ari Visa},
  address   = {Lappeenranta, Finland},
  month     = {June},
  keywords  = {image similarity, image database organization, agreement statistics, VG:SqP1997key},
  url       = {http://vision.unige.ch/publications/postscript/97/SquirePun_scia97.ps.gz},
  abstract  = {There has recently been a significant interest in the organization and \emph{content-based} querying of large image databases. Most frequently, the underlying hypothesis is that image similarity can be characterized by low-level image features, without further abstraction. This assumes that there is sufficient agreement between machine and human measures of image similarity for the database to be useful. We wish to assess the veracity of this assumption. To this end, we develop measures of the agreement between two partitionings of an image set; we show that it is vital to take chance agreements into account. We then use these measures to assess the agreement between human subjects and a variety of machine clustering techniques on a set of images. The results can be used to select and refine image distance measures for querying and organizing image databases.},
  owner     = {steph},
  timestamp = {2008.05.04},
  url1      = {http://vision.unige.ch/publications/postscript/97/SquirePun_scia97.pdf},
  vgclass   = {refpap},
  vgproject = {viper},
}
2017-11-20 15:06:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3531857132911682, "perplexity": 12733.358681825193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806070.53/warc/CC-MAIN-20171120145722-20171120165722-00391.warc.gz"}
https://tex.stackexchange.com/questions/528720/changing-section-numbering-ruins-toc
Changing Section Numbering Ruins TOC

I am trying to include the word "Appendix" in the section titles before the appendix number. Trying \renewcommand{\thesection}{Appendix \Alph{section}:} works, but messes up the table of contents: the section name is printed over the section number. Minimal working example:

\documentclass{article}
\begin{document}
\tableofcontents
\section{Intro}
content
\appendix
\renewcommand{\thesection}{Appendix \Alph{section}:}
\section{Test 1}
content
\section{Test 2}
content
\end{document}

Any suggestions much appreciated.

Answer: You can load the appendix package and use the appendices environment. There are two ways to do that, and there are also other goodies for the table of contents and the header. See the documentation, pp. 3-4, for details.
2020-09-30 19:15:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8965118527412415, "perplexity": 3751.211234619323}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127397.84/warc/CC-MAIN-20200930172714-20200930202714-00341.warc.gz"}
https://gmatclub.com/forum/what-percentage-loss-will-a-merchant-incur-if-he-marks-his-goods-up-by-188015.html
# What percentage loss will a merchant incur if he marks his goods up by

What percentage loss will a merchant incur if he marks his goods up by x% over his cost price and then offers a discount of x% on his selling price?

A. 0 %
B. 2x/100 %
C. x^2/100 %
D. x %
E. 2x %

Taking smart numbers: let C.P. = $100 and x = 20% over C.P., so the marked-up price is $120. After a discount of x% = 20%, S.P. = 80% of 120 = $96. Therefore the loss is $4, and hence the loss in % = 4%. Plugging the numbers into the options:

A. 0 %: not true, because the loss is 4%
B. 2x/100 % = 2*20/100 = 0.4%: not true
C. x^2/100 % = 20^2/100 = 4%: true
D. x % = 20%: not true
E. 2x % = 2*20 = 40%: not true

Hi Ashishmathew01081987, I also agree with your approach. The OA is C indeed!

Ashish's method is a very efficient one; I recommend taking advantage of the answer choices and the multiple-choice format whenever possible. Well done! I'll add a little bit of math background, since I know that many students prefer to understand the underlying mathematics. The question asks for the loss suffered by the merchant, that is to say, 100% minus the final percentage of the cost price after applying the x% markup and the x% discount. To find the price after the x% markup and x% discount, multiply [1 + (x/100)] * [1 - (x/100)]. This expression looks a little messy, but it's actually a classic quadratic: (a + b)(a - b) = a^2 - b^2. So, the multiplier for the sale price is [1 - (x/100)^2]. Multiply by 100% to convert to a percent, then subtract from 100% to determine the loss:

100% - [1 - (x/100)^2] * 100% = (x/100)^2 * 100% = (x^2/100) %

Actually, in such problems, whenever we add x% and then subtract x% from a value, we always end up with a quadratic expression:

Original price: 100
After the markup: 100 + x
Amount removed by the x% discount: $$(100+x)\cdot \frac{x}{100}$$

Coming back to the problem: among the answer choices, the only quadratic option is C. Had there been more additions of x% and discounts of x%, the power of x would increase. It may not be applicable here directly, but there is a formula for repeated liquid replacement from which I took the hint for solving this:
$$\text{Final Concentration} = \text{Initial Concentration}\left(1 - \frac{\text{replaced}}{\text{Total}}\right)^{\text{Number of replacements}}$$
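The algebra above can be sanity-checked numerically. A minimal sketch (the cost price of 100 and the sample markups are arbitrary illustrations, not values from the question):

```python
def loss_percent(x):
    """Percentage loss after an x% markup followed by an x% discount."""
    cost = 100.0
    selling = cost * (1 + x / 100) * (1 - x / 100)
    return (cost - selling) / cost * 100

for x in (10, 20, 25):
    # the computed loss should match x**2 / 100 in every case
    print(x, loss_percent(x), x**2 / 100)
```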
2019-10-16 19:47:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385279178619385, "perplexity": 8690.394829063402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00340.warc.gz"}
https://planetmath.org/napoleonstheorem
# Napoleon’s theorem

###### Theorem.

If equilateral triangles are erected externally on the three sides of any given triangle, then their centres are the vertices of an equilateral triangle.

If we embed the statement in the complex plane, the proof is a mere calculation. In the notation of the figure, we can assume that $A=0$, $B=1$, and $C$ is in the upper half plane. The hypotheses are

$$\frac{1-0}{Z-0}=\frac{C-1}{X-1}=\frac{0-C}{Y-C}=\alpha \tag{1}$$

where $\alpha=\exp(\pi i/3)$, and the conclusion we want is

$$\frac{N-L}{M-L}=\alpha \tag{2}$$

where

$$L=\frac{1+X+C}{3}\qquad M=\frac{C+Y+0}{3}\qquad N=\frac{0+1+Z}{3}\;.$$

From (1) and the relation $\alpha^{2}=\alpha-1$, we get $X,Y,Z$:

$$X=\frac{C-1}{\alpha}+1=(1-\alpha)C+\alpha$$
$$Y=-\frac{C}{\alpha}+C=\alpha C$$
$$Z=1/\alpha=1-\alpha$$

and so

$$\begin{aligned}
3(M-L) &= Y-1-X = (2\alpha-1)C-1-\alpha\\
3(N-L) &= Z-X-C\\
&= (\alpha-2)C+1-2\alpha\\
&= (2\alpha-2-\alpha)C-\alpha+1-\alpha\\
&= (2\alpha^{2}-\alpha)C-\alpha-\alpha^{2}\\
&= 3(M-L)\,\alpha
\end{aligned}$$

proving (2).

Remarks: The attribution to Napoléon Bonaparte (1769-1821) is traditional, but dubious. For more on the story, see http://www.mathpages.com/home/kmath270/kmath270.htm (MathPages).
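As a quick numeric sanity check of the calculation (an illustration added here, not part of the original entry), the construction can be replayed with concrete complex numbers; the vertex C below is an arbitrary point in the upper half plane.

```python
import cmath

alpha = cmath.exp(1j * cmath.pi / 3)
A, B, C = 0, 1, 0.3 + 1.1j   # C is an arbitrary choice in the upper half plane

X = (1 - alpha) * C + alpha  # apex of the equilateral triangle erected on BC
Y = alpha * C                # apex of the triangle erected on CA
Z = 1 - alpha                # apex of the triangle erected on AB

L = (B + X + C) / 3          # centres of the three erected triangles
M = (C + Y + A) / 3
N = (A + B + Z) / 3

# the three side lengths of triangle LMN agree, so LMN is equilateral
print(abs(M - L), abs(N - M), abs(N - L))
```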
2019-09-18 07:47:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 28, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9385843873023987, "perplexity": 496.0196696065739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573258.74/warc/CC-MAIN-20190918065330-20190918091330-00428.warc.gz"}
https://jeeneetqna.in/428/following-options-correct-sequence-events-during-mitosis
# Which of the following options gives the correct sequence of events during mitosis?

Which of the following options gives the correct sequence of events during mitosis?

(a) Condensation $\to$ Nuclear membrane disassembly $\to$ Arrangement at equator $\to$ Centromere division $\to$ Segregation $\to$ Telophase
(b) Condensation $\to$ Crossing over $\to$ Nuclear membrane disassembly $\to$ Segregation $\to$ Telophase
(c) Condensation $\to$ Arrangement at equator $\to$ Centromere division $\to$ Segregation $\to$ Telophase
(d) Condensation $\to$ Nuclear membrane disassembly $\to$ Crossing over $\to$ Segregation $\to$ Telophase

Ans: (a) Condensation → Nuclear membrane disassembly → Arrangement at equator → Centromere division → Segregation → Telophase

Explanation: Mitosis is the equational division of the nucleus. Chromatin fibers first condense to form chromosomes at prophase, and the nuclear membrane starts to disappear. During metaphase, all the chromosomes align at the equator and attach to the spindle fibers by their centromeres. During anaphase, the centromeres divide and the chromosomes segregate to opposite poles of the cell, and then telophase occurs.
2023-04-01 23:48:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39499592781066895, "perplexity": 10357.088577183347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00677.warc.gz"}
https://zeeshanakhter.com/tag/available/
## 5 Examples of when to use silence?

Posted: January 27, 2012 in Random Posts

1. During arguments. One of the best times to use the power of silence is during an argument: stay silent. The ego will be trying to force its way out of you and finish the argument, but you are the controller, not the ego. When someone is shouting at you, looking for an argument or just picking on you, you can literally take all the power away from them and keep all your energy by simply looking at them and saying absolutely nothing. This is extremely difficult to do but very powerful.

2. Gossiping. When there is a crowd of people in the workplace, there are gossipers who speak about other people. The thing with gossiping is that it is contagious. When we don't like someone and someone else starts speaking about them, we naturally tend to voice our opinion; I've done it lots of times and have to stop myself. Try and stop yourself from catching the virus of gossiping and use the power of silence whenever it occurs. If you are a gossiper yourself and people around start to notice that you are 'not your usual self', don't give an explanation; just leave, saying you've got work to do or whatever, and pretty soon you'll be out of the gossiping loop.

3. When someone is talking. Silence is a great tool for counselors if used in the right way. It's also great when listening to friends and family. Just let people talk and listen to them, and use your facial expressions and movements to acknowledge that you are listening. This can be a tough thing to do but extremely powerful for both you, as the listener, and the talker. You will find that as you practice this, more people come to talk to you, as you will be known as a listener. Obviously there are times to speak during the conversation; however, when you do, make sure it is to paraphrase what the talker is saying or to ask questions to get more information. Don't make it about yourself. When people want to know more about you they will ask you questions; this is the time to talk about yourself, but always have the listener be part of the conversation.

4. When the house is empty. The silence of the home can be quite disturbing to some people, as there is a natural need to fill the void of silence. We turn on the radio, play some music, call friends or family, or turn on the TV to fill this void. Having a completely silent home when you are alone does not mean you are lonely; it simply means you are recharging your mind and giving it some downtime. Silence helps us to work through, in our minds, the events of the day, or project what we want to happen during the day ahead. I am a night owl and also a morning lark. I love the silence when I know everyone is safe and tucked up in bed and I can write or work on the computer.
At the weekends I go to bed with my wife to talk about the day's events or our plans and just have a laugh or whatever. When my wife, who loves her sleep, has gone to sleep, I kiss her goodnight and get up for a few hours to write, as this is the time I am most inspired. I am also the first person up in the morning, which means I have another 2 hours to write or work on my online projects. I know it's harder when you are alone; however, silent time can be used to think about the life you want and work out ways to get it.

5. Quiet reflection. This is a fantastic way to connect with the world in a way that is not possible when you are surrounded by hubbub and noise. 15 minutes in the morning and 15 minutes in the evening simply focusing on your breath can do wonders for both mind and body. I truly believe that with practice quiet reflection can help us reach a level of deep inner calm. The state of silence is a way of reaching another part of your mind not possible when going about your daily routine. This other part of your mind is connected in every way to the world around you, and with practice you can tap into this knowledge.

## Conversion PDF to Byte and Vice Versa

Posted: January 13, 2012 in Random Posts

A simple conversion; enjoy the world of Java.

```java
public static byte[] convertPDFToByteArray(String sourcePath) {
    byte[] bytes = null;
    InputStream inputStream;
    File file = new File(sourcePath);
    try {
        inputStream = new FileInputStream(file);
        bytes = new byte[(int) file.length()];
        // read the file contents into the byte array
        inputStream.read(bytes);
        inputStream.close();
    } catch (IOException ex) {
        Logger.getLogger(DesktopApplication3View.class.getName()).log(Level.SEVERE, null, ex);
    }
    return bytes;
}

public void convertByteArrayToPDF(String targetPath, byte[] bytes) {
    OutputStream out;
    try {
        out = new FileOutputStream(targetPath);
        out.write(bytes);
        out.close();
    } catch (IOException ex) {
        // FileNotFoundException is a subclass of IOException, so one catch block suffices
        Logger.getLogger(DesktopApplication3View.class.getName()).log(Level.SEVERE, null, ex);
    }
}
```

· Introduction

In this tutorial we will create a simple web service and a client web application using the Eclipse IDE along with the Lomboz plug-in. We will also deploy and test the web service on the Tomcat 5.5.4 web application server. This application, while simple, provides a good introduction to Web service development and some of the Web development tools available.

· Environment

J2SDK 1.4.2 http://java.sun.com/
Eclipse 3.1 http://www.eclipse.org/
Tomcat 5.5.4 http://tomcat.apache.org/
Lomboz 3.1RC2 http://lomboz.objectweb.org/

· Installation

Install JDK (in D:\j2sdk1.4.2_04)
Install Tomcat (in E:\Tomcat5.5)
Install Eclipse (in E:\Eclipse3.1)
Install Lomboz (in E:\Eclipse3.1)

· Setting up

1. Set up the installed JRE in Eclipse (Windows -> Preferences -> Java -> Installed JREs).
2. Set up the installed runtime for the server in Eclipse (Windows -> Preferences -> Server -> Installed Runtimes).
3. Set up the Server view in Eclipse (Windows -> Show View -> Other).
4. Set up the Tomcat server by right-clicking and selecting the New -> Server option from the Server view in Eclipse.

· Creating a Web service

1. Create a new Dynamic Web Project in Eclipse (File -> New -> Other).
2. Enter the name as "WebServiceTutorial", select the project location as "E:\Test" and select Apache Tomcat v5.5 as the Target server.
3. Now create a new Java class from the Project Explorer (Dynamic Web Projects -> Java Source -> New -> Class).
4. Enter the name as "Hello" and the package as "com.tutorial". Add a simple method in the "Hello" class as below.

```java
public String sayHello(String name){
    return "Hello " + name;
}
```

5. Save and build the project.
6. Create a new Web service in Eclipse (File -> New -> Other). Select Generate a proxy, select Test the Web service, and select Overwrite files without warning.
7. Select or enter the Bean name as "com.tutorial.Hello". This is the Java class that we just created.
8. Continue the wizard by clicking Next and finish. On Finish, the Tomcat server starts up and launches the Test client.
9. Verify the generated contents. Look for Hello.class and the generated JSPs.
10. Verify the Tomcat folder and ensure the newly created web applications are present: WebServiceTutorial, WebServiceTutorialClient.
11. We can also run the following URL from the browser to access/test the Web service: http://localhost:8080/WebServiceTutorialClient/sampleHelloProxy/TestClient.jsp
12. If the servlet error "org.eclipse.jst.ws.util.JspUtils cannot be resolved or is not a type" is thrown in the browser, then copy the webserviceutils.jar file from E:\Eclipse3.1\eclipse\plugins\org.eclipse.jst.ws.consumption_0.7.0 into the WEB-INF\lib folder of the WebServiceTutorialClient application and restart the Tomcat server.
13. The browser displays the methods available in the web service.
14. Click on the sayHello(..) method, enter your name (e.g. "Jeeva") in the inputs section and click "Invoke".
15. The browser greets you using the web service.
16. The WSDL for the Hello Web service can be found in E:\Test\WebServiceTutorial\WebContent\wsdl\Hello.wsdl. On double-click, the WSDL opens in a graphical editor.
17. Right-click on the WSDL file and explore the options to test the web service, publish the WSDL file, generate a client, etc.

· Conclusion

In this tutorial we learned how to create a simple web service and a client web application using the Eclipse IDE along with the Lomboz plug-in. We also deployed and tested the web service on the Tomcat 5.5.4 web application server. This application, while simple, provides a good introduction to Web service development and some of the Web development tools available.

You can download µTorrent from here. So, let's start the smart guide! I am going to show you how we can tweak µTorrent's download speed in order to get better and time-saving results.

1) Open µTorrent, go to Options > Preferences and do what I did! (check also what I checked)
2) Now open Bandwidth and do what I did in the screenshot below!
3) Now move on to BitTorrent and do what I did in the screenshot below! (Check whatever I have checked)
4) Now move on to Queueing and follow exactly what I did in the screenshot!
2020-02-20 20:25:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9490974545478821, "perplexity": 791.0848662695262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145282.57/warc/CC-MAIN-20200220193228-20200220223228-00056.warc.gz"}
https://quant.stackexchange.com/questions/38363/long-gamma-vs-vega
# Long Gamma vs Vega

What is the difference between being long gamma and being long vega? I understand that gamma is the vol of delta and that vega is the vol of the underlying. However, I have also found that being long gamma and long vega basically means being long options. In that case, what is the difference between the two? Cheers.

- Gamma increases as T -> 0 and S -> K; vega increases as T becomes large. So they vary with maturity in different ways. – Alex C Feb 21 '18 at 6:09
- So the difference between the two is a function of time? – Noodle22 Feb 21 '18 at 6:50
- I am not saying that. I only wanted to "prove" to you that they are not the same thing. – Alex C Feb 21 '18 at 16:29

Long gamma is being long realized volatility. Long vega is being long implied volatility. Long gamma positions benefit when realized volatility goes up or the actual underlying has volatility. Long vega positions benefit when the price of volatility goes up.

Being long plain vanilla options, one is long both gamma and vega. However, this is not so if one starts to combine options in strategies. One can construct positions where one is long gamma and short vega. A simple example would be a calendar spread: if one is long an at-the-money call with short maturity, one is long gamma and long vega. If one shorts an at-the-money longer-dated call on the same underlying, one is short gamma and short vega. However, the short longer-dated call will be less long gamma than the shorter-dated one, and short more vega than the shorter-dated one. The combined position will be long gamma and short vega. The position will benefit if realized volatility goes up before the shorter-dated call expires, and if implied volatility goes down.

- Okay, that makes a lot of sense! Thanks for the example, that really made it easier to understand. – Noodle22 Feb 21 '18 at 22:01

Vega (denoted by $\nu$ in what follows) is the first order sensitivity of the option price with respect to volatility $\sigma$. Gamma (denoted by $\Gamma$ in what follows) is the second order sensitivity of the option price with respect to the underlying spot price $S$. Because for a semi-martingale $(S_t)_{t \geq 0}$ there is a direct link between the variance of the random variable $S_t$ for any fixed $t$ and its quadratic variation over $[0,t]$, it is only logical that there exists a link between vega and gamma. Under BS assumptions, one can show that for an option evaluated at $t$ with time to maturity $\tau = T-t$
$$\nu(\tau) = \Gamma(\tau) \, \sigma S_t^2 \tau$$
See Appendix A of Chapter 5 of Bergomi's book "Stochastic Volatility Modeling" for a demonstration, and this Wiki page to see that it indeed holds under BS.

- Thank you so much, this was a very comprehensive explanation. – Noodle22 Feb 21 '18 at 22:00
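The identity $\nu = \Gamma \, \sigma S^2 \tau$ can be checked directly from the closed-form Black-Scholes greeks. A minimal sketch (the spot, strike, rate, volatility and maturity below are arbitrary illustrative values, not figures from the thread):

```python
from math import exp, log, pi, sqrt

def bs_gamma_vega(S, K, r, sigma, tau):
    """Black-Scholes gamma and vega of a European option (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    pdf = exp(-0.5 * d1**2) / sqrt(2 * pi)   # standard normal density at d1
    gamma = pdf / (S * sigma * sqrt(tau))
    vega = S * pdf * sqrt(tau)
    return gamma, vega

S, K, r, sigma, tau = 100.0, 95.0, 0.01, 0.25, 0.5
gamma, vega = bs_gamma_vega(S, K, r, sigma, tau)
print(vega, gamma * sigma * S**2 * tau)      # the two numbers should coincide
```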
2020-11-28 11:19:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8332860469818115, "perplexity": 1151.1477971583997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195417.37/warc/CC-MAIN-20201128095617-20201128125617-00404.warc.gz"}
http://www.ck12.org/analysis/Degenerate-Conics/lesson/Degenerate-Conics/r11/
# Degenerate Conics

## Point, line, or pair of lines formed when some coefficients of a conic equal zero.

The general equation of a conic is $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$. This form is so general that it encompasses all regular lines, singular points and degenerate hyperbolas that look like an X. This is because there are a few special cases of how a plane can intersect a two-sided cone. How are these degenerate shapes formed?

#### Guidance

Degenerate conic equations simply cannot be written in graphing form. There are three types of degenerate conics:

1. A singular point, which is of the form $\frac{(x-h)^2}{a}+\frac{(y-k)^2}{b}=0$. You can think of a singular point as a circle or an ellipse with an infinitely small radius.
2. A line, which has coefficients $A=B=C=0$ in the general equation of a conic. The remaining portion of the equation is $Dx+Ey+F=0$, which is a line.
3. A degenerate hyperbola, which is of the form $\frac{(x-h)^2}{a}-\frac{(y-k)^2}{b}=0$. The result is two intersecting lines that make an "X" shape. The slopes of the intersecting lines forming the X are $\pm\frac{\sqrt{b}}{\sqrt{a}}$. This is because $\sqrt{b}$ goes with the $y$ portion of the equation and is the rise, while $\sqrt{a}$ goes with the $x$ portion of the equation and is the run.

Example A: Transform the conic equation into standard form and sketch. Solution: This is a line.

Example B: Transform the conic equation into standard form and sketch. Solution: The point (2, 1) is the result of this degenerate conic.

Example C: Transform the conic equation into standard form and sketch. Solution: This is a degenerate hyperbola.

Concept Problem Revisited: When you intersect a plane with a two-sided cone where the two cones touch, the intersection is a single point. When you intersect a plane with a two-sided cone so that the plane touches the edge of one cone, passes through the central point and continues touching the edge of the other cone, this produces a line. When you intersect a plane with a two-sided cone so that the plane passes vertically through the central point of the two cones, it produces a degenerate hyperbola.

#### Vocabulary

A degenerate conic is a conic that does not have the usual properties of a conic. Since some of the coefficients of the general equation are zero, the basic shape of the conic is merely a point, a line or a pair of lines. The connotation of the word degenerate means that the new graph is less complex than the rest of the conics.

#### Guided Practice

1. Create a conic that describes just the point (4, 7).
2. Transform the conic equation into standard form and sketch.
3. Can you tell just by looking at a conic in general form if it is a degenerate conic?

Answer to 3: In general you cannot tell if a conic is degenerate from the general form of the equation. You can tell that the degenerate conic is a line if there are no $x^2$ or $y^2$ terms, but other than that you must always try to put the conic equation into graphing form and see whether it equals zero, because that is the best way to identify degenerate conics.

#### Practice

1. What are the three degenerate conics?
2-10. Change each equation into graphing form and state what type of conic or degenerate conic it is.
11-15. Sketch each conic or degenerate conic.

### Vocabulary

Conic: Conic sections are those curves that can be created by the intersection of a double cone and a plane. They include circles, ellipses, parabolas, and hyperbolas.

Degenerate conic: A degenerate conic is a conic that does not have the usual properties of a conic section. Since some of the coefficients of the general conic equation are zero, the basic shape of the conic is merely a point, a line or a pair of intersecting lines.

Degenerate hyperbola: A degenerate hyperbola is an example of a degenerate conic. Its equation takes the form $\frac{(x-h)^2}{a}-\frac{(y-k)^2}{b}=0$. It looks like two intersecting lines that make an "X" shape.
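Since the worked equations of the examples did not survive in the text above, here is a small illustrative case of my own (the specific polynomial is a hypothetical choice, not one of the lesson's exercises): factoring shows that $x^2 - y^2 - 2x + 1 = 0$ is a degenerate hyperbola, a pair of lines through the point (1, 0). The sketch assumes SymPy is available.

```python
import sympy as sp

x, y = sp.symbols("x y")
conic = x**2 - y**2 - 2*x + 1      # an illustrative degenerate conic

# factoring reveals a pair of intersecting lines (the "X" shape)
print(sp.factor(conic))            # (x - y - 1)*(x + y - 1)

# solving for y gives the two lines explicitly: y = x - 1 and y = 1 - x
print(sp.solve(sp.Eq(conic, 0), y))
```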
2016-02-10 13:36:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 1, "texerror": 0, "math_score": 0.5836611986160278, "perplexity": 627.6952405177964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159376.39/warc/CC-MAIN-20160205193919-00016-ip-10-236-182-209.ec2.internal.warc.gz"}
https://math.rutgers.edu/academics/undergraduate/courses/course-materials/135/1430-lecture-topics-3
# 640:135 - Calculus I

## Lecture topics - Spring 2018

| Lecture | Sections | Description |
|---|---|---|
| 1 | 1.2, 1.3 | Precalculus review: Real line, coordinate plane, distance, circles, straight lines. Trig review: Radians, definition of trig functions, graphs of sin, cos, tan, sec. (Refer to Appendix E.) |
| 2 | 1.4 | Precalculus review: Functions, graphs, composition of functions. |
| 3 | 2.1, 2.2 | Limits: Informal definition and discussion of intuitive meaning. Rules for limits, computing limits of algebraic functions. One-sided limits. Limits of trig functions. Infinite limits. |
| 4 | 2.2 | Topics of lecture 3, continued. |
| 5 | 2.3 | Definition of continuity of a function at a point. Testing continuity. Continuity of a function on an interval. Intermediate value theorem and root location theorem. |
| 6 | 2.4 | Exponential functions and logarithmic functions. Definition of e, properties and inverse relation of the exp and ln functions. Exponential growth. Compound interest and continuous compounding. |
| 7 | 3.1 | Definition of the derivative as a limit, direct calculation of derivatives using the definition. The derivative as slope of tangent line. Equation of tangent line and of normal line. Relation between the graph of f and the graph of f'. Continuity and differentiability. Notations for the derivative. |
| 8 | 3.2, 3.3 | Calculation of derivatives, sum, product and quotient rules. Higher-order derivatives. Differentiation of trig functions and of e^x and ln(x). |
| 9 | 3.4 | The derivative as rate of change. Velocity and acceleration. |
| 10 | 3.5 | The chain rule (for differentiating a composite function). |
| 11 |  | Catch up and review. |
| 12 |  | FIRST IN-CLASS MIDTERM EXAM. |
| 13 | 3.6 | Implicit differentiation. Derivative of ln(\|u\|). Logarithmic differentiation. |
| 14 | 3.7 | Related rates and applications. |
| 15 | 3.8 | Linear approximation. Differentials. Propagation of error in measurement, relative error, percentage error. Marginal cost, marginal revenue. |
| 16 | 4.1, 4.2 | Absolute maximum and absolute minimum of a function defined on an interval. The extreme value theorem. Relative extrema. Critical numbers and critical points. Finding critical numbers and critical points. Finding absolute extrema among the critical numbers and the endpoints of an interval. Statements of Rolle's theorem and of the mean value theorem, and Example 1. |
| 17 | 4.3 | Increasing and decreasing functions. Finding intervals of increase and of decrease. The first-derivative test for a relative maximum or minimum. Concavity up or down, and the second derivative. Inflection points. The second-derivative test for a relative maximum or minimum. Application to sketching graphs. |
| 18 | 4.4 | Limits as x approaches plus or minus infinity, and horizontal asymptotes. Infinite limits and vertical asymptotes. Application to sketching graphs with horizontal and/or vertical asymptotes. |
| 19 | 4.5 | L'Hopital's rule (for evaluating limits involving indeterminate forms). |
| 20 | 4.6 | Optimization applications: geometric and physical problems. |
| 21 |  | Catch up and review. |
| 22 |  | SECOND IN-CLASS MIDTERM EXAM. |
| 23 | 4.7 | Optimization applications in business: marginal analysis, the demand function, maximizing profit or revenue, minimizing average cost. |
| 24 | 5.1 | Antiderivatives, indefinite integrals. |
| 25 | 5.2, 5.3 | Riemann sums for approximating the area under a curve. |
| 26 | 5.4 | The first fundamental theorem of calculus (for evaluating a definite integral using antidifferentiation). The second fundamental theorem of calculus. |
| 27 | 5.5 | Integration by substitution, for both indefinite and definite integrals. |
| 28 |  | Catch up and review. |
2020-05-25 21:44:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.802130401134491, "perplexity": 1991.1039965222449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00032.warc.gz"}
https://learnmathonline.org/LinearAlgebra/SystemsOfLinearEquations.html
Systems of Linear Equations

A system of linear equations with $$n$$ variables $$x_1,\ldots,x_n$$ and $$m$$ equations can be written as follows:

$\begin{eqnarray} \begin{array}{ccccccccc} a_{11}x_1&+&a_{12}x_2&+&\cdots &+&a_{1n}x_n&=&b_1\\ a_{21}x_1&+&a_{22}x_2&+&\cdots &+&a_{2n}x_n&=&b_2\\ \vdots&&\vdots&& &&\vdots&&\vdots \tag{1}\\ a_{m1}x_1&+&a_{m2}x_2&+&\cdots &+&a_{mn}x_n&=&b_m. \end{array} \end{eqnarray}$

A solution is an $$n$$-tuple $$(s_1,s_2,\ldots,s_n)$$ that satisfies each equation when we substitute $$x_1=s_1,x_2=s_2,\ldots,x_n=s_n$$. The solution set is the set of all solutions.

Example.

$\begin{eqnarray*} \begin{array}{rcrcrcr} x_1&& &+&x_3&=&3\\ &&x_2&-&2x_3&=&-1 \end{array} \end{eqnarray*}$

The solution set (on $$\mathbb R$$) is $$\{(-s+3,2s-1,s)\; |\; s\in \mathbb R\}$$. There are infinitely many solutions because of the free variable $$x_3$$.

Possibilities of solutions of a linear system:

• System has no solution (Inconsistent)
• System has a solution (Consistent), with either a unique solution or infinitely many solutions

Definition. The system (1) is called an underdetermined system if $$m < n$$, i.e., fewer equations than variables. The system (1) is called an overdetermined system if $$m > n$$, i.e., more equations than variables.

The system (1) of linear equations can be written as a matrix equation and as a vector equation.

The matrix equation: $$A\overrightarrow{x}=\overrightarrow{b}$$, where

$A=\left[\begin{array}{cccc} a_{11}&a_{12}&\cdots &a_{1n}\\ a_{21}&a_{22}&\cdots &a_{2n}\\ \vdots&\vdots&\ddots &\vdots\\ a_{m1}&a_{m2}&\cdots &a_{mn} \end{array}\right],\; \overrightarrow{x}=\left[\begin{array}{c}x_1\\x_2\\ \vdots\\x_n \end{array} \right], \mbox{ and } \overrightarrow{b}=\left[\begin{array}{c} b_1\\b_2\\ \vdots\\b_m \end{array} \right].$

$$A$$ is the coefficient matrix. The augmented matrix is

$[A\:\overrightarrow{b}]=\left[\begin{array}{ccccc} a_{11}&a_{12}&\cdots &a_{1n}&b_1\\ a_{21}&a_{22}&\cdots &a_{2n}&b_2\\ \vdots&\vdots&\ddots &\vdots&\vdots\\ a_{m1}&a_{m2}&\cdots &a_{mn}&b_m \end{array}\right].$

The vector equation: $$x_1\overrightarrow{a_1}+x_2\overrightarrow{a_2}+\cdots+x_n\overrightarrow{a_n}=\overrightarrow{b}$$, where $$A=[\overrightarrow{a_1}\:\overrightarrow{a_2}\:\cdots\overrightarrow{a_n}]$$.

Example.

$\begin{eqnarray*} \begin{array}{rcrcrcr} &&2x_2 &-&8x_3&=&8\\ x_1&-&2x_2 &+&x_3&=&0\\ -4x_1&+&5x_2&+&9x_3&=&-9 \end{array} \end{eqnarray*}$

The matrix equation is $$A\overrightarrow{x}=\overrightarrow{b}$$ where

$A=\left[\begin{array}{rrr}0&2&-8\\1&-2&1\\-4&5&9\end{array} \right],\; \overrightarrow{x}= \left[\begin{array}{c}x_1\\x_2\\x_3 \end{array} \right], \text{ and } \overrightarrow{b}= \left[\begin{array}{r}8\\0\\-9 \end{array} \right].$

The augmented matrix is

$[A\:\overrightarrow{b}]=\left[\begin{array}{rrr|r}0&2&-8&8\\1&-2&1&0\\-4&5&9&-9\end{array} \right].$

The vector equation is

$$x_1\left[\begin{array}{r}0\\1\\-4 \end{array} \right] +x_2\left[\begin{array}{r}2\\-2\\5 \end{array} \right] +x_3\left[\begin{array}{r}-8\\1\\9 \end{array} \right] =\left[\begin{array}{r}8\\0\\-9 \end{array} \right].$$

You may verify that one solution is $$(x_1,x_2,x_3)=(29,16,3)$$. Is it the only solution? A quick numerical check is sketched below.
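To answer the closing question numerically, one can solve the system and check the rank of the coefficient matrix. A minimal sketch in Python using NumPy (NumPy is an assumption here, not something the page itself relies on):

```python
import numpy as np

A = np.array([[0, 2, -8],
              [1, -2, 1],
              [-4, 5, 9]], dtype=float)
b = np.array([8, 0, -9], dtype=float)

x = np.linalg.solve(A, b)          # succeeds because A is invertible
print(x)                           # [29. 16.  3.]
print(np.linalg.matrix_rank(A))    # 3 -> full rank, so the solution is unique
```

Since the coefficient matrix has full rank, the answer to "Is it the only solution?" is yes.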
2023-03-29 07:03:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601289629936218, "perplexity": 212.77397584301596}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00348.warc.gz"}
https://proofwiki.org/wiki/Symbols:E
Symbols:E

Identity Element: $e$
Denotes the identity element in a general algebraic structure. If $e$ is the identity of the structure $\struct {S, \circ}$, then a subscript is often used: $e_S$. This is particularly common when more than one structure is under discussion. The $\LaTeX$ code for $e_S$ is e_S.

Euler's Number: $e$
Euler's number $e$ is the base of the natural logarithm $\ln$. $e$ is defined to be the unique real number such that the value of the (real) exponential function $e^x$ has the same value as the slope of the tangent line to the graph of the function. The $\LaTeX$ code for $e$ is e.

Eccentricity: $e$
Used to denote the eccentricity of a conic section. The $\LaTeX$ code for $e$ is e.

exa-: $\mathrm E$
The Système Internationale d'Unités symbol for the metric scaling prefix exa, denoting $10^{\, 18 }$, is $\mathrm { E }$. Its $\LaTeX$ code is \mathrm {E}.

Hexadecimal digit: $\mathrm E$ or $\mathrm e$
The hexadecimal digit $14$. Its $\LaTeX$ code is \mathrm E or \mathrm e.

Duodecimal digit: $\mathrm E$
The duodecimal digit $11$. Its $\LaTeX$ code is \mathrm E.

Set: $E$
Used by some authors to denote a general set. The $\LaTeX$ code for $E$ is E.

Complete Elliptic Integral of the Second Kind: $\map E k$
$\ds \map E k = \int \limits_0^{\pi / 2} \sqrt {1 - k^2 \sin^2 \phi} \rd \phi$ is the complete elliptic integral of the second kind, and is a function of $k$, defined on the interval $0 < k < 1$. The $\LaTeX$ code for $\map E k$ is \map E k.

Incomplete Elliptic Integral of the Second Kind: $\map E {k, \phi}$
$\ds \map E {k, \phi} = \int \limits_0^\phi \sqrt {1 - k^2 \sin^2 \phi} \rd \phi$ is the incomplete elliptic integral of the second kind, and is a function of the variables $k$, defined on the interval $0 < k < 1$, and $\phi$, defined on the interval $0 \le \phi \le \pi / 2$. The $\LaTeX$ code for $\map E {k, \phi}$ is \map E {k, \phi}.

Experiment: $\mathcal E$
An experiment, which can conveniently be denoted $\EE$, is a probability space $\struct {\Omega, \Sigma, \Pr}$. The $\LaTeX$ code for $\mathcal E$ is \mathcal E or \EE.

Expectation: $\expect X$
Let $\struct {\Omega, \Sigma, \Pr}$ be a probability space. Let $X$ be a real-valued discrete random variable on $\struct {\Omega, \Sigma, \Pr}$. The expectation of $X$, written $\expect X$, is defined as $\expect X := \ds \sum_{x \mathop \in \image X} x \map \Pr {X = x}$ whenever the sum is absolutely convergent, that is, when $\ds \sum_{x \mathop \in \image X} \size {x \map \Pr {X = x} } < \infty$. The $\LaTeX$ code for $\expect X$ is \expect X.

Conditional Expectation: $\expect {X \mid B}$
Let $\struct {\Omega, \Sigma, \Pr}$ be a probability space. Let $X$ be a discrete random variable on $\struct {\Omega, \Sigma, \Pr}$. Let $B$ be an event in $\struct {\Omega, \Sigma, \Pr}$ such that $\map \Pr B > 0$. The conditional expectation of $X$ given $B$ is written $\expect {X \mid B}$ and defined as $\expect {X \mid B} = \ds \sum_{x \mathop \in \image X} x \condprob {X = x} B$, where $\condprob {X = x} B$ denotes the conditional probability that $X = x$ given $B$, whenever this sum converges absolutely. The $\LaTeX$ code for $\expect {X \mid B}$ is \expect {X \mid B}.

Error Function: $\erf$
The error function is the following improper integral, considered as a real function $\erf : \R \to \R$: $\map {\erf} x = \ds \dfrac 2 {\sqrt \pi} \int_0^x \map \exp {-t^2} \rd t$, where $\exp$ is the real exponential function. Its $\LaTeX$ code is erf.
Complementary Error Function: $\erfc$
The complementary error function is the real function $\erfc: \R \to \R$ defined by
$\ds \map {\erfc} x = 1 - \map \erf x = 1 - \dfrac 2 {\sqrt \pi} \int_0^x \map \exp {-t^2} \rd t = \dfrac 2 {\sqrt \pi} \int_x^\infty \map \exp {-t^2} \rd t$
where $\erf$ denotes the error function and $\exp$ denotes the real exponential function. Its $\LaTeX$ code is erfc.

East: $\mathrm E$
East (Terrestrial): East is the direction on (or near) Earth's surface along the small circle in the direction of Earth's rotation in space about its axis. East (Celestial) is the corresponding direction on the celestial sphere. The $\LaTeX$ code for $\mathrm E$ is \mathrm E.

Energy: $E$
The usual symbol used to denote the energy of a body is $E$. Its $\LaTeX$ code is E.

Electric Field Strength: $\mathbf E$
The usual symbol used to denote electric field strength is $\mathbf E$. Some sources use the calligraphic form $\EE$. Its $\LaTeX$ code is \mathbf E.

Electromotive Force: $\EE$
The usual symbol used to denote electromotive force is $\EE$. Its $\LaTeX$ code is \EE.

Elementary Charge: $\E$
The symbol used to denote the elementary charge is usually $\E$ or $e$. The preferred symbol on $\mathsf{Pr} \infty \mathsf{fWiki}$ is $\E$. Its $\LaTeX$ code is \E.

Electrostatic Unit: $\mathrm {e.s.u.}$
The symbol for the electrostatic unit is $\mathrm {e.s.u.}$ Its $\LaTeX$ code is \mathrm {e.s.u.}.

Electromagnetic Unit: $\mathrm {e.m.u.}$
The symbol for the electromagnetic unit is $\mathrm {e.m.u.}$ Its $\LaTeX$ code is \mathrm {e.m.u.}.
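Returning to the error-function entries above: since $\map {\erfc} x = 1 - \map \erf x$, the two functions can be cross-checked numerically. A minimal sketch in Python, assuming only the standard-library functions math.erf and math.erfc (added here purely as an illustration, not part of the glossary):

```python
import math

# erf(x) + erfc(x) should equal 1 for every real x
for x in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(x, math.erf(x), math.erfc(x), math.erf(x) + math.erfc(x))
```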
2023-03-28 08:48:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9961171746253967, "perplexity": 678.8023356301168}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00060.warc.gz"}
https://www.cs.colostate.edu/AlphaZ/wiki/doku.php?id=polymodel_verifier
# Verifier

#### Bundle Setup

I have prepared a bundle including Eclipse itself: http://www.cs.colostate.edu/AlphaZ/bundles/eclipse-verifier-bundle-linux64.tar.gz

The bundle is based on Eclipse Classic 3.7.1. The above bundle contains all necessary plug-ins to use the following project. This verifier.IF.ada project contains examples to use the verifier. For trying out the verifier, I suggest that you download the bundle and the above archived project, and then import the tar from Import→General→Existing Projects Into Workspace. The examples are from our ompVerify paper (http://www.springerlink.com/content/0gh74j23115861g6/).

This is a jar file for the verifier. If run as a stand-alone jar file, it verifies the legality of parallelization of some of the example programs. When used as a library, please specify the program representation according to the textual interface specified at the bottom of this page and call VerifierExample.verify.

#### Source Access

The following plug-ins are installed as jar files under the dropins directory in the above bundle. Sources to these projects are available in the GeCoS repository. Please follow the instructions on this website (https://gforge.inria.fr/scm/?group_id=510) to check out sources for these plug-ins. All plug-ins are located under trunk/polytools-emf, and are licensed under EPL.

• fr.irisa.cairn.eclipse.tom
• fr.irisa.cairn.jnimap.isl
• fr.irisa.cairn.model.integerlinearalgebra
• fr.irisa.cairn.model.polymodel
• fr.irisa.cairn.model.polymodel.isl
• fr.irisa.cairn.model.polymodel.prdg
• org.polymodel.verifier

The following plug-ins are required to use the verifier.

• EMF - Eclipse Modeling Framework SDK (2.7.1), from the Indigo update site under the Modeling category
• Xtext Antlr Runtime Feature (2.0.0)

The bundle also contains subclipse for SVN access.

#### Using the Verifier

The provided interface takes the following information for each statement in a polyhedral region to verify:

• Polyhedral domain
• Schedule (as affine functions)
• Write access
• Annotation for each dimension of the schedule specifying the type of the dimension.

The output of the verifier is an instance of a VerifierOutput object, containing a flag that tells whether the program was valid, and a list of messages from the verifier listing violations found in the program. The messages should contain sufficient information to give feedback to the user. These inputs can easily be extracted from affine control loops (example later); array dataflow analysis is performed using the provided information, and then the resulting polyhedral reduced dependence graph (PRDG) is verified.

#### ISLSet and ISLMap

The verifier uses the Integer Set Library (http://freshmeat.net/projects/isl/), and accepts textual representations for ISLSets and ISLMaps.

• ISLSets: [<parameter indices>]->{ [<indices>] : <list of constraints involving parameters and indices> } where the constraints are delimited by any one of '&', '|', 'and', 'or'.
• ISLMaps: [<parameter indices>]->{ [<indices>] -> [<list of expressions involving parameters and indices>] } where the expressions are delimited by ','.

Note that ISL accepts much more general syntax than shown above, since ISLMaps are relations, and not functions, but the above is sufficient to express inputs to the verifier.

#### Example

The example is for C programs with OpenMP. The verifier is applicable for other programming languages with parallel constructs that can be viewed as doall parallelization.
In case of X10, if the only statement in the body of a loop is async, the loop can be considered as a doall loop.

    void matrix_multiply(int** A, int** B, int** C, int N) {
        int i, j, k;
        #pragma omp parallel for
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++) {
                C[i][j] = 0;                        //S0
                for (k = 0; k < N; k++) {
                    C[i][j] += A[i][k] * B[k][j];   //S1
                }
            }
        }
    }

##### Statement Domains

Each statement is surrounded by a set of loops with affine bounds. The polyhedral domain for each statement is simply the intersection of all loop bounds that surround the statement. There may be an if statement with affine constraints; such constraints from if statements are also intersected when computing the statement domain.

For the above matrix multiply, S0 is surrounded by the i,j loops and thus has a square domain, whereas S1 has a cubic domain.

• S0 : [N] -> { [i,j] : 0<=i<N & 0<=j<N }
• S1 : [N] -> { [i,j,k] : 0<=i<N & 0<=j<N & 0<=k<N }

##### Schedule

The input schedule given to the verifier is the sequential schedule, treating doall loops as sequential loops. Doall parallelism is later expressed as dimension types. The schedule is used to characterize the relative placement of the statements. In addition, we require that all statements are mapped to a common dimensional space via the scheduling specification. Although this is not a requirement, a common convention when applying polyhedral analyses to imperative programs is to use the 2d+1 representation. For a loop nest with maximum depth d, additional d+1 dimensions (with constants) are used to specify the textual ordering of the loops and statements.

For the above matrix multiply, the schedules are:

• S0 : [N] -> { [i,j] -> [0,i,0,j,0,0,0] }
• S1 : [N] -> { [i,j,k] -> [0,i,0,j,1,k,0] }

Note that the 5th dimension is 0 for S0 and 1 for S1, and all preceding dimensions are identical. This specifies that S0 is textually before S1 at this dimension. Also note that the last two dimensions of the schedule for S0 are padded with 0s, to match the number of dimensions.

##### Accesses

Both reads and writes are specified as a pair of variable name and access function. Variable names correspond to the arrays read/written in the program, and access functions correspond to the indexing into the arrays. The access functions are expressed as affine functions from the corresponding statement domains.

For the above example, S0 has only one access:

• Write C : [N] -> { [i,j] -> [i,j] }

S1 has a total of 4 accesses:

• Write C : [N] -> { [i,j,k] -> [i,j] }
• Read C : [N] -> { [i,j,k] -> [i,j] }
• Read A : [N] -> { [i,j,k] -> [i,k] }
• Read B : [N] -> { [i,j,k] -> [k,j] }

##### Dimension Types

One last piece of information given to the verifier is the annotation on each dimension of the schedule. There are three types of dimensions:

• SEQUENTIAL : This dimension should be interpreted as time steps
• PARALLEL : This dimension should be interpreted as a processor dimension
• ORDERING : This dimension only contains constants that express statement orderings

The parallel dimensions correspond to loops that are parallelized by the OpenMP for directive, or any other parallel construct for expressing doall parallelism.

##### private clause in OpenMP

When variables are declared as private in OpenMP, the variable is private to each iteration of the parallel loop. Therefore, it is expressed by adding additional dimensions to the memory accesses, indexed by the loop iterator of the parallel loop, so that each iteration of the parallel loop accesses a different memory location.
In other words, the following code:

    #pragma omp parallel for private(c)
    for (i = 0; i < N; i++)
        c += ...

is translated as:

    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] += ...

during verification.

#### Interface

##### Textual

The following code is a possible interface based on arrays of Strings for testing purposes.

    private static void matrix_multiply() {
        String[][] statements = new String[][] {
            new String[] {"S0", "[N] -> { [i,j] : 0<=i<N & 0<=j<N }", "[N] -> { [i,j] -> [0,i,0,j,0,0,0] }"},
            new String[] {"S1", "[N] -> { [i,j,k] : 0<=i<N & 0<=j<N & 0<=k<N }", "[N] -> { [i,j,k] -> [0,i,0,j,1,k,0] }"}
        };
        String[][] reads = new String[][] {
            new String[] {"S1", "A", "[N] -> { [i,j,k] -> [i,k] }"},
            new String[] {"S1", "B", "[N] -> { [i,j,k] -> [k,j] }"},
            new String[] {"S1", "C", "[N] -> { [i,j,k] -> [i,j] }"}
        };
        String[][] writes = new String[][] {
            new String[] {"S0", "C", "[N] -> { [i,j] -> [i,j] }"},
            new String[] {"S1", "C", "[N] -> { [i,j,k] -> [i,j] }"}
        };
        String[][] dims = new String[][] {
            //Legal 1D parallelization
            new String[] {"S0", "O,P,O,S,O,S,O"},
            new String[] {"S1", "O,P,O,S,O,S,O"},
            //illegal parallelization of k dimension
            //new String[] {"S0", "O,S,O,S,O,P,O"},
            //new String[] {"S1", "O,S,O,S,O,P,O"}
        };
    }

##### Generic Interface

For a more general interface, the user must construct the following data structure, called ADAInput, that represents the polyhedral region to analyze. In addition, a map, Map<CandidateStatement, List<DIM_TYPE>>, is required for specifying the dimension types for each statement. fr.irisa.cairn.model.polymodel.ada.factory.ADAUserFactory provides methods for constructing ADAInput.

    ADAInput
    |------variables : List<Variable>
    |------statements : List<CandidateStatement>

    CandidateStatement
    |------ID : String
    |------domain : PolyhedralDomain
    |------schedule : AffineMapping
    |------write : WriteAccess

    Access
    |-----variable : Variable
    |-----accessFunction : AffineMapping

    Variable
    |-----name : String

    WriteAccess->Access

ADAInput is the input to Array Dataflow Analysis, which is a separate module, and thus dimension types are not part of this data structure. Once these two data structures are ready, the following two lines will invoke the verifier.

    VerifierInput input = VerifierInput.build(adaInput, dimTypes);
    VerifierOutput output = Verifier.verify(ISLDefaultFactory.INSTANCE, input.prdg, input.schedules, input.memoryMaps, input.dimTypes);
2019-07-18 22:08:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2817738652229309, "perplexity": 6814.664519892846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00146.warc.gz"}
https://ncatlab.org/nlab/show/symplectic+realization
# nLab symplectic realization

## Definition

For $(X, \pi)$ a Poisson manifold, a symplectic realization of it is a symplectic manifold $(Y, \omega)$ together with a Poisson map $(Y, \omega) \to (X, \pi)$ that is a surjective submersion.

## Properties

### Solution in terms of symplectic groupoids

For any symplectic groupoid $\Sigma$ with base a Poisson manifold $P$, the target map is a symplectic realization of $P$ and the source map is a symplectic realization of the opposite structure. Thus $\Sigma$ with its symplectic structure may be regarded as a desingularization of $P$ with its Poisson structure. Since the symplectic groupoid is the Lie integration of the Poisson Lie algebroid of the Poisson manifold, symplectic realization has been reduced to a problem in Lie theory.
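## Example

A standard example, stated here only to make the definition concrete: for the zero Poisson structure on a manifold $X$, the cotangent bundle $T^* X$ with its canonical symplectic form, together with the bundle projection $T^* X \to X$, is a symplectic realization. Pullbacks of functions from $X$ depend only on the base coordinates, so their canonical Poisson brackets vanish, matching the zero bracket on $X$, and the projection is a surjective submersion.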
2022-10-03 18:18:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8914557695388794, "perplexity": 363.25768391121443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00686.warc.gz"}
https://wiki.math.ucr.edu/index.php?title=Math_22_The_Three-Dimensional_Coordinate_System&oldid=2563
# Math 22 The Three-Dimensional Coordinate System

## The Distance and Midpoint Formulas

The distance $d$ between the points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ is

$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$

Exercises 1. Find the distance between the two points.

1) $(4,2,3)$ and $(1,2,0)$

Solution: $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2} = \sqrt{(1-4)^2 + (2-2)^2 + (0-3)^2} = \sqrt{18}$

2) $(1,2,4)$ and $(2,5,1)$

Solution: $d = \sqrt{(2-1)^2 + (5-2)^2 + (1-4)^2} = \sqrt{19}$

## Midpoint Formula in Space

The midpoint of the line segment joining the points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ is

$\text{Midpoint} = \left(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2}, \frac{z_1 + z_2}{2}\right)$

Exercises 2. Find the midpoint of the two points below.

1) $(4,2,3)$ and $(1,2,0)$

Solution: $\text{Midpoint} = \left(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2}, \frac{z_1 + z_2}{2}\right) = \left(\frac{4+1}{2}, \frac{2+2}{2}, \frac{3+0}{2}\right) = \left(\frac{5}{2}, 2, \frac{3}{2}\right)$

2) $(1,2,4)$ and $(2,5,1)$

Solution: $\text{Midpoint} = \left(\frac{1+2}{2}, \frac{2+5}{2}, \frac{4+1}{2}\right) = \left(\frac{3}{2}, \frac{7}{2}, \frac{5}{2}\right)$
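Both formulas translate directly into code. A small Python sketch that mirrors the exercises above (the helper names distance and midpoint are introduced here for illustration; they are not part of the course page):

```python
import math

def distance(p, q):
    # three-dimensional distance formula: square, sum, square root
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

def midpoint(p, q):
    # coordinate-wise average of the two endpoints
    return tuple((pi + qi) / 2 for pi, qi in zip(p, q))

print(distance((4, 2, 3), (1, 2, 0)))  # sqrt(18) ≈ 4.2426
print(midpoint((4, 2, 3), (1, 2, 0)))  # (2.5, 2.0, 1.5)
```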
2022-05-20 00:05:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7579200267791748, "perplexity": 652.3607210325868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00257.warc.gz"}
http://www.resenarsforum.se/vanilla-bean-nxezil/viewtopic.php?7ac649=d%C3%A9terminant-matrice-4x4
# Determinant of a 4x4 matrix (déterminant d'une matrice 4x4)

The determinant is a single number computed from a square matrix; it is not defined for non-square matrices. After the geometric interpretation of 2x2 and 3x3 determinants (as signed area and volume; one route generalizes the cross product to four dimensions), the same idea extends to 4x4 matrices. For a 2x2 matrix the formula is simply

$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc,$

for example $3 \times 6 - 8 \times 4 = 18 - 32 = -14$. The Leibniz formula gives an explicit expression for any order, but expanding it directly for a 4x4 matrix already takes 72 multiplications, so in practice one uses cofactor (Laplace) expansion or row reduction. (The rule of Sarrus, by contrast, applies only to 3x3 determinants.)

Cofactor expansion. A 4 by 4 determinant can be expanded in terms of 3 by 3 determinants called minors: delete one row and one column, evaluate the remaining 3x3 determinant, attach the checkerboard sign to obtain the cofactor, and sum entry times cofactor along the chosen row or column. Expanding blindly along the first row is time consuming, so pick the row or column with the largest number of zeros: zero entries contribute nothing, and their cofactors never need to be computed. This expansion also leads naturally to a recursive algorithm (a sketch follows below). For example, in

$|A| = \left|\begin{array}{cccc} 4 & 3 & 2 & 2 \\ 0 & 1 & -3 & 3 \\ 0 & -1 & 3 & 3 \\ 0 & 3 & 1 & 1 \end{array}\right|$

the first column has only one nonzero entry, so the determinant is that entry times its cofactor, and only one 3x3 minor has to be evaluated. The signs follow the usual alternating pattern; for instance, a cofactor in position (1,3) keeps the sign of its minor, e.g. $C_{13} = (+)(5) = 5$ when the corresponding minor equals 5. Expanding along a different row or column, or around a different entry, takes more effort but produces the same value. Another 4x4 determinant of the same kind is $\det \begin{pmatrix}1 & 3 & 5 & 9 \\ 1 & 3 & 1 & 7 \\ 4 & 3 & 9 & 7 \\ 5 & 2 & 0 & 9\end{pmatrix}$.

Row reduction. Alternatively, use the rule "add to one row a multiple of another row", which leaves the determinant unchanged, to reduce the matrix to row echelon (triangular) form with zeros below the diagonal; the determinant is then the product of the diagonal entries. The same shortcut finishes a cofactor expansion quickly when the remaining 3x3 matrix happens to be triangular.

Useful properties.
• Only square matrices have a determinant; a 4x4 matrix is square, with four rows and four columns, and the identity matrix of any order (1s on the main diagonal, 0s elsewhere) has determinant 1.
• If a complete row or column is zero, the determinant is 0.
• If two rows or two columns are equal or proportional to each other, the determinant is 0.
• A square matrix is invertible exactly when its determinant is nonzero; the inverse of a 4x4 matrix can then be written using the adjugate matrix, among other methods.
• Determinants are what make Cramer's rule work for solving systems of linear equations for their unknowns.
• For a change-of-basis matrix $P$ between two orthonormal bases, $\det(P)^2 = 1$, so $\det(P) = \pm 1$.

Numerical caveat. In floating-point arithmetic the determinant can be arbitrarily close to zero without conveying information about singularity: round-off errors accumulate in the LU decomposition that numerical libraries (for example MATLAB's det) use to compute it. A matrix whose computed determinant is extremely small is not necessarily ill conditioned, yet a tolerance test of the form abs(det(A)) < tol is likely to flag such a matrix as singular.
2021-03-01 22:51:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7674826979637146, "perplexity": 1907.1320636534845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363072.47/warc/CC-MAIN-20210301212939-20210302002939-00216.warc.gz"}
https://www.esaral.com/q/then-the-function-f-84832
# Then the function f :

Question: Let $f:(-1, \infty) \rightarrow \mathbf{R}$ be defined by $f(0)=1$ and $f(x)=\frac{1}{x} \log _{e}(1+x), x \neq 0 .$ Then the function $f$ :

1. (1) decreases in $(-1,0)$ and increases in $(0, \infty)$.
2. (2) increases in $(-1, \infty)$.
3. (3) increases in $(-1,0)$ and decreases in $(0, \infty)$.
4. (4) decreases in $(-1, \infty)$.

Correct Option: (4)

Solution:

$f^{\prime}(x)=\frac{\frac{x}{1+x}-\ln (1+x)}{x^{2}} =\frac{x-(1+x) \ln (1+x)}{(1+x) x^{2}}<0, \quad \forall x \in(-1, \infty)-\{0\}$

[For $x \in(-1,0)$, $f^{\prime}(x)<0$, and for $x \in(0, \infty)$, $f^{\prime}(x)<0$.]

So, $f(x)$ decreases in $(-1, \infty)$.
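As a quick sanity check on the sign of $f^{\prime}$, evaluating $f$ at a few points shows the values falling as $x$ grows. A minimal Python sketch (added here for illustration, not part of the printed solution):

```python
import math

def f(x):
    # f(0) = 1 by definition; elsewhere f(x) = ln(1 + x) / x
    return 1.0 if x == 0 else math.log(1 + x) / x

for x in (-0.9, -0.5, 0, 1, 5, 50):
    print(x, round(f(x), 4))
# values decrease monotonically: 2.5584, 1.3863, 1.0, 0.6931, 0.3584, 0.0786
```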
2023-03-24 06:43:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995800793170929, "perplexity": 1182.8208696665956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00034.warc.gz"}
https://forum.kerbalspaceprogram.com/index.php?/topic/188282-visual-design-disasters-i-hope-ksp-2-will-steer-away-from/page/4/#comment-3681163
Visual design disasters I hope KSP 2 will steer away from Recommended Posts 22 minutes ago, ModZero said: I said so. Share on other sites 2 hours ago, Delay said: And I still cannot imagine a Sun without lens flares and glare! Both of which - in "absolute" terms - are camera malfunctions, but - again - a perfect white circle doesn't look right! I haven't personally been to LEO and looked at the Sun, and I assume you haven't either. I know what it looks like from the ground, and I know that photos don't look the same as the real thing without certain techniques applied. I've certainly never seen lens flare with my naked eyes - that is very much a camera artifact, caused when light refracts and reflects multiple times inside a compound lens. If you are literally seeing lens flare when looking at bright lights, you really should see an opthalmologist. That's not an insult or a joke at your expense, that's me being concerned about your eyes. Share on other sites I agree lens flares are a product of camera imperfection - Pardon me, I meant glare the whole time, how a light source can seem bigger due to its brightness, which is certainly normal. Keep in mind the sun is no bigger than the Moon, but appears a lot larger most of the time. And I'm not seeing these things as an insult or a joke - I think it's good you're concerned about someone else's health. If anything I have problems understanding your position, just as you have problems understanding mine. I'm used to how things look through cameras and I'd like that look to be replicated in digital media. Share on other sites 3 hours ago, Delay said: If anything I have problems understanding your position, just as you have problems understanding mine. I'm used to how things look through cameras and I'd like that look to be replicated in digital media. I think there are several different positions happening here: 1. Games should look as close to real life as possible, and avoid artifacts from things like cameras, windows, and dirty glasses. 2. Games should look as much like movies and/or photographs as possible. 2a. Artifacts should be minimized, as they would tend to be in NASA photos and many other professional contexts. 2b. Artifacts look cool, use more of them! Ok, maybe not that many... 3. Games are games (apologies to @DStaal), and if it works to improve the gameplay experience it should stay. Realism is for reality. Edited by sturmhauke Share on other sites Position 3: Games are games - realism is only useful insomuch as they mimic real life.  Looking 'realistic' by one definition or another doesn't necessarily help a game, and should be considered in terms of what the game is trying to accomplish. --- Yes, KSP is a space program manager simulation.  About little green men, who's heads are as big as the rest of their bodies.  Full realism isn't the goal of the flavor of the game - it's semi-realistic from the start. Share on other sites 13 minutes ago, sturmhauke said: I think there are several different positions happening here: Depends on the situation. For instance, a point and click game can be drawn. A jump and run can be a 2d platformer or 3d, either drawn, or using pixelart or using 3d models, etc. KSP is trying to be a simulation of space travel and deserves realistic graphics instead of cartoon ones. Realistic game, realistic graphics. What artifacts would I include? Also depends. 
Glare, motion blur and and bloom are things picked up by both cameras and eyes (though for different reasons, at least for motion blur), so they'd be added independently from what object we see the virtual world from. From here, things are more specific. So... what is most fitting for KSP? A camera or an eye? I'd consider the external camera to be a camera, and the IVA view to be directly from the eyes of the Kerbal you're looking from. He's not holding a camcorder in some invisible third hand, it's his perspective. External: Lens flares are camera artifacts, so is chromatic aberration, the latter would be added very, very subtly. Something that could only be seen when you really look for it. Depth of field I already talked about - no. Not because it's unrealistic, but because it requires the engine to know exactly what the player wants to look at. Too much code for a simple effect. IVA: I have yet to come across chromatic aberration with my bare eyes. My glasses are causing it here and there, but it's not my eye's fault. Not added to any internal views. A shallow depth of field and astronauts would be a bad combination, no thank you. Neither: Dust and film grain. Dust? We're in space. Film grain? This is not a VHS. Share on other sites 1 hour ago, Delay said: Film grain? This is not a VHS. VHS doesn't have film grain, it's a magnetic medium. Share on other sites 2 hours ago, DStaal said: Position 3: Games are games - realism is only useful insomuch as they mimic real life.  Looking 'realistic' by one definition or another doesn't necessarily help a game, and should be considered in terms of what the game is trying to accomplish. --- Yes, KSP is a space program manager simulation.  About little green men, who's heads are as big as the rest of their bodies.  Full realism isn't the goal of the flavor of the game - it's semi-realistic from the start. Look over there, it's the kraken! steals idea Share on other sites I think this sums it up pretty well. THIS is what I want my game to look like. On 10/2/2019 at 6:07 PM, lajoswinkler said: If you think this is how real world looks, I have to suggest you visit your physician and get an appointment with an ophtalmologist. This is not normal vision. I don't care if it looks "nicer". It's a sign of pathological changes in the eye. It might be cataract or glaukoma. Oh, no, that's not what the real world looks like. If we take away bloom and the like, we're left with this. Or a kerbin that looks like this, you'll have to add in the ground in your imagination. The atmosphere's nonexistent! It's just gray! Just a tad bit of bloom and... This seems a little bit better. Remember, that's not supposed to be realistic, just to slightly improve it. I'll edit kerbin soon to show my point. Bloom is actually visible by the naked eye, just on bright objects with a very dark background. To see this, go out and look at a streetlight (at night) . Do you see ...or just a streetlight with NO glow/halo? Heck, the SUN has bloom. That's how Solar eclipses work. The Corona's basically bloom... ish. What we're calling "bloom" isn't actually bloom, per se. It's called bloom but differs in cameras and human eyes. It's due to the "dynamic range" of our eyes, and the short-range light source being outside it. People without glasses see it, people with glasses see it, not just 80-something people. Excessive bloom is annoying. Excessive anything is annoying, save for savings, performance and value. 
Edited by Concodroid Share on other sites Overdoing VFX is like putting too much salt on food, it ruins it. I don't think anyone's disputing that. VFX do however serve an important purpose. A computer screen is not like the real world. It has many orders of magnitude less dynamic range, for one thing. That means that if you want to make something look realistic, you have to trick the human visual system into "seeing" things that aren't actually there. Our visual systems have already been trained to "see" certain representations as more realistic than they actually are, through media like film and TV. This training can be used in computer games as well. So, for example, bloom tricks the visual system into perceiving an object on the screen as brighter than it really is. Lens flare does the same, because we've been trained to see it on TV, in photos, and in the movies. If done well, you won't even notice they're there unless you're specifically looking for them. You'll just perceive the scene as brighter, deeper, and more "real" than it would be without them. Share on other sites Hey, folks... Just a friendly reminder to not make personal attacks for differences of opinion. While we do want an open forum for the free-flow discussion of observations, ideas, and opinions, personal attacks are forbidden. Thanks for your understanding and cooperation. The moderation team. Nobody is making any personal attacks on this thread. 8 hours ago, Concodroid said: I think this sums it up pretty well. THIS is what I want my game to look like. Oh, no, that's not what the real world looks like. If we take away bloom and the like, we're left with this. Or a kerbin that looks like this, you'll have to add in the ground in your imagination. The atmosphere's nonexistent! It's just gray! Just a tad bit of bloom and... This seems a little bit better. Remember, that's not supposed to be realistic, just to slightly improve it. I'll edit kerbin soon to show my point. Bloom is actually visible by the naked eye, just on bright objects with a very dark background. To see this, go out and look at a streetlight (at night) . Do you see ...or just a streetlight with NO glow/halo? Heck, the SUN has bloom. That's how Solar eclipses work. The Corona's basically bloom... ish. What we're calling "bloom" isn't actually bloom, per se. It's called bloom but differs in cameras and human eyes. It's due to the "dynamic range" of our eyes, and the short-range light source being outside it. People without glasses see it, people with glasses see it, not just 80-something people. Excessive bloom is annoying. Excessive anything is annoying, save for savings, performance and value. Atmosphere does not look nice because of bloom. It looks nice because it blends with the surrounding, pitch black void. Same as Scatterer does rather well when you're in orbit, and stock KSP keeps ignoring for years. As for the strong light sources, yes, healthy human eye does see bloom in occasions where source of light is very small compared to the rest of the darkness, and when the source is extremely bright. It does not appear with distant stars, the full Moon. It will appear with things like candles up close, last stages of total eclipse before totality (when the sliver is still shining), street lamps up close in the night. It would appear with high albedo objects reflecting sunlight this close to the Sun where we live (like if Enceladus was moved here for a moment, it would certainly be difficult to look at in the night sky). 
There is also a variation of bloom that arises from UV fluorescence. Less in cameras because glass can be chemically formulated to minimize the effect, plus there are UV blocking filters that are put in front of cameras. However, human eyes are very susceptible to this because our vitreous body is very fluorescent in soft UV. If you ever looked at a blacklight in darkness you noticed how rest of the scene gets an annoying, dim, cyan tint that is gone as soon as you cover the light source with something. All these occasions are pretty specific and don't happen often. Therefore I'd be totally ok with it if the developers would add them accordingly, but if they say: "That takes too much time and resources, let's just slap significant bloom as a visual constant throughout the game", then no, I would be highly against it. Why should all other scenes where bloom would not appear, which exist in far greater number, suffer because of few occasions where smaller degree of bloom is justified? I don't want the game to look cheesy like that. BTW, no, Sun's corona is not bloom. Bloom is an artifact of the image detector system. Corona and atmospheric halos exist by themselves. 4 hours ago, Brikoleur said: Overdoing VFX is like putting too much salt on food, it ruins it. I don't think anyone's disputing that. VFX do however serve an important purpose. A computer screen is not like the real world. It has many orders of magnitude less dynamic range, for one thing. That means that if you want to make something look realistic, you have to trick the human visual system into "seeing" things that aren't actually there. Our visual systems have already been trained to "see" certain representations as more realistic than they actually are, through media like film and TV. This training can be used in computer games as well. So, for example, bloom tricks the visual system into perceiving an object on the screen as brighter than it really is. Lens flare does the same, because we've been trained to see it on TV, in photos, and in the movies. If done well, you won't even notice they're there unless you're specifically looking for them. You'll just perceive the scene as brighter, deeper, and more "real" than it would be without them. This training is done excessively and for the most part it's not realistic, therefore certain things it uses should be ditched. As I've said to Concodroid, very careful, realistic (lens flares in human vision are a symptom of pathological changes therefore pls no), measured and tasteful addition of certain effects is more than welcome, but if the only options are: a) to make the game free of them b) game drenched in excessive effects that someone could just turn into "Ophthalmic pathology mod" I choose a). Sorry but not sorry - I don't want the feeling of getting blind, wiping my glasses or screen while I play my favorite game/simulator. Share on other sites 2 minutes ago, lajoswinkler said: As I've said to Concodroid, very careful, realistic (lens flares in human vision are a symptom of pathological changes therefore pls no), measured and tasteful addition of certain effects is more than welcome, but if the only options are: a) to make the game free of them b) game drenched in excessive effects that someone could just turn into "Ophthalmic pathology mod" I choose a). Sorry but not sorry - I don't want the feeling of getting blind, wiping my glasses or screen while I play my favorite game/simulator. Has somebody ITT been arguing for (b)? 
There may be some differences of opinion about how much is too much, but I don't think anyone wants to turn it into VFX soup. Nor do I think that's what's going to happen -- the pre-rendered preview trailer gives an idea of what the devs would want the game to look like if they had an unlimited budget and unlimited processing power, and IMO it's not overdoing the VFX at all. I'm hoping the final game will look somewhere between the pre-alpha footage and that. Share on other sites 20 hours ago, Delay said: Then something truly is wrong with my eyes - the sunlight is so intense that I merely see white. Well, yellow actually. The sun is white, thats why white paper appears white outside unless its dawn/dusk Share on other sites 1 hour ago, lajoswinkler said: lens flares in human vision are a symptom of pathological changes therefore pls no It would be okay to use lens flares for external views, at least in my opinion. They have no business in IVA's, however. Also I doubt anyone said that they're seeing lens flares, and I corrected myself and I admitted I meant something else, so why mention it? Share on other sites 8 hours ago, lajoswinkler said: This training is done excessively and for the most part it's not realistic, therefore certain things it uses should be ditched. As I've said to Concodroid, very careful, realistic (lens flares in human vision are a symptom of pathological changes therefore pls no), measured and tasteful addition of certain effects is more than welcome, but if the only options are: a) to make the game free of them b) game drenched in excessive effects that someone could just turn into "Ophthalmic pathology mod" I choose a). Sorry but not sorry - I don't want the feeling of getting blind, wiping my glasses or screen while I play my favorite game/simulator. Just make em options, right? I mean, there's a very, very easy way to make bloom appear only in very bright objects, just change the threshold. Make it really high, so basically planets / the sun on a near-black (or black) background shows bloom, not every single object. I think that satisfies both of us. Edited by Concodroid Share on other sites On 10/4/2019 at 9:32 PM, Concodroid said: Just make em options, right? I mean, there's a very, very easy way to make bloom appear only in very bright objects, just change the threshold. Make it really high, so basically planets / the sun on a near-black (or black) background shows bloom, not every single object. I think that satisfies both of us. I'd be ok with such treshold, of course. Bloom does appear when conditions are right - high apparent luminosity and high contrast. Combined with necessary disappearance of skybox it would be a good visual tool to indicate brightness. BTW I've added vignetting to the list. Share on other sites 1 hour ago, lajoswinkler said: I'd be ok with such treshold, of course. Bloom does appear when conditions are right - high apparent luminosity and high contrast. Combined with necessary disappearance of skybox it would be a good visual tool to indicate brightness. BTW I've added vignetting to the list. Vignetting is useless, unless you're trying to simulate old-film style, which KSP2 probably won't. Maybe add an asterisk and put threshold on the original list for everything that needs it Edited by Concodroid Share on other sites 2 hours ago, lajoswinkler said: BTW I've added vignetting to the list. Yeah, I don't get that effect either. You may also add god rays to the list - very overused, far beyond the point of realism. 
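For what it's worth, the threshold idea discussed above is essentially a "bright-pass" bloom: only pixels whose luminance exceeds a cutoff feed the glow layer, so with a high cutoff only the Sun or a sunlit limb against black space picks up a halo. Here is a minimal sketch in NumPy/SciPy; the threshold, blur radius, and strength values are illustrative placeholders, not anything taken from KSP or Unity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bright_pass_bloom(rgb, threshold=0.9, sigma=8.0, strength=0.6):
    """Toy bright-pass bloom: keep only pixels brighter than `threshold`,
    blur them, and add the result back onto the image.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    threshold, sigma, strength: illustrative placeholder values.
    """
    # Relative luminance (Rec. 709 weights).
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

    # Bright pass: zero out everything below the threshold.
    mask = (luma >= threshold)[..., None]
    bright = np.where(mask, rgb, 0.0)

    # Blur the bright layer to produce the glow, then composite it back in.
    glow = np.stack([gaussian_filter(bright[..., c], sigma) for c in range(3)],
                    axis=-1)
    return np.clip(rgb + strength * glow, 0.0, 1.0)

# With a high threshold, only very bright sources (the Sun, a lit planet limb
# against black space) get a halo; most of the scene is left untouched.
```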
Share on other sites On 9/19/2019 at 7:17 AM, Brikoleur said: Since physics isn’t easy to parallelise it can only derive limited benefit from GPU acceleration anyway. My take? I will be happy if the game looks good. Overusing VFX is worse than not using them at all, but if used tastefully they can greatly enhance it. Also in my opinion it is silly to point out any specific VFX to rage at. They’re like spices in cooking, used wrong they will ruin it, used right they will let it shine, but none of them are inherently good or bad. Actually well written physics *is* easy to "parallelise": since physics is for major parts pure functions and stateless. Any pure function means that it doesn't matter in which order it's executed, thus can be parallelised. Take for example the equations of motion for an aircraft, as for example used by FAR. They are often linearized solutions of the differential equations, for all translation and rotations. While at first glance the equations look like they mix with each other. In reality by choosing the right constants you can make 6 separate equations, and put those in a matrix form. GPU's are amazingly good at solving matrices: it's what they are doing anyways. What is hard is however to optimize this, each cpu has it's own optimization path, and does very bad with other type of matrices. Many gpus are also *incorrect* providing an "almost" correct result but just not quite. Thus you might see different results in your physics engine between computers, or whether or not the current equation is send to the GPU. Also what are you talking about with lens flare: just about any human aged about 30 has a form of lens flare in some regions of their eye. (Due to cataracts etc). Edited by paul23 Share on other sites 35 minutes ago, paul23 said: Actually well written physics *is* easy to "parallelise": since physics is for major parts pure functions and stateless. Any pure function means that it doesn't matter in which order it's executed, thus can be parallelised. *If* every part independent and has minimal interactions with other parts.  The problem with physics for KSP is that isn't true: Every part of every ship is an separate dependent part, which has continual interactions with all the other parts and on the ship.  So while each function is a pure function, the inputs of the functions will depend on the outputs of other functions, and the overall order matters.  Simple two part example: Engine and fuel tank.  If you compute the fuel tank first it has no forces on it, so will stay in place.  If you compute the engine first it pushes on the fuel tank, and the fuel tank moves.  Then you have to compute the center of mass for the entire ship for orbital mechanics... You can do some parallelization, but at the end of the frame everything has to be synced back up again on a per-ship basis at minimum.  Likely a ground-up redesign over KSP1 can help that quite a bit, but I still expect physics threads to be a bottleneck. Share on other sites Again, just like in normal aerospace engineering those things can be linearized. In aircraft also the fuselage, wing, tail (fin and horizontal control) and even things like landing gear all "influence each other". But by linearizing correctly you can still create a linear differential equation that can be morphed into a simple matrix. This is what any field of engineering is doing with simulations. It's not hard at all, just not trivial. 
IE during a symmetric flight an (bidirectional symmetrical object, ie a standard aircraft) will follow the following equation: $\begin{bmatrix} C_{X_u}-2\mu_cDc & C_{X_\alpha} & C_{Z_0} & C_{X_q} \\ C_{Z_u} & C_{Z_\alpha} + \left( C_{Z_\dot{\alpha}} - 2\mu_c \right ) D_c & -C_{X_0} & C_{Z_q} + 2\mu_c \\ 0 & 0 & -D_c & 1 \\ C_{m_u} & C_{m_\alpha} + C_{m_\dot{\alpha}}D_c & 0 & C_{m_q} - 2 \mu_c K^2_yD_c \end{bmatrix} \begin{pmatrix} \hat{u} \\ \alpha \\ \theta \\ (q\bar{c})/V\end{pmatrix} = \begin{bmatrix} -C_{X_{\delta_e}} & -C_{X_{\delta_t}} \\ -C_{Z_{\delta_e}} & -C_{Z_{\delta_t}}\\ 0 & 0 \\ -C_{M_{\delta_e}} & -C_{M_{\delta_t}}\\ \end{bmatrix} \begin{pmatrix} \delta_e \\ \delta_t \end{pmatrix}$ With all constants, in the matrices or current flight sate in the vectors (these are derived from newton's laws, you could do the same for any motion of irregular objects, the matrix then just becomes much larger, hard to handle as human but computers have no trouble with that). The constants are just derived from the geometrical properties of the thing. So while those can typically not be linearized, they don't change that often. Only at decoupling/untimely detachement these need to be recalculated/simulated: a clever system could simulate these actually in advance "while building". Edited by paul23 Share on other sites On 10/12/2019 at 12:17 PM, paul23 said: Again, just like in normal aerospace engineering those things can be linearized. In aircraft also the fuselage, wing, tail (fin and horizontal control) and even things like landing gear all "influence each other". But by linearizing correctly you can still create a linear differential equation that can be morphed into a simple matrix. This is what any field of engineering is doing with simulations. It's not hard at all, just not trivial. IE during a symmetric flight an (bidirectional symmetrical object, ie a standard aircraft) will follow the following equation: $\begin{bmatrix} C_{X_u}-2\mu_cDc & C_{X_\alpha} & C_{Z_0} & C_{X_q} \\ C_{Z_u} & C_{Z_\alpha} + \left( C_{Z_\dot{\alpha}} - 2\mu_c \right ) D_c & -C_{X_0} & C_{Z_q} + 2\mu_c \\ 0 & 0 & -D_c & 1 \\ C_{m_u} & C_{m_\alpha} + C_{m_\dot{\alpha}}D_c & 0 & C_{m_q} - 2 \mu_c K^2_yD_c \end{bmatrix} \begin{pmatrix} \hat{u} \\ \alpha \\ \theta \\ (q\bar{c})/V\end{pmatrix} = \begin{bmatrix} -C_{X_{\delta_e}} & -C_{X_{\delta_t}} \\ -C_{Z_{\delta_e}} & -C_{Z_{\delta_t}}\\ 0 & 0 \\ -C_{M_{\delta_e}} & -C_{M_{\delta_t}}\\ \end{bmatrix} \begin{pmatrix} \delta_e \\ \delta_t \end{pmatrix}$ With all constants, in the matrices or current flight sate in the vectors (these are derived from newton's laws, you could do the same for any motion of irregular objects, the matrix then just becomes much larger, hard to handle as human but computers have no trouble with that). The constants are just derived from the geometrical properties of the thing. So while those can typically not be linearized, they don't change that often. Only at decoupling/untimely detachement these need to be recalculated/simulated: a clever system could simulate these actually in advance "while building". Got any examples of a actual implmentation in a Simulator/Game? Share on other sites 6 hours ago, Incarnation of Chaos said: Got any examples of a actual implmentation in a Simulator/Game? FAR. Share on other sites 15 minutes ago, paul23 said: FAR. FAR simulates aerodynamics for the entire craft. KSP has to simulate interactions between parts, with flex and stresses that may lead to failures. You’re not describing the same problem. 
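To make the "once it's linearised it's just matrix algebra" point concrete, here is a toy sketch: if each craft (or linearised subsystem) reduces to x_dot = A x + B u, advancing many of them at once is a pair of batched matrix products, which is exactly the kind of workload GPUs and SIMD CPUs handle well. The coefficients below are random placeholders, not real stability derivatives like the ones in the equation above, and this is not how FAR or KSP actually step their physics.

```python
import numpy as np

# Each craft i has a linearised model  x_dot = A[i] @ x[i] + B[i] @ u[i].
# The A and B entries would come from geometry-derived stability derivatives;
# here they are random placeholders just to show the batched update.
rng = np.random.default_rng(0)
n_craft, n_state, n_ctrl = 1000, 4, 2

A = rng.normal(scale=0.1, size=(n_craft, n_state, n_state))
B = rng.normal(scale=0.1, size=(n_craft, n_state, n_ctrl))
x = rng.normal(size=(n_craft, n_state))
u = rng.normal(size=(n_craft, n_ctrl))

dt = 0.02  # integration step in seconds (placeholder)

# One explicit-Euler step for every craft at once: two batched matrix products.
# The per-craft updates are independent, so order does not matter and the work
# parallelises trivially.
x_dot = np.einsum('nij,nj->ni', A, x) + np.einsum('nij,nj->ni', B, u)
x = x + dt * x_dot
```

Whether this transfers to KSP depends on how strongly the parts within a single vessel couple, which is the objection raised in the replies above.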
https://indico.math.cnrs.fr/event/621/other-view?fr=no&detailLevel=contribution&view=ihes-lectures&showSession=all&showDate=all
Séminaire de Mathématique-Biologie

# Topological Models of DNA-Protein Interactions

## by Prof. Dorothy BUCK (Imperial College London)

Wednesday, 9 January 2013 (Europe/Paris), at IHES (Amphithéâtre Léon Motchane), Le Bois-Marie, 35 route de Chartres, 91440 Bures-sur-Yvette

Description: The central axis of the famous DNA double helix is often constrained or even circular. The topology of this axis can influence which proteins interact with the underlying DNA. Consequently, all cells contain proteins (type II topoisomerases) whose primary function is to change the DNA axis topology -- for example, converting a torus link into an unknot. Additionally, several protein families (most importantly, site-specific recombinases) change the axis topology as a by-product of their interaction with DNA. This talk will describe some typical DNA conformations and the families of proteins that change these conformations. I'll present a few examples illustrating how 3-manifold topology (including Dehn surgery and Heegaard Floer homology) has been useful in understanding certain DNA-protein interactions, and discuss the most common techniques used to attack these problems.

Organized by M. Gromov. Contact email: cecile@ihes.fr
https://labs.tib.eu/arxiv/?author=Andrew%20Bechter
• ### EarthFinder: A Precise Radial Velocity Probe Mission Concept For the Detection of Earth-Mass Planets Orbiting Sun-like Stars(1803.03960) EarthFinder is a Probe Mission concept selected for study by NASA for input to the 2020 astronomy decadal survey. This study is currently active, and a final white paper report is due to NASA at the end of calendar year 2018. We are tasked with evaluating the scientific rationale for obtaining precise radial velocity (PRV) measurements in space, which is a two-part inquiry: What can be gained from going to space? What can't be done from the ground? These two questions flow down to two specific tasks for our study: identify the velocity limit, if any, introduced by micro- and macro-telluric absorption in the Earth's atmosphere; and evaluate the unique advantages that a space-based platform provides to enable the identification and mitigation of stellar activity for multi-planet signal recovery.
• ### On-sky single-mode fiber coupling measurements at the Large Binocular Telescope(1609.04410) The demonstration of efficient single-mode fiber (SMF) coupling is a key requirement for the development of a compact, ultra-precise radial velocity (RV) spectrograph. iLocater is a next-generation instrument for the Large Binocular Telescope (LBT) that uses adaptive optics (AO) to inject starlight into an SMF. In preparation for commissioning iLocater, a prototype SMF injection system was installed and tested at the LBT in the Y-band (0.970-1.065 $\mu$m). This system was designed to verify the capability of the LBT AO system as well as to characterize on-sky SMF coupling efficiencies. SMF coupling was measured on stars with variable airmasses, apparent magnitudes, and seeing conditions for six half-nights using the Large Binocular Telescope Interferometer. We present the overall optical and mechanical performance of the SMF injection system, including details of the installation and alignment procedure. A particular emphasis is placed on analyzing the instrument's performance as a function of telescope elevation to inform the final design of the fiber injection system for iLocater.
• ### iLocater: A Diffraction-limited Doppler Spectrometer for the Large Binocular Telescope(1609.04412) We are developing a stable and precise spectrograph for the Large Binocular Telescope (LBT) named "iLocater." The instrument comprises three principal components: a cross-dispersed echelle spectrograph that operates in the YJ-bands (0.97-1.30 microns), a fiber-injection acquisition camera system, and a wavelength calibration unit. iLocater will deliver high-spectral-resolution (R~150,000-240,000) measurements that permit novel studies of stellar and substellar objects in the solar neighborhood, including extrasolar planets. Unlike previous planet-finding instruments, which are seeing-limited, iLocater operates at the diffraction limit and uses single-mode fibers to eliminate the effects of modal noise entirely. By receiving starlight from two 8.4-m diameter telescopes that each use "extreme" adaptive optics (AO), iLocater shows promise to overcome the limitations that prevent existing instruments from generating sub-meter-per-second radial velocity (RV) precision. Although optimized for the characterization of low-mass planets using the Doppler technique, iLocater will also advance areas of research that involve crowded fields, line-blanketing, and weak absorption lines.
• ### Design of the iLocater Acquisition Camera Demonstration System(1509.05103) Sept.
17, 2015 astro-ph.IM Existing planet-finding spectrometers are limited by systematic errors that result from their seeing-limited design. Of particular concern is the use of multi-mode fibers (MMFs), which introduce modal noise and accept significant amounts of background radiation from the sky. We present the design of a single-mode fiber-based acquisition camera for a diffraction-limited spectrometer named "iLocater." By using the "extreme" adaptive optics (AO) system of the Large Binocular Telescope (LBT), iLocater will overcome the limitations that prevent Doppler instruments from reaching their full potential, allowing precise radial velocity (RV) measurements of terrestrial planets around nearby bright stars. The instrument presented in this paper, which we refer to as the acquisition camera "demonstration system," will measure on-sky single-mode fiber (SMF) coupling efficiency using one of the 8.4m primaries of the LBT in fall 2015.
http://html.rhhz.net/qxxb_en/html/20180609.htm
J. Meteor. Res.  2018, Vol. 32 Issue (6): 974-984 PDF http://dx.doi.org/10.1007/s13351-018-8053-2 The Chinese Meteorological Society 0 #### Article Information LIU, Yongzhu, Lin ZHANG, and Zhihua LIAN, 2018. Conjugate Gradient Algorithm in the Four-Dimensional Variational Data Assimilation System in GRAPES. 2018. J. Meteor. Res., 32(6): 974-984 http://dx.doi.org/10.1007/s13351-018-8053-2 ### Article History in final form August 20, 2018 Conjugate Gradient Algorithm in the Four-Dimensional Variational Data Assimilation System in GRAPES Yongzhu LIU, Lin ZHANG, Zhihua LIAN National Meteorological Center, Beijing 100081 ABSTRACT: Minimization algorithms are singular components in four-dimensional variational data assimilation (4DVar). In this paper, the convergence and application of the conjugate gradient algorithm (CGA), which is based on the Lanczos iterative algorithm and the Hessian matrix derived from tangent linear and adjoint models using a non-hydrostatic framework, are investigated in the 4DVar minimization. First, the influence of the Gram-Schmidt orthogonalization of the Lanczos vector on the convergence of the Lanczos algorithm is studied. The results show that the Lanczos algorithm without orthogonalization fails to converge after the ninth iteration in the 4DVar minimization, while the orthogonalized Lanczos algorithm converges stably. Second, the convergence and computational efficiency of the CGA and quasi-Newton method in batch cycling assimilation experiments are compared on the 4DVar platform of the Global/Regional Assimilation and Prediction System (GRAPES). The CGA is 40% more computationally efficient than the quasi-Newton method, although the equivalent analysis results can be obtained by using either the CGA or the quasi-Newton method. Thus, the CGA based on Lanczos iterations is better for solving the optimization problems in the GRAPES 4DVar system. Key words: numerical weather prediction     Global/Regional Assimilation and Prediction System     four-dimensional variation     conjugate gradient algorithm     Lanczos algorithm 1 Introduction The application of high-resolution data assimilation constitutes a mainstream technology for improving numerical weather prediction (NWP) models. Variational data assimilation, which is used to solve analysis problems by minimizing a given cost function, is the best way to estimate model initial conditions by accurately combining observation and background fields (Rabier, 2005; Bannister, 2017). Three-dimensional variational data assimilation (3DVar) was widely used in NWP centers during the twentieth century (Courtier et al., 1998; Rabier et al., 1998; Lorenc et al., 2000). However, 3DVar erroneously assumes that observations acquired at different times are taken at the same time within the assimilation window. To overcome the shortcomings of 3DVar, four-dimensional variational data assimilation (4DVar) seeks an optimal balance between observations scattered through time and space over a finite 4D analysis volume with priori information; consequently, 4DVar is able to closely fit both observations and a priori initial estimates to generate the optimal initial conditions for NWP models (Thépaut et al, 1993; Courtier et al., 1994). 
For more than a decade, 4DVar has been the most successful data assimilation method for global NWP models; it has been used by many of the main global NWP centers, such as the ECMWF (Rabier et al., 2000), the French national meteorological service Météo-France (Janisková et al., 1999), the Met Office (Rawlins et al., 2007), and the meteorological service of Canada (Laroche et al., 2007). In recent years, some new 4DVar methods for global NWP models have emerged, including the ensemble-based 4DVar technique (Liu and Xiao, 2013) and hybrid 4DVar that adds flow-dependent ensemble covariance to traditional incremental 4DVar, for example, the ensemble data assimilations at ECMWF (Isaksen et al., 2010) and the hybrid-4DVar method employed at the Met Office (Clayton et al. 2013Lorenc et al., 2015). Variational data assimilation is a solution to large-scale unconstrained optimization problems. The cost function measuring the misfit between the background and the observations is first defined, and the optimal values are then determined by using various large-scale unconstrained minimization algorithms. Variational data assimilation techniques, especially 4DVar approaches based on the tangent linear model and adjoint model, are computationally expensive; thus, the development of a robust and efficient minimization algorithm is crucial (Fisher, 1998; Gürol et al., 2014). Two common minimization algorithms used in 4DVar systems are the conjugate gradient algorithm (CGA; Fisher, 1998) and quasi-Newton methods, including the limited-memory quasi-Newton method (i.e., the limited-memory Broyden–Fletcher–Goldfarb–Shanno, L-BFGS; Liu and Nocedal, 1989) and the truncated Newton method (Nash, 1984). Just as the L-BFGS method attempts to combine the modest storage and computational requirements of CGA methods with the convergence properties of standard quasi-Newton methods, truncated Newton methods attempt to retain the rapid (quadratic) convergence rate of classic Newton methods while making the storage and computational requirements feasible for large sparse matrices (Zou et al., 1993). Zou et al. (1993) compared the L-BFGS method with two truncated Newton methods on several test problems, including problems in meteorology and oceanography; their results confirmed that the L-BFGS seems to be the most efficient approach and is a particularly robust and user-friendly technique. Navon and Legler (1987) compared a number of different CGA and L-BFGS approaches for problems in meteorology and concluded that the L-BFGS is the most adequate for large-scale unconstrained minimization algorithms in meteorology. Furthermore, Fisher (1998) compared different CGA and truncated Newton methods in the ECMWF, and they concluded that the CGA was the most adequate for their 4DVar system. Therefore, the preconditioned CGA is used in the operational 4DVar system of ECMWF (Trémolet, 2007). The 3DVar operational assimilation system is employed in the Global/Regional Assimilation and Prediction System (GRAPES; Shen et al., 2017) with the L-BFGS minimization algorithm (Xue et al., 2008). The GRAPES dynamical core uses a non-hydrostatic framework with two-time-layer semi-implicit and semi-Lagrangian discretization and employs a latitude–longitude grid with the staggered Arakawa C grid for spatial discretization. 
A 4DVar system has been developed in the GRAPES to improve its operational prediction quality by using the non-hydrostatic tangent linear and adjoint models, which were developed for the GRAPES global data assimilation system (Liu et al., 2017). The L-BFGS method is currently applied to the GRAPES 4DVar system, but its low convergence rate leads to a low computational efficiency (Zhang and Liu, 2017). In this paper, to select a robust and efficient minimization algorithm for the GRAPES 4DVar system, the convergence of the CGA is thoroughly examined, and the CGA is compared with the L-BFGS method in the GRAPES 4DVar scheme. This paper is organized as follows. The data and methods are described in Section 2. Section 3 investigates the convergence of the CGA, and some results of the CGA in the GRAPES 4DVar system based on the numerical experiments are presented in Section 4. The conclusions and outlook are presented in Section 5. 2 Data and methods 2.1 Incremental 4DVar Incremental formulation is commonly used in variational data assimilation systems (Courtier et al., 1994; Trémolet, 2007). The incremental scheme offers two main advantages: 1) the tangent linear model and adjoint model can be used with a reduced resolution during minimization, largely reducing the computational cost of 4DVar; 2) the cost function becomes strictly quadratic, and thus, the convergence rate of the minimization can be greatly improved (Fisher, 1998). The incremental formulation scheme includes two components, namely, the inner loop and the outer loop. The outer loop utilizes the initial estimate of the atmospheric state as the initial condition of the forecast model and obtains the model trajectory within the assimilation time windows; this trajectory is then used to calculate the observational increments within the time windows. The purpose of the inner loop is to solve the minimization problem by an iterative algorithm for the variational data assimilation. 
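As a schematic illustration of the outer/inner loop structure just described, the incremental algorithm can be sketched as follows. The function names (nonlinear_model, obs_operator, minimize_quadratic) are hypothetical placeholders rather than the GRAPES interfaces, and details such as the resolution change between loops are omitted.

```python
# Schematic of the incremental 4DVar outer/inner loop described above.
# All operators are hypothetical placeholders for the real model and
# assimilation code; this is a structural sketch, not an implementation.

def incremental_4dvar(x_b, observations, n_outer=2):
    x_g = x_b                                    # current guess (starts from background)
    for _ in range(n_outer):
        # --- outer loop: nonlinear trajectory over the window and innovations ---
        trajectory = nonlinear_model(x_g)        # M_{0->i}(x_g) at the observation times
        innovations = [o_i - obs_operator(traj_i)
                       for o_i, traj_i in zip(observations, trajectory)]

        # --- inner loop: minimise the quadratic cost with the tangent linear and
        #     adjoint models, usually at reduced resolution ---
        dx = minimize_quadratic(innovations, background=x_b, first_guess=x_g)

        x_g = x_g + dx                           # update the guess
    return x_g                                   # analysis
```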
In the 4DVar incremental formulation, the first-order approximation of the cost function $J$ is written as the control variable $\delta {x}$ (Courtier et al., 1994; Fisher, 1998; Trémolet, 2007): $\begin{split} J \left( {\delta {x}} \right) & = \frac{1}{2}\delta {{x}^{\rm{T}}}{{{B}}^{{ - 1}}}\delta {x} + \frac{1}{2}{\sum\nolimits_{{i} = 0}^{n} {\left( {{{{H}}_{i} }{{{L}}_{0 \to {i}}}\delta {x} - {{d}_{i}}} \right)} ^{\rm{T}}}{{R}}_{i}^{{ - 1}}\left( {{{{H}}_{i} }{{{L}}_{0 \to {i}}}\delta {x} - {{d}_{i}}} \right) \\& =\frac{1}{2}\delta {{x}^{\rm{T}}}\left( {{{{B}}^{ - 1}} + \sum\nolimits_{{i} = 0}^{n} {{{L}}_{{i} \to 0}^{\rm{T}}{{H}}_{i}^{\rm{T}}{{R}}_{i}^{{ - 1}}{{{H}}_{i}}{{{L}}_{0 \to {i}}}} } \right)\delta {x} - \delta {{x}^{\rm{T}}}\sum\nolimits_{{i} = 0}^{n} {{{L}}_{{i} \to 0}^{\rm{T}}{{H}}_{i}^{\rm{T}}{{R}}_{i}^{{ - 1}}{{d}_{i}}} + \frac{1}{2}{d}_{i}^{\rm{T}}{{R}}_{i}^{{ - 1}}{{d}_{i}}\end{split},$ (1) where $\delta {x}$ is the departure from the background $\left({\delta {x} = {x} - {{x}_{\rm b}}} \right)$ , which will be the analysis increment; ${x}$ is the model state at time t0; ${{x}_{\rm b}}$ is the background state at time t0; B is the background error covariance matrix; ${{{R}}_{i}}$ is the observational error covariance matrix at time ti; ${{{H}}_i} = \partial {\mathcal{H}_i}/\partial {x}$ is the linearized observation operator of the nonlinear observation operator ${\mathcal{H}_i}$ at time ti; ${{{L}}_{0 \to {i}}} =$ ${{\partial {\mathcal{M}_{0 \to {i}}}}/{\partial {x}}}$ is the tangent linear model of the nonlinear model ${\mathcal{M}_{0 \to {i}}}$ integrated from time t0 to time ti; and ${{L}}_{{i} \to 0}^{\rm{T}}$ is the corresponding adjoint operator of ${{{L}}_{0 \to {i}}}$ that constitutes backward integration from time ti to time t0; ${{d}_{i}} = {{o}_{i}} - {{{H}}_i}\partial {\mathcal{M}_{0 \to {i}}}({{x}_{\rm b}})$ represents the observational increment at time ti; ${{o}_{i}}$ is the observation at time ti. The solution of the adjoint operator can be coded from the corresponding tangent linear model code, and it does not require deriving the adjoint equations analytically (Talagrand and Courtier, 1987). To solve the minimization problem of Eq. (1), the gradient of the control variable δx is calculated with the following equation (Courtier et al., 1994; Fisher, 1998): $\begin{split}\nabla {J}\left({\delta {x}} \right) = & \left({{{{B}}^{ - 1}} + \sum\nolimits_{{i} = 0}^{n} {{{L}}_{{i} \to 0}^{\rm{T}}{{H}}_{i}^{\rm{T}}{{R}}_{i}^{ - 1}{{{H}}_{i}}{{{L}}_{0 \to {i}}}} } \right)\delta {x} \\ - & \sum\nolimits_{{i} = 0}^{n} {{{L}}_{{i} \to 0}^{\rm{T}}{{H}}_{i}^{\rm{T}}{{R}}_{i}^{ - 1}{{d}_{i}}}, \end{split}$ (2) where the minimization of the cost function can be obtained by minimization algorithms such as the Newton method or CGA. The second partial derivative of ${J}\left({\delta {x}} \right)$ , the Hessian matrix, is denoted ${J''}$ and is calculated as follows (Courtier et al., 1994; Fisher, 1998): ${J''} = {{{B}}^{ - 1}} + \sum\nolimits_{{i} = 0}^{n} {{{L}}_{{i} \to 0}^{\rm{T}}{{H}}_{i}^{\rm{T}}{{R}}_{i}^{ - 1}{{{H}}_{i}}{{{L}}_{0 \to {i}}}} .$ (3) Thus, the solution of Eq. (2) is equal to the solution of the system of linear equations ${J''}\delta {x} = {b}$ , where ${b} = \displaystyle\sum\nolimits_{{i} = 0}^{n}$ ${{{L}}_{{i} \to 0}^{\rm{T}}{{H}}_{i}^{\rm{T}}{{R}}_{i}^{ - 1}{{{H}}_{i}}{{{L}}_{0 \to {i}}}}$ . Because B is usually a large sparse matrix and is nearly ill conditioned, it is difficult to solve the minimization problem in Eq. (1). 
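The algebra of Eqs. (1)-(3) can be checked on a tiny dense problem. The sketch below assumes a single observation time (so the sums collapse to one term) and uses small random matrices in place of B, R, H, and L; it only mirrors the structure of the cost, gradient, and Hessian, not the GRAPES operators, for which each Hessian-vector product requires tangent linear and adjoint integrations.

```python
import numpy as np

# Tiny dense analogue of Eqs. (1)-(3) with one observation time.
rng = np.random.default_rng(1)
n, m = 20, 8                        # state size, observation size

def random_spd(k):
    M = rng.normal(size=(k, k))
    return M @ M.T + k * np.eye(k)  # symmetric positive definite stand-in

B, R = random_spd(n), random_spd(m)
L = rng.normal(size=(n, n))         # tangent linear model (placeholder)
H = rng.normal(size=(m, n))         # linearised observation operator (placeholder)
d = rng.normal(size=m)              # innovation

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost(dx):                       # Eq. (1) with a single observation time
    misfit = H @ (L @ dx) - d
    return 0.5 * dx @ Binv @ dx + 0.5 * misfit @ Rinv @ misfit

def grad(dx):                       # Eq. (2): B^{-1} dx + L^T H^T R^{-1} (H L dx - d)
    return Binv @ dx + L.T @ H.T @ Rinv @ (H @ (L @ dx) - d)

hessian = Binv + L.T @ H.T @ Rinv @ H @ L           # Eq. (3), constant for a quadratic J
dx_opt = np.linalg.solve(hessian, L.T @ H.T @ Rinv @ d)   # solve J'' dx = b directly
assert np.allclose(grad(dx_opt), 0.0, atol=1e-6)     # gradient vanishes at the minimiser
```

In the real system a direct solve of J''δx = b is of course impossible, which is why iterative methods such as the CGA and L-BFGS are used.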
To achieve acceptable convergence rates, it is necessary to perform some transforms and preconditioning for B. In the GRAPES 4DVar system, the basic atmospheric state variables x are the two wind vectors (denoted by u and v), the relative humidity q, and the non-dimensional pressure as an independent variable (denoted by $\Pi$ ), which is the analysis variable that represents the quality field, instead of potential temperature (denoted by θ). Thus, the analysis increment is $\delta {x} = {\left({\delta {u}, \delta {v}, \delta \Pi } \right)^{\rm{T}}}$ , which can be transformed into a new vector $\delta {{x}_{u}} = {\left({\delta \psi, \delta {\chi _{u}}, \delta {\Pi _{u}}} \right)^{\rm{T}}}$ , $\delta {x} = \mathcal{P}\delta {{x}_{u}}$ , where $\mathcal{P}$ is a physical balance transformation operator (Xue et al., 2008). Therefore, the background error covariance matrix B can be split into three independent blocked matrices ${{B}} = \mathcal{P}{{{B}}_{u}}{\mathcal{P}^{ - 1}}$ , thereby reducing the scale of the matrix computation. This method of preconditioning through a change in the variable ${\left({{{{B}}_{u}}} \right)^{{1/2}}}$ is currently used in the GRAPES 4DVar system. Introducing a new control variable w in the cost function, the preconditioning transform of the variable $\delta {x}$ is expressed as $\delta {x} = \mathcal{P}\delta {{x}_{u}} =$ $\mathcal{P}{\Sigma _{u}}\mathbb{U}{w}$ , where ${{{B}}_{u}} = {\Sigma _{u}}\mathbb{U}\mathbb{U}{\Sigma _{u}}$ . Therefore, Eq. (1) can be expressed by using the control variable w: $\begin{split}J\left({w} \right) = & \frac{1}{2}{{w}^{\rm{T}}}{w} + \frac{1}{2}{\sum\nolimits_{{i} = 0}^{n} {\left({{{{H}}_{i} }{{{L}}_{0 \to {i}}}\mathcal{P}{\Sigma _{u}}\mathbb{U}{w} - {{d}_{i}}} \right)} ^{\rm{T}}}\\ & \cdot {{R}}_{i}^{ - 1}\left({{{{H}}_{i} }{{{L}}_{0 \to {i}}}\mathcal{P}{\Sigma _{u}}\mathbb{U}{w} - {{d}_{i}}} \right).\end{split}$ (4) 2.2 The L-BFGS and CGA in GRAPES 4DVar The L-BFGS algorithm (Appendix A) in the GRAPES 4DVar system uses the estimation to the inverse Hessian matrix to guide its search through the variable space. For the L-BFGS in the GRAPES 4DVar scheme, the initial Hessian matrix is the identity matrix, and the number of iterations m insomuch that the m previous values sk and zk are stored to compute the approximation of the inverse Hessian matrix is 12. The CGA based on the Lanczos iteration (Appendix B) in GRAPES 4DVar is mainly applied to solve large sparse, symmetric, positive definite linear equations (Paige and Saunders, 1982). With this combination, the orthogonalization of the Lanczos algorithm can sufficiently overcome the instability of the CGA in providing practical solutions to the above equations. The Hessian matrix ${J''}$ in Eq. (3) is a sparse, real, symmetric, positive definite matrix that can be computed by using B, R, H, L, and LT. The convergence efficiency of the inner loop minimization of 4DVar is largely determined by the shape of the Hessian matrix, and the computational efficiency largely depends on that of the tangent linear model L and the adjoint model LT as well as the number of iterations in the minimization. Therefore, this approach effectively improves the computational efficiency of the 4DVar minimization by choosing an efficient iterative minimization algorithm. 2.3 Orthogonalization of the CGA Rounding errors greatly affect the behavior of the Lanczos iteration for a practical minimization problem (Paige, 1970). 
For a 4DVar system in particular, there are often some computational errors from the tangent model and adjoint model of the Hessian matrix ${J''}$ as well as rounding errors from the iterations. These errors lead to a quick loss of orthogonality in the Lanczos vectors ${{q}_{k}}$ in addition to the problem of “ghost” eigenvalues during the Lanczos iterations. Moreover, there are multiple eigenvalues of ${{{T}}_{k}}$ that correspond to simple eigenvalues of the Hessian matrix ${J''}$ ; this results in additional iterations and convergence failure. Thus, the application of the Lanczos algorithm can easily cause numerical instabilities in the solutions of large symmetric matrices. However, this issue can be overcome by conducting Gram-Schmidt orthogonalization on the Lanczos vectors (Paige, 1970), which is conducted primarily by three methods as follows: (1) Full orthogonalization (Paige, 1970). This process conducts Gram-Schmidt orthogonalization to make the Lanczos vector ${{q}_{{k} + 1}}$ orthogonal to all of the previously computed Lanczos vectors. In detail, the Gram-Schmidt orthogonalization is applied to the residual vector ${{r}_{{k + }{1}}}$ derived from the third step of the Lanczos algorithm [Eq. (B4)] and the Lanczos vector groups $\left({{{q}_{1,\cdots,}}{{q}_{k}}} \right)$ , i.e., ${{r}_{k + 1}} = {{r}_{k + 1}} - \displaystyle\sum\nolimits_{{i} = 1}^{k} {\left\langle {{{r}_{k + 1}}, {{q}_{i}}} \right\rangle } {{q}_{i}}$ . Thus, the Lanczos vector ${{q}_{{k} + 1}}$ will be orthogonal to the previously computed Lanczos vectors $\left({{{q}_{1,\cdots,}}{{q}_{k}}} \right)$ . (2) Partial orthogonalization (Simon, 1984). Consequently, instead of orthogonalizing ${{q}_{{k} + 1}}$ against all the previously computed Lanczos vectors, the same effect can be achieved by orthogonalizing ${{q}_{{k} + 1}}$ against the previously computed Lanczos vectors that are not orthogonal to ${{q}_{{k} + 1}}$ . The detailed steps are similar to those in the full orthogonalization method. However, this method reduces the number of orthogonalized inner products and therefore improves the computational efficiency. (3) Selective orthogonalization (Parlett and Scott, 1979). The method is similar to partial orthogonalization but orthogonalizing ${{q}_{{k} + 1}}$ against the much smaller set of converged eigenvectors of the Hessian matrix ${J''}$ . This method can avoid some calculations of repeated eigenvalues, reduce the additional Lanczos iterations, and improve the computational efficiency. However, extra space is needed to store the eigenvectors. The CGA has been successfully applied in the 4DVar system of ECMWF (Fisher, 1998; Trémolet, 2007). However, there are many differences between the GRAPES and ECMWF 4DVar systems. First, the ECMWF tangent linear model and adjoint model use a hydrostatic framework with spectral and reduced grids, while those in GRAPES employ a non-hydrostatic framework with a latitude–longitude grid. Especially in polar regions, the denser grid distribution of GRAPES adds a gradient sensitivity computed by the adjoint model, leading to an increase in the condition number of the Hessian matrix ${J''}$ , thereby affecting the convergence rate. Second, the state variables of assimilation and the tangent linear model variables are the same as those in the ECMWF 4DVar system. However, there is a variable physical transform between the tangent linear model and the assimilation system (see Section 2.1). 
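The effect of full Gram-Schmidt orthogonalization can be demonstrated on a small dense symmetric matrix. The sketch below is a generic Lanczos iteration with an optional reorthogonalization pass corresponding to scheme (1) above; it stands in for, but is not, the GRAPES implementation, where each product with the Hessian involves one tangent linear and one adjoint integration.

```python
import numpy as np

def lanczos(A, q1, k, reorthogonalize=True):
    """Lanczos iteration on a symmetric matrix A starting from vector q1.

    Returns the Lanczos vectors Q (n x k) and the tridiagonal entries.
    With reorthogonalize=True, each new residual is re-orthogonalised against
    all previous Lanczos vectors (full orthogonalization; Paige, 1970).
    """
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = q1 / np.linalg.norm(q1)
    q_prev, b_prev = np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        r = A @ q - b_prev * q_prev          # three-term recurrence
        alpha[j] = q @ r
        r = r - alpha[j] * q
        if reorthogonalize:
            # Gram-Schmidt pass against all previously computed Lanczos vectors
            r = r - Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)
        beta[j] = np.linalg.norm(r)
        if beta[j] == 0.0:
            break
        q_prev, b_prev, q = q, beta[j], r / beta[j]
    return Q, alpha, beta

# Symmetric test matrix with a wide eigenvalue spread, so that loss of
# orthogonality shows up within a few dozen iterations.
rng = np.random.default_rng(2)
n, k = 200, 60
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = V @ np.diag(np.logspace(0, 4, n)) @ V.T

q1 = rng.normal(size=n)
for flag in (False, True):
    Q, _, _ = lanczos(A, q1, k, reorthogonalize=flag)
    loss = np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1]))
    print(f"reorthogonalize={flag}: ||Q^T Q - I|| = {loss:.2e}")
```

Without the extra pass, ||Q^T Q - I|| typically grows by many orders of magnitude once the extreme Ritz values converge, which is the same mechanism behind the "ghost" eigenvalues and the convergence failure discussed above.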
3 Data and experiment To further analyze the effectiveness of the CGA in a practical 4DVar system, we conduct one cycling assimilation experiment for a month. The time ranges from 0900 UTC 1 June to 0900 UTC 1 July 2016. The data used for the assimilation include conventional Global Telecommunication System (GTS) observations, including temperature, wind and relative humidity data derived from sounding, pressure data from ships and the Synoptic Ocean Prediction (SYNOP) experiment, and wind data from pilot readings, in addition to data from satellite-based platforms, such as the NOAA-15 Advanced Microwave Sounding Unit-A (AMSUA), NOAA-18 AMSUA, NOAA-19 AMSUA, MetOp-A AMSUA, MetOp-B AMSUA, National Polar-orbiting Partnership (NPP) Advanced Technology Microwave Sounder (ATMS) AMSUA, SeaWinds scatterometer, and Global Navigation Satellite System (GNSS) radio occultations. Satellite observations compose approximately 70% of the total observations. The assimilation window is 6 h, and the observational interval is 30 min. The horizontal resolution of the outer loop is 0.5°, and the model time step is 450 s. The horizontal resolution of the inner loop is 1.5°, and the model time step is 900 s. The number of vertical levels is 60, and the maximum number of iterations is 70 in the 4DVar minimization. The following linearized physical processes are used in this 4DVar experiment: two dry linearized physical processes (vertical diffusion and subgrid-scale orographic effects) to improve the representation of perturbed fields in the tangent linear model (Liu et al., 2017), and two newly developed moist linearized physical parameterizations consisting of deep cumulus convection based on a new simplified Arakawa-Shubert scheme (Han and Pan, 2006) and the large-scale cloud and precipitation scheme described in Tompkins and Janisková (2004). The experimental environment is based on the high-performance computer (Sugon PI) at the China Meteorological Administration. In total, 256 CPU cores are used in these experiments. Two configurations of 4DVar experiments are tested: (1) CGA experiments, in which the CGA is used for minimization in the 4DVar system; and (2) L-BFGS experiments, in which the L-BFGS is used for minimization in the 4DVar. 4 Results of the CGA in 4DVar We perform numerical experiments on the GRAPES 4DVar system to investigate the convergence of the CGA therein. The experimental configuration is the same as that in the batch experiments in Section 3, which begin at 0900 UTC 1 June 2016, and the number of iterations is 120 in the minimization. Then, the numerical stability of the Lanczos algorithm in the 4DVar is tested against the four orthogonalization schemes described in Section 2.3: full orthogonalization, partial orthogonalization, selective orthogonalization, and without orthogonalization. 4.1 Orthogonalization analysis of the CGA The convergence of the gradient norm $\left\| {\nabla {J}\left({\delta {x}} \right)} \right\|$ [Eq. (2)] with the non-orthogonalized Lanczos vector ${{q}_{k}}$ is shown in Fig. 1. The gradient norm fails to converge starting at the 9th iteration, which is partly the result of computational errors. As the iteration continues, the reduced orthogonality of the Lanczos vector ${{q}_{k}}$ gives rise to a higher gradient norm. However, the convergence of the gradient norm is much better after performing full orthogonalization on the Lanczos vector ${{q}_{k}}$ (blue dashed line in Fig. 1). 
In addition, the results of the first nine iterations are the same as those without orthogonalization. This outcome indicates that the orthogonalization on Lanczos vector ${{q}_{k}}$ does not change the iteration results of the Lanczos algorithm when the effect of the computational errors is small, while the orthogonalization on the Lanczos vector ${{q}_{k}}$ can effectively eliminate the effects of computational errors, leading to the stable convergence of the Lanczos algorithm when the computational errors become larger. Further, the results of partial orthogonalization (red dotted line in Fig. 1) on the Lanczos vector ${{q}_{k}}$ are the same as those of full orthogonalization, and selective orthogonalization also produces the same results as full orthogonalization. Figure 1 Convergence of the conjugate gradient norm as a function of the number of iterations for a 4DVar cost function. The vertical axis is the square of the gradient norm, and it denotes the difference between the control vector at a given iteration and its 120th iteration (black solid line: without orthogonalization; red dashed line: full orthogonalization; blue dotted line: partial orthogonalization; vertical dotted line shows that the gradient norm fails to converge starting at the 9th iteration without orthogonalization). The eigenvalue distribution of the Hessian matrix ${J''}$ under different orthogonalization methods is illustrated in Fig. 2. The eigenvalue distribution without orthogonalization on the Lanczos vector ${{q}_{k}}$ is indicated by the solid line (Fig. 2). There are 53 convergent eigenvalues in total (circles on the solid line in Fig. 2); many repeated eigenvalues are associated with redundant iterations due to the loss of orthogonality of the Lanczos vector ${{q}_{k}}$ during the iterations. This Lanczos algorithm is numerically unstable in the 4DVar minimization. The red dashed line in Fig. 2 shows the eigenvalue distribution with full orthogonalization performed on the Lanczos vector ${{q}_{k}}$ . Moreover, the number of convergent eigenvalues (triangles on the dashed line in Fig. 2) is 53, but these eigenvalues are no longer repeated. This result implies that the Lanczos algorithm is stable in the 4DVar minimization after conducting full orthogonalization on the Lanczos vector ${{q}_{k}}$ . The number of convergent eigenvalues (blue dotted line in Fig. 2) with partial orthogonalization applied to the Lanczos vector ${{q}_{k}}$ is 49, which is 4 fewer than that with full orthogonalization. However, the eigenvalue distribution with partial orthogonalization is generally similar to that with full orthogonalization. Similarly, the eigenvalue distribution with selective orthogonalization is also generally similar to that with full orthogonalization. Therefore, the Lanczos algorithm is stable in the 4DVar minimization using full orthogonalization, partial orthogonalization, or selective orthogonalization. Figure 2 Eigenvalue distribution of the Hessian matrix of a 4DVar minimization with different schemes of the orthogonalization of Lanczos vectors against the number of iterations (black solid line: without orthogonalization; red dashed line: full orthogonalization; blue dotted line: partial orthogonalization). 4.2 Convergence analysis of the CGA In the 4DVar minimization, the convergence rate of the CGA depends on the eigenvalue distribution of the Hessian matrix ${J''}$ and the condition number κ (the ratio of the maximum eigenvalue to the minimum eigenvalue). 
The convergence estimation of the CGA, namely, the conjugated error $\left({{{e}_{j}} = \delta {x} - \delta {{x}_{j}}} \right)$ , is based on the norm of the Hessian matrix, and it satisfies the following (Paige, 1970; Fisher, 1998): $\left\| {{{e}_{j}}} \right\|_{{J''}}^2 \!=\! {\left({\delta {x} \!-\! \delta {{x}_{j}}} \right)^{\rm{T}}}{J''}\left({\delta {x} \!-\! \delta {{x}_{j}}} \right) \leqslant 2{\left({\frac{{\sqrt {\kappa } \!-\! 1}}{{\sqrt {\kappa } \!+\! 1}}} \right)^{j}}\left\| {{{e}_{0}}} \right\|_{{J''}}^2.$ (5) Here, δx is the solution of the 4DVar minimization in Eq. (1) (the value of which is the estimated solution of the last iteration of the CGA), while $\delta {{x}_{j}}$ is the estimated solution at the jth iteration of CGA. According to Eq. (B8), the CGA should converge better than the linear algorithm. Moreover, the convergence can be improved by the pre-optimization step of reducing the condition number. To explore the convergence of the CGA in the GRAPES 4DVar system, we conduct an assimilation experiment (beginning at 0900 UTC 1 June 2016) with 120 iterations of 4DVar minimization. The maximum (minimum) eigenvalue of the Hessian matrix ${J''}$ estimated by the CGA in the 4DVar minimization is 4492.1 (1.03). Per Eq. (5), the convergence rate ${{\left({\sqrt {\kappa } - 1} \right)}/{\left({\sqrt {\kappa } + 1} \right)}}$ estimated by the condition number is 0.970, and the upper bound on the convergence rate is expressed by the solid line in Fig. 3. This result implies that the convergence of the Hessian matrix is unsatisfactory. However, in a practical calculation of the 4DVar minimization based on the CGA, the Hessian norm of the true iteration error $\left\| {{{e}_{j}}} \right\|_{{J''}}^2$ decreases in magnitude from 103 to 10–2 after 120 iterations. The descent rate is clearly quicker than the convergence rate estimated by the condition number, which constitutes superlinear convergence. The above results are consistent with those based on the Integrated Forecasting System of ECMWF (Fisher, 1998). Figure 3 Convergence of the CGA as a function of the number iterations for a 4DVar cost function. The dashed line is the square of the Hessian norm of the difference between the control vector and the value of the last iteration. The solid line is the upper bound of the convergence rate defined by Eq. (5). In short, the Lanczos algorithm is numerically more stable in the 4DVar minimization if the Lanczos vector is orthogonal during the Lanczos iterations. Thus, performing Gram-Schmidt orthogonalization on the Lanczos vector is an effective way to ensure the numerical stability in the Lanczos algorithm. In this way, the convergence rate of 4DVar minimization is also improved. Considering the short computational time for orthogonalization in the whole 4DVar system, we exploit the Lanczos algorithm with full orthogonalization in the GRAPES 4DVar system to guarantee the orthogonality of the Lanczos vector. 5 Results of numerical experiments in 4DVar 5.1 Comparison of convergence with the L-BFGS The CGA and quasi-Newton method are both quadratically convergent in theory and have the same quadratic termination property. However, these methods behave quite differently in practical applications, especially when applied to certain problems such as solving the minimization of 4DVar. To better compare the convergences of these two methods in 4DVar minimization problems, both cycling assimilation experiments begin on 10 June 2016. 
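Returning briefly to the bound in Eq. (5): the per-iteration rate quoted in Section 4.2 follows directly from the reported extreme eigenvalues, and comparing the resulting bound with the observed decrease makes the superlinear behaviour explicit. This is purely a back-of-the-envelope check of the numbers given above.

```python
import math

# Extreme Hessian eigenvalues reported in Section 4.2.
lam_max, lam_min = 4492.1, 1.03
kappa = lam_max / lam_min                              # condition number, ~4361
rate = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)
print(f"kappa = {kappa:.1f}, per-iteration bound = {rate:.3f}")   # ~0.970

# Eq. (5) after j iterations: ||e_j||^2 <= 2 * rate**j * ||e_0||^2.
# After 120 iterations this only guarantees a reduction of roughly a factor of 20,
print(f"bound after 120 iterations: {2 * rate**120:.2e}")
# whereas the experiment shows a drop of about five orders of magnitude --
# i.e., the observed convergence is superlinear, as stated above.
```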
Recalculations are performed in the 4DVar experiments with the CGA using the background from the L-BFGS tests. The ratio of the root mean square of the gradient norm to its initial value is regarded as the convergence criterion (set as 0.03) during the minimization iterations. The maximum number of iterations is 70. In the first dozen 4DVar minimization iterations in the four assimilation experiments, the gradient norms of the CGA and L-BFGS experiments both decrease with some oscillations, and a smaller amplitude is observed for the CGA experiments (Fig. 4). Then, the oscillation of the gradient norm for the CGA experiments becomes small, and these experiments satisfy the convergence criterion after approximately 40 iterations. However, the oscillation of the gradient norm for the L-BFGS experiments is still large, and the experiments do not converge after 70 iterations. In addition, the gradient norm in the L-BFGS experiments descends very slowly in the later stages, and the minimum is similar to that after approximately 40 iterations in the CGA experiments (dotted line in Fig. 4). The above results indicate that the convergence of the CGA is better than that of the L-BFGS in the 4DVar minimization. Moreover, the computational cost of the CGA is lower. Therefore, the CGA is preferable. Figure 4 Convergence of the Hessian norm for a 4DVar cost function under the GRAPES global 4DVar assimilation system starting at (a) 0300 UTC 10 June 2016, (b) 0900 UTC 10 June 2016, (c)1500 UTC 10 June 2016, and (d) 2100 UTC 10 June 2016 (solid line: CGA experiment; dotted line: L-BFGS experiment). To better compare the convergences of the cost functions between the two sets of experiments, normalization is applied to the cost function to calculate the ratio of all the cost functions to the initial cost function within the iterations. The convergences of the cost functions for both the CGA experiments and the L-BFGS experiments on 20 June 2016 in the 4DVar cyclical assimilations are illustrated in Fig. 5. In the first twenty iterations, the descent rate of the cost functions of the CGA experiments are faster than those of the L-BFGS experiments. The convergence in the CGA experiments after 40 iterations is similar to that in the L-BFGS experiments after 70 iterations. Thus, the convergence rate of the CGA is much faster than that of the L-BFGS in the 4DVar minimization. Figure 5 Convergence of the 4DVar cost function under the GRAPES global 4DVar assimilation system starting at (a) 0300 UTC 20 June 2016, (b) 0900 UTC 20 June 2016, (c) 1500 UTC 20 June 2016, and (d) 2100 UTC 20 June 2016 (solid line: CGA experiment; dotted line: L-BFGS experiment; horizontal dashed line: final convergence rate of the cost function in the CGA experiment). 5.2 Computational efficiency In the 4DVar minimization, the term ${J''}{{q}_{k}}$ should be calculated with the iteration of the tangent and adjoint models. Therefore, the computational cost, which is determined by the number of iterations, is very high. Hence, improving the convergence rate and reducing the number of iterations is an effective way to improve the computational efficiency of the 4DVar minimization. The numbers of iterations and the calculation times for the 4DVar minimization in the 121 cyclical assimilation tests for both experimental sets are plotted in Fig. 6. The CGA experiments satisfy the requirement for convergence within a maximum of 70 iterations in the minimization. 
The average number of iterations to reach convergence is 37, and the average minimization time in these CGA experiments is 861 s. However, most of the L-BFGS experiments do not meet this convergence requirement within the maximum number of iterations: in some iterations the cost function does not decrease under the L-BFGS minimization, so another descent direction must be chosen, which requires additional calculations. As a result, the number of iterations for some L-BFGS experiments exceeds the maximum of 70. Furthermore, the average number of iterations in these L-BFGS experiments is 68, and the average minimization time in these L-BFGS experiments is 1443 s. The average number of iterations in the CGA experiments is 32 fewer than that in the L-BFGS experiments, representing a 45% improvement in the computational efficiency. Hence, the CGA can greatly improve the computational efficiency without affecting the convergence in the GRAPES 4DVar system.
Figure 6 (a) Number and (b) time of iterations in the 4DVar minimization in the 121 cyclical assimilation tests for the two experimental sets (solid line: CGA experiment; dashed line: L-BFGS experiment; AVG: the average number or time of iterations over one month).
5.3 The assimilation and forecasting results
To compare the assimilation results of the batch experiments more reasonably, we use the 21-day (10 to 30 June 2016) results of the cycling assimilation tests to avoid the influence of the initial field. A statistical analysis of the batch background and analysis deviations from the radiosonde temperature observations for the L-BFGS experiment (shown in black) and the CGA experiment (shown in red) is shown in Fig. 7. The two experiments show very similar standard deviations (Fig. 7a) and biases (Fig. 7b) of the background fields and analysis fields at all levels. In addition, the statistics for the other types of observations are also similar between the two experiments. This shows that the estimated solutions [Eq. (1)] from the two minimization algorithms are both reasonable once they reach the same convergence level in the 4DVar minimization (Fig. 6), and it validates the potential of both iterative methods for the GRAPES 4DVar minimization.
Figure 7 (a) Standard deviations and (b) biases of background and analysis fields from radiosonde temperature observations for the L-BFGS experiment (black) and CGA experiment (red) (solid line: background departure o-b; dotted line: analysis departure o-a).
6 Conclusions and discussion
In this paper, a CGA based on the Lanczos iteration is investigated for the GRAPES tangent linear and adjoint models, which account for latitudinal and longitudinal grid characteristics under a non-hydrostatic framework. This approach solves the convergence problem through orthogonalization in the Lanczos iteration. The CGA produces equivalent analysis results with far fewer iterations and a higher computational efficiency than the L-BFGS in the batch experiments on the 4DVar system. This conclusion for the GRAPES 4DVar system is consistent with that for the ECMWF 4DVar system. However, the denser grid distribution of GRAPES increases the sensitivity of the gradient computed by the adjoint model, which raises the condition number of the Hessian matrix and thereby affects the convergence rate. This issue can nevertheless be addressed by orthogonalization.
Thus, the CGA is more suitable for the operational development of the GRAPES 4DVar system and, more generally, for minimization problems of this type. To further improve the convergence of the 4DVar minimization problem, we need to explore the preconditioned CGA based on the eigenvectors of the low-resolution minimization, perform additional outer loop updates in the framework of incremental analysis, and ultimately improve the 4DVar analysis technique.
Appendix A: A description of the L-BFGS
The L-BFGS algorithm is an optimization algorithm in the family of quasi-Newton methods that employs a limited amount of computer memory. The algorithm starts with an initial estimate $\delta x_0$, and the initial gradient of the cost function is $g_0 = \nabla J\left(\delta x_0\right)$. A positive definite initial approximation of the inverse Hessian matrix is defined as $E_0$ (which may be the identity matrix). Thus, the L-BFGS algorithm has the following basic structure for minimizing $J\left(\delta x\right)$ (Liu and Nocedal, 1989) for $k = 0, 1, \cdots$:
Step 1. Compute the search direction $d_k = -E_k g_k$, and set $\delta x_{k+1} = \delta x_k + \alpha_k d_k$, where $\alpha_k$ is the step size obtained by a safeguarded line-search procedure, $E_k$ is the approximation of the inverse Hessian matrix, and $g_k = \nabla J\left(\delta x_k\right)$.
Step 2. Set $s_k = \delta x_{k+1} - \delta x_k$ and $z_k = g_{k+1} - g_k$. To reduce the memory usage in the algorithm, $E_{k+1}$ is generally updated using only the previous $m$ pairs $s_k, s_{k-1}, \cdots, s_{k-m}$ and $z_k, z_{k-1}, \cdots, z_{k-m}$: $E_{k+1} = \left(I - \rho_k s_k z_k^{\rm T}\right) E_k \left(I - \rho_k z_k s_k^{\rm T}\right) + \rho_k s_k s_k^{\rm T}$, where $\rho_k = \left(z_k^{\rm T} s_k\right)^{-1}$.
Step 3. Generate a new search direction $d_{k+1} = -E_{k+1} g_{k+1}$, and then go to step 1.
The L-BFGS algorithm attempts to combine modest storage and computational requirements for minimizing $J\left(\delta x\right)$. Therefore, with a small number of stored pairs ($k < m$), $E_{k+1}$ captures only limited information about the Hessian matrix, and thus the convergence efficiency of the L-BFGS algorithm is affected.
Appendix B: A description of the CGA based on Lanczos iterations
The CGA searches along conjugate basis vectors of the Krylov subspace $\mathcal{K}\left(J'', r_0\right)$ and derives the minimum of the target function (Fletcher and Reeves, 1964). Here, $r_0 = b - J''\delta x_0$ is the initial residual. The Lanczos approach converts a large sparse symmetric matrix into a symmetric tridiagonal matrix by an orthogonal similarity transform (Paige, 1970). The Lanczos approach is applied to the Hessian matrix to iteratively generate a tridiagonal matrix T and an orthogonal matrix Q satisfying the relation $T = Q^{\rm T} J'' Q$ (Golub and Van Loan, 1996).
After k steps of Lanczos iterations, we generate a matrix $Q_k = \left[ q_1, \cdots, q_k \right]$ with orthonormal columns and a tridiagonal matrix $T_k = \left[ \begin{array}{cccc} \alpha_1 & \beta_1 & 0 & 0 \\ \beta_1 & \alpha_2 & \ddots & 0 \\ 0 & \ddots & \ddots & \beta_{k-1} \\ 0 & 0 & \beta_{k-1} & \alpha_k \end{array} \right]$. Equating the columns in $J'' Q = QT$, we obtain $J'' q_k = \beta_{k-1} q_{k-1} + \alpha_k q_k + \beta_k q_{k+1}.$ (B1)
The CGA based on Lanczos iterations starts with an initial estimate $\delta x_0$; the initial gradient of the cost function is $g_0 = \nabla J\left(\delta x_0\right)$, and the initial descent direction is $d_0 = -g_0$, while the initial Lanczos vector is $q_1 = d_0 / \left\| d_0 \right\|_2$, with $\beta_0 = 0$ and $q_0 = 0$. We have the following basic structure for minimizing $J\left(\delta x\right)$ (Golub and Van Loan, 1996) for $k = 1, 2, \cdots$:
Step 1. Calculate the multiplication of the matrix and vector at the kth step: $g_k = J'' q_k,$ (B2) where the largest computational cost of the whole iterative algorithm lies, because the Hessian matrix is applied using the tangent linear model $L$ and the adjoint model $L^{\rm T}$, the calculations of which are very large and time consuming.
Step 2. Estimate the kth diagonal element of the matrix T: $\alpha_k = \left\langle g_k, q_k \right\rangle.$ (B3) Here, the notation $\left\langle \cdot, \cdot \right\rangle$ stands for the inner product.
Step 3. Calculate the residual vector: $r_k = g_k - \alpha_k q_k - \beta_{k-1} q_{k-1}.$ (B4)
Step 4. Calculate the off-diagonal element of the matrix T: $\beta_k = \left\| r_k \right\|_2.$ (B5)
Step 5. Determine the Lanczos vector $q_{k+1}$ for the next iteration: $q_{k+1} = r_k / \beta_k,$ (B6) which is equivalent to normalizing the residual vector $r_k$.
Equation (B1) may be written in matrix form as follows: $J'' Q_k = Q_k T_k + r_k \left(\mu_k\right)^{\rm T},$ (B7) where $\left(\mu_k\right)^{\rm T} = \left(0, \cdots, 0, 1\right)$. Then, the tridiagonal linear system $T_k \delta y_k = Q_k^{\rm T} b$, built from the Lanczos matrix $T_k$, is solved to obtain the solution $\delta y_k$ of the kth step. Further, the kth approximate solution of the minimization [Eq. (1)] is estimated from the Lanczos vectors $\left(q_1, \cdots, q_k\right)$: $\delta x_k = \delta x_0 + \sum\nolimits_{i=1}^{k} \left(\delta y_k\right)_i q_i = \delta x_0 + Q_k \delta y_k.$ (B8)
In addition, the eigenvalues and eigenvectors of the Lanczos matrix $T_k$ can be estimated during the iteration of the Lanczos approach. The eigenvectors of $T_k$, when pre-multiplied by $Q_k$, approximate the eigenvectors of the Hessian matrix $J''$. We can use the eigenvectors of the Hessian matrix to estimate the covariance matrix of the analysis errors because the error matrix is equal to the inverse of the Hessian matrix in the variational assimilation (Fisher, 1998). This relation can be used to precondition the CGA and improve the convergence rate.
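For readers who want to see the recursion of steps B2 to B6 as executable code, here is a minimal Python/NumPy sketch with full re-orthogonalization, applied to a small random symmetric matrix standing in for the Hessian $J''$. It only illustrates the recursion and the relation $T_k = Q_k^{\rm T} J'' Q_k$; it is not the GRAPES implementation, and it ignores breakdown handling.

```python
import numpy as np

def lanczos(A, d0, m):
    """Lanczos recursion (steps B2-B6) with full Gram-Schmidt re-orthogonalization.
    Returns Q_m (orthonormal columns) and the tridiagonal T_m."""
    n = len(d0)
    Q = np.zeros((n, m))
    alphas, betas = np.zeros(m), np.zeros(m)
    q = d0 / np.linalg.norm(d0)             # q_1 = d_0 / ||d_0||_2
    q_prev, beta_prev = np.zeros(n), 0.0    # q_0 = 0, beta_0 = 0
    for k in range(m):
        Q[:, k] = q
        g = A @ q                           # step B2: matrix-vector product
        alphas[k] = g @ q                   # step B3: diagonal element alpha_k
        r = g - alphas[k] * q - beta_prev * q_prev       # step B4: residual
        r -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ r)         # full re-orthogonalization
        beta_prev = np.linalg.norm(r)       # step B5: off-diagonal element beta_k
        betas[k] = beta_prev                # (breakdown beta_k = 0 not handled in this sketch)
        q_prev, q = q, r / beta_prev        # step B6: next Lanczos vector
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    return Q, T

# small symmetric test problem
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T
Q, T = lanczos(A, rng.standard_normal(30), 10)
print(np.allclose(Q.T @ A @ Q, T, atol=1e-6))   # T = Q^T A Q on the Krylov subspace
```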
Acknowledgments. The authors thank the editor and two anonymous reviewers for their valuable comments and suggestions in improving this manuscript.
https://math.stackexchange.com/questions/2081091/counting-distinct-partition-sets-of-integer
# counting distinct partition sets of integer

I'm working on figuring out how to count the number of distinct partitions of the number N - this set of values http://oeis.org/A000009. From wikipedia (and other sources), there is a generating function for this: $$\prod_{k=1}^{\infty}\left(1+x^k\right).$$ What I'm struggling to understand is how I actually go from this function to calculating the number of partitions for some value N. I'll admit I'm not all that familiar with generating functions, but if someone could help me through an example of how we could use this (or if there's another way) to compute the number of distinct partitions where N = 5.

• This question recently appeared at this MSE link and also at this MSE link II. For small N you may simply expand the first N terms of the GF. – Marko Riedel Jan 2 '17 at 21:48
• @MarkoRiedel could you give an example of how that would work for this case? Some of these mathematical constructs are a bit new to me (or it's been a long while since I've studied them). – Jeff Storey Jan 2 '17 at 22:02
• I have added a proof of the recurrence for strict partitions into any number of parts at this MSE link. Combine with memoization for an efficient means of calculating these numbers. – Marko Riedel Jan 3 '17 at 0:13

To begin with, let us briefly explain the concept of a generating function and how the number of partitions is calculated from it. Next we discuss why the function given is the right one for the problem.

1) A generating function of some infinite sequence of numbers $$a(n), n = 0, 1, 2, ...$$ is defined as $$g(x)=\sum _{n=0}^{\infty } a(n) x^n$$ This means that a(n) is the coefficient of $$x^n$$ in the Taylor expansion of g(x) about x = 0. Hence for N = 5 $$g(x) = \prod _{k=1}^\infty \left(x^k+1\right) = 1 + x + x^2 + 2 x^3 + 2x^4 + 3 x^5 + ...,$$ which means N = 5 has 3 (the coefficient of $$x^5$$) distinct partitions.

2) Consider this product $$(x+1) \left(x^2+1\right) \left(x^3+1\right) ...$$ Multiplying out gives a sum of terms of the form $$x^{k_1+k_2+...}$$ where $$k_1+k_2+... = N$$ All $$k_i$$ must be different because they originate from different factors $$x^{k_i}$$ in the original product. The coefficient $$a(N)$$ obviously gives the number of possible combinations giving terms $$x^N$$ in the product. But this in turn is what we wish to calculate, the number of distinct partitions of the number $$N$$.

• Forgive me for the naive question here, but how did you get those coefficients? – Jeff Storey Jan 2 '17 at 22:21
• Multiply out the first factors of the product as shown in 2. – Dr. Wolfgang Hintze Jan 2 '17 at 22:26
• Ah, I see it now. Thank you. – Jeff Storey Jan 2 '17 at 22:27
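To make the expansion described in the answer concrete, here is a small Python sketch (not from the original thread) that multiplies out $\prod_{k=1}^{N}(1+x^k)$ term by term and reads off the coefficients; the printed list matches OEIS A000009.

```python
def distinct_partitions(n_max):
    # coeffs[i] = number of partitions of i into distinct parts,
    # i.e. the coefficient of x^i in prod_{k=1}^{n_max} (1 + x^k)
    coeffs = [0] * (n_max + 1)
    coeffs[0] = 1
    for k in range(1, n_max + 1):          # multiply the running polynomial by (1 + x^k)
        for i in range(n_max, k - 1, -1):  # descend so each part k is used at most once
            coeffs[i] += coeffs[i - k]
    return coeffs

print(distinct_partitions(10))  # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]; the coefficient of x^5 is 3
```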
https://berkeleygw.org/documentation/tutorial/tutorial-bethe-salpeter-equation/
# 1 Introduction to GW-BSE

## 1.1 Theory

The GW approximation yields accurate QP energies, but it does not yield accurate optical absorption spectra. For instance, in Fig. 1, which shows optical absorption of bulk Si calculated within the GW approximation and the inter-band transition model compared with experiment, you can see that the experimental spectrum has a different absorption strength from the GW inter-band transition results and has peak-like features, which are not present in the GW spectrum. This difference is not surprising, since GW plus inter-band transitions is a theory designed to describe single-particle excitations, while optical absorption is fundamentally a two-particle process. For instance, you can imagine a photon coming in and exciting a quasi-electron and a quasi-hole, but the quasi-electron and quasi-hole can interact. Thus, the true excitation is not a free quasi-electron and quasi-hole pair but rather a correlated quasi-electron and quasi-hole pair known as an exciton, which can be either a bona fide bound state or resonant (Fig. 2). To a good approximation, we can write the exciton state as a linear combination of quasi-electron and quasi-hole states $|S_{\mathbf{Q}}\rangle =\sum_{vc\mathbf{k}} A^S_{vc\mathbf{kQ}} |v\mathbf{k}\rangle\otimes|c\mathbf{k}+\mathbf{Q}\rangle,$ where $S$ indexes the exciton state; $\mathbf{Q}$ is the exciton’s center-of-mass momentum, and $A^S_{vc\mathbf{k}}$ is the amplitude of the free quasi-electron and quasi-hole pair consisting of an electron in state $|c\mathbf{k}+\mathbf{Q}\rangle$ and an electron missing from state $|v\mathbf{k}\rangle$. For two-particle excitations, we need to introduce the electron-hole correlation function $L$ [Strinati1988] $L(1,2;1',2') = -G_2(1,2;1',2') + G(1,1')G(2,2').$ Here, the notation $(1)$ represents the combined time, spin, and spatial coordinate; i.e. $(1)=(\mathbf{r}_1,\sigma_1,t_1)$, and $G_2$ is the two-particle Green’s function. We will also use $(\mathbf{x})$ to refer jointly to the spin and spatial coordinate; i.e. $(\mathbf{x})=(\mathbf{r},\sigma)$. The electron-hole correlation function obeys a Dyson equation known as the Bethe-Salpeter equation (BSE) $L(1,2;1',2') = L_0(1,2;1',2') + \int d(3456) L_0(1,4;1',3) K(3,5;4,6)L(6,2;5,2').$ Here, $L_0(1,2;1',2')=G(1,2')G(2,1')$ describes a non-interacting quasi-electron and quasi-hole pair, and $K$ is the electron-hole interaction kernel. Following Strinati [Strinati1988] and Rohlfing and Louie [Rohlfing2000], the BSE can be written as an effective eigenvalue problem. In this form the BSE Hamiltonian has the structure $H^{\mathrm{BSE}}(\mathbf{Q}) = (\varepsilon_{c\mathbf{k}+\mathbf{Q}}^{\mathrm{QP}}-\varepsilon^{\mathrm{QP}}_{v\mathbf{k}'})\delta_{\mathbf{k}+\mathbf{Q},\mathbf{k}'} + \left( \begin{array}{cc} K^{AA}(\mathbf{Q}) & K^{AB}(\mathbf{Q}) \\ K^{BA}(\mathbf{Q}) & K^{BB}(\mathbf{Q}) \\ \end{array} \right),$ where the kernel matrix elements in each block are calculated in the basis of the single-particle orbitals. The off-diagonal blocks ($K^{AB}$, $K^{BA}$) can usually be neglected as long as the energy of the electron-hole interaction is small compared with the QP gap. Then, the BSE Hamiltonian becomes $H^{\mathrm{BSE},\mathrm{TDA}}(\mathbf{Q}) = (\varepsilon_{c\mathbf{k}+\mathbf{Q}}^{\mathrm{QP}}-\varepsilon^{\mathrm{QP}}_{v\mathbf{k}'})\delta_{\mathbf{k}+\mathbf{Q},\mathbf{k}'} + K^{AA}(\mathbf{Q}).$ This is known as the Tamm-Dancoff approximation (TDA). The BSE kernel is found by taking the functional derivative of the self energy.
$K(3,5;4,6) = \frac{\delta[V_H(3)\delta(3,4) + \Sigma(3,4)]}{\delta G(6,5)}.$ Within the GW approximation for $\Sigma$, the BSE kernel becomes [Rohlfing1998,Albrecht1998,Rohlfing2000] $K(3,5;4,6) = -i\delta(3,4)\delta(5^-,6)v(3,6) + i \delta(3,6)\delta(4,5)W(3^+,4).$ We refer to the first term involving the bare Coulomb interaction as the exchange kernel ($K^x$) and the second term involving the screened Coulomb interaction as the direct kernel ($K^d$). When the spin-orbit interaction is small, the BSE matrix can be block-diagonalized and decoupled into spin-singlet and spin-triplet classes of solution. For the singlet solutions, the BSE kernel is $K^d+2K^x$. For the triplet solutions, there is no exchange contribution, and the BSE kernel is simply $K^d$. Only the singlet states are optically bright. Once we have the solutions of the BSE Hamiltonian, we can relate them to the optical spectra. Optical absorption and conductivity are proportional to the imaginary part of the macroscopic dielectric function, $\Im\epsilon_M$. The macroscopic dielectric function is defined as $\epsilon_M = \left(\frac{1}{\epsilon^{-1}}\right)_{\mathbf{G}=\mathbf{G}'=0}.$ Since we are only interested in optical properties, we want to avoid having to calculate and invert $\epsilon^{-1}$, which is a large matrix. We use the double inversion procedure of Pick, Cohen, and Martin [Pick1970,Hanke1978,Onida2002] to directly obtain $\epsilon_M$. In this procedure, we replace the Coulomb potential in Fourier space with a modified Coulomb potential, which does not include a long-range contribution. Then, we can construct $\Im \epsilon_M$ from the solutions of the modified BSE $\Im\epsilon_M(\omega) = \frac{8\pi^2e^2}{\omega^2} \sum_S |\hat{\mathbf{\lambda}}\cdot\langle 0|\mathbf{v}|S\rangle |^2 \delta(\omega-\Omega^S) \\ = \frac{8\pi^2e^2}{\omega^2} \sum_S |\sum_{vc\mathbf{k}} A^S_{vc\mathbf{k}}\hat{\mathbf{\lambda}}\cdot\langle v\mathbf{k}|\mathbf{v}|c\mathbf{k}\rangle |^2 \delta(\omega-\Omega^S)$ where $\hat{\mathbf{\lambda}}$ is the polarization vector, and $\mathbf{v}$ is the velocity operator. We are assuming $\mathbf{Q}\approx 0$ and dropping the $\mathbf{Q}$ index, since the momentum carried by light is very small. In the independent QP picture (i.e. neglecting excitonic effects), $\Im\epsilon_M$ becomes $\Im\epsilon_M = \frac{8\pi^2e^2}{\omega^2} \sum_{vc\mathbf{k}} |\hat{\mathbf{\lambda}}\cdot\langle v\mathbf{k}|\mathbf{v}|c\mathbf{k}\rangle |^2 \delta(\omega-\varepsilon^{\mathrm{QP}}_{c\mathbf{k}} + \varepsilon^{\mathrm{QP}}_{v\mathbf{k}}).$ A comparison of $\Im\epsilon_M$ in the BSE and independent QP inter-band transitions picture is shown in Fig. 1. You can see that including the excitonic effects from BSE results in optical spectra in excellent agreement with experiment.

## 1.2 Usage in BerkeleyGW

The optical properties of materials are computed in the Bethe-Salpeter equation (BSE) executables. Here the eigenvalue equation represented by the BSE is constructed and diagonalized, yielding the excitation energies and wavefunctions of the correlated electron-hole excited states. There are two main executables: kernel and absorption. In the former, the electron-hole interaction kernel is constructed on a coarse k-point grid, and in the latter the kernel is (optionally) interpolated to a fine k-point grid and diagonalized. First, the kernel executable constructs the direct and exchange kernels as matrices in the basis of electron-hole pairs.
The required input files are:
• epsmat and eps0mat: dielectric matrices from the epsilon step
• WFN_co: mean field wavefunction on a coarse k-grid
The exchange ($K^x$) and direct ($K^d$) matrix elements are $\langle vc\mathbf{kQ}|K^x|v'c'\mathbf{k}'\mathbf{Q}\rangle = \int d\mathbf{x}d\mathbf{x}' \phi^*_{c\mathbf{k}+\mathbf{Q}}(\mathbf{x})\phi_{v\mathbf{k}}(\mathbf{x})v(\mathbf{r},\mathbf{r}') \phi^*_{v'\mathbf{k}}(\mathbf{x}')\phi_{c'\mathbf{k}'+\mathbf{Q}}(\mathbf{x}') \\ \langle vc\mathbf{kQ}|K^d|v'c'\mathbf{k}'\mathbf{Q}\rangle = -\int d\mathbf{x}d\mathbf{x}' \phi^*_{c\mathbf{k}+\mathbf{Q}}(\mathbf{x})\phi_{c'\mathbf{k}'+\mathbf{Q}}(\mathbf{x})W(\mathbf{r},\mathbf{r}';\omega=0) \phi^*_{v'\mathbf{k}'}(\mathbf{x}')\phi_{v\mathbf{k}}(\mathbf{x}').$ The kernel matrices are output in the bsemat file.

### 1.2.1 Tips for Running Kernel

• If the number of CPUs is less than the number of k-points squared ($N_k^2$), $\mathbf{k}$ and $\mathbf{k}'$ pairs are distributed evenly over the CPUs. Thus, if you are using fewer CPUs than $N_k^2$, you should use a number of CPUs that divides evenly into $N_k^2$. Similarly, if your number of CPUs is greater than $N_k^2$ and less than $N_k^2\cdot N_c^2$, your number of CPUs should divide evenly into $N_k^2\cdot N_c^2$. If you are using more than $N_k^2\cdot N_c^2$ CPUs, the number of CPUs should divide evenly into $N_k^2\cdot N_c^2 \cdot N_v^2$, where $N_c$ and $N_v$ are respectively the number of valence and conduction bands.
• If each MPI task has enough memory to store the entire dielectric matrix, you should use the low_comm flag. This minimizes communication and makes the calculation faster.
• The kernel executable contains no check-pointing, so make sure to check your output file at the start of your calculation to see if you have enough walltime and memory to finish.
• The full list of kernel options can be found here.
The absorption code takes the bsemat file from kernel and constructs the BSE Hamiltonian. The required input files are:
• bsemat: kernel matrix
• WFN_co: the same coarse grid wavefunction used in the kernel step
• eqp_co.dat/eqp.dat (optional): QP energies from sigma on the same k-grid as WFN_co/WFN_fi
• WFN_fi (optional): wavefunction on a fine k-grid that can be used to interpolate the kernel matrix elements. This file is not needed if you choose not to interpolate (not recommended) or are studying a system without k-points.
• WFNq_fi (optional): wavefunction with a small k-shift with respect to the k-grid of WFN_fi. This is used to calculate the velocity matrix elements, which determine the oscillator strength. This file is not needed if you choose to use the momentum operator, which neglects the nonlocal parts of the pseudopotential.
• epsmat and eps0mat: dielectric matrices from the epsilon calculation

# 3 References

[Albrecht1998] Stefan Albrecht, Lucia Reining, Rodolfo Del Sole, and Giovanni Onida. Ab initio calculation of excitonic effects in the optical spectra of semiconductors. Phys. Rev. Lett., 80:4510–4513, May 1998.
[Cohen2016] M. L. Cohen and S. G. Louie. Fundamentals of Condensed Matter Physics. Cambridge University Press, 2016.
[Deslippe2012] Jack Deslippe, Georgy Samsonidze, David Strubbe, Manish Jain, Marvin L. Cohen, and Steven G. Louie. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures. Comput. Phys. Commun., 183:1269, 2012.
[Hanke1978] W. Hanke. Dielectric theory of elementary excitations in crystals.
Advances in Physics, 27(2):287–341, 1978.
[Onida2002] Giovanni Onida, Lucia Reining, and Angel Rubio. Electronic excitations: density-functional versus many-body Green’s-function approaches. Rev. Mod. Phys., 74(2):601–659, June 2002.
[Pick1970] Robert M. Pick, Morrel H. Cohen, and Richard M. Martin. Microscopic theory of force constants in the adiabatic approximation. Phys. Rev. B, 1:910–920, January 1970.
[Rohlfing1998] Michael Rohlfing and Steven G. Louie. Electron-hole excitations in semiconductors and insulators. Phys. Rev. Lett., 81(11):2312–2315, 1998.
[Rohlfing2000] Michael Rohlfing and Steven G. Louie. Electron-hole excitations and optical spectra from first principles. Phys. Rev. B, 62(8):4927–4944, August 2000.
[Strinati1988] G. Strinati. Application of the Green’s functions method to the study of the optical properties of semiconductors. Riv. Nuovo Cimento, 11:1, 1988.
http://www.dummies.com/how-to/content/how-to-determine-whether-an-alternating-series-con.html
An alternating series is a series where the terms alternate between positive and negative. You can say that an alternating series converges if two conditions are met:

1. Its nth term converges to zero.
2. Its terms are non-increasing; in other words, each term is either smaller than or the same as its predecessor (ignoring the minus signs).

Using this simple test, you can easily show many alternating series to be convergent. The terms just have to converge to zero and get smaller and smaller (they rarely stay the same). The alternating harmonic series, $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$, converges by this test, as do many similar series.

The alternating series test can only tell you that an alternating series itself converges. The test says nothing about the positive-term series. In other words, the test cannot tell you whether a series is absolutely convergent or conditionally convergent. To answer that question, you must investigate the positive series with a different test. (If the alternating series is convergent as it is, it must be either absolutely or conditionally convergent; it’s just that you can’t determine which it is unless you’re able to figure out whether or not the positive-term series converges.)

Now try the following problem. Determine the convergence or divergence of the series $\sum_{n=3}^{\infty} \frac{(-1)^n \ln n}{n}$. If convergent, determine whether the convergence is conditional or absolute.

1. Check that the nth term converges to zero. Here the nth term, $\frac{\ln n}{n}$, does go to 0 as n goes to infinity. Always check the nth term first because if it doesn’t converge to zero, you’re done: the alternating series and the positive series will both diverge. Note that the nth term test of divergence applies to alternating series as well as positive series.

2. Check that the terms decrease or stay the same (ignoring the minus signs). Consider $f(x) = \frac{\ln x}{x}$, whose derivative is $f'(x) = \frac{1 - \ln x}{x^2}$. This is negative for all x ≥ 3 (because the natural log of anything 3 or higher is more than 1 and x-squared, of course, is always positive), so the derivative and thus the slope of the function are negative, and therefore the function is decreasing. Finally, because the function is decreasing, the terms of the series are also decreasing. (Recall that ignoring any number of terms at the beginning of a series doesn’t affect whether the series converges or diverges or whether convergence is conditional or absolute; that’s why it’s okay to begin with x = 3 and n = 3.) That does it: $\sum_{n=3}^{\infty} \frac{(-1)^n \ln n}{n}$ converges by the alternating series test.

3. Determine the type of convergence. You can see that for n ≥ 3 the positive series, $\sum_{n=3}^{\infty} \frac{\ln n}{n}$, is greater than the divergent harmonic series, so the positive series diverges by the direct comparison test. Thus, the alternating series is conditionally convergent.

If the alternating series fails to satisfy the second requirement of the alternating series test, it does not follow that your series diverges, only that this test fails to show convergence.
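To see the behavior numerically (assuming the worked example is the series $\sum_{n\ge 3}(-1)^n \ln n / n$, as reconstructed above), the short Python sketch below computes partial sums of both the alternating series and its positive-term counterpart. The alternating sums settle toward a limit, while the positive-term sums keep growing, which is exactly what conditional convergence looks like in practice.

```python
import math

def partial_sums(N):
    """Partial sums up to N of the alternating and positive-term series."""
    alt, pos = 0.0, 0.0
    for n in range(3, N + 1):
        term = math.log(n) / n
        alt += (-1) ** n * term   # alternating series from the worked example
        pos += term               # positive-term series; grows roughly like (ln N)^2 / 2
    return alt, pos

for N in (10**3, 10**4, 10**5):
    a, p = partial_sums(N)
    print(N, round(a, 6), round(p, 2))
```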
https://latex-tutorial.com/tutorials/hyperlinks/
# How to make clickable links in LaTeX

Adding clickable links to LaTeX documents is very straightforward: you only have to add the hyperref package to your preamble. This package allows you to set links with a description as well as add bare urls to your document (News! For more details I have created Advanced LaTeX Cross-references).

% ...
\documentclass{article} % or any other documentclass
% ...
\usepackage{hyperref}
% ...
\begin{document}
% ...
\end{document}

After setting this up, you're ready to go and add links anywhere to your document. In order to add a link with a description (i.e. making a word clickable), you should use the href command like so:

% ...
\begin{document}
\href{http://www.latex-tutorial.com}{LaTeX tutorial}
\end{document}

You will notice that there's a colored box shown around the word. Don't worry, this box is not going to show up in your printed document, but only if you view it on your computer. If you simply want to embed a bare URL, you should use the url command instead, whose usage is even simpler:

% ...
\begin{document}
You can also link to bare URLs without an additional description:
\url{http://www.latex-tutorial.com}
\end{document}

Usually the default settings and colors are just fine, but they can also be customized if you want to. This can be done using the hypersetup command in your preamble. Since I've never used this feature in any document, I won't explain how to use it, but you can find a more detailed documentation of the hyperref package here, if you're curious.
https://www.zbmath.org/authors/?q=ai%3Ahe.xiaofei
# zbMATH — the first resource for mathematics ## He, Xiaofei Compute Distance To: Author ID: he.xiaofei Published as: He, X.; He, X. F.; He, Xiaofei Documents Indexed: 43 Publications since 1991 all top 5 #### Co-Authors 2 single-authored 7 Tang, Xianhua 7 Zhang, Qiming 4 Chen, Peng 3 Chen, Guoping 3 Xie, Jingli 3 Xu, Changjin 2 Bu, Jiajun 2 Cai, Deng 2 Chen, Chun 2 Liao, Maoxin 2 Lin, Binbin 2 Lü, Ke 1 Chen, Yan 1 Chen, Yi 1 Chen, Zhengguang 1 Guan, Ziyu 1 Hong, Bin 1 Jhala, Pradhuman 1 Ji, Ming 1 Jiang, Jianchu 1 Li, Xuelong 1 Liu, Ligang 1 Liu, Wei 1 Liu, Wei 1 Min, Wanli 1 Shen, Jianhua 1 Shen, Xukun 1 Tang, Xh 1 Tuo, Qing 1 Wang, Can 1 Wang, Jie 1 Xu, Shibiao 1 Yang, Shangming 1 Ye, Jieping 1 Yi, Zhang 1 Zhang, Chiyuan 1 Zhang, Feihu 1 Zhang, Jiemi 1 Zhang, Lijun 1 Zhang, Weizhong 1 Zhang, Xiaopeng 1 Zhou, Yuan all top 5 #### Serials 5 IEEE Transactions on Image Processing 4 Pattern Recognition 3 Advances in Difference Equations 2 Computers & Mathematics with Applications 2 Journal of Mathematical Analysis and Applications 2 Mathematical Methods in the Applied Sciences 2 Abstract and Applied Analysis 2 Journal of Machine Learning Research (JMLR) 2 International Journal of Mathematical Analysis (Ruse) 1 Nonlinear Dynamics 1 Journal of Inequalities and Applications 1 Electronic Journal of Qualitative Theory of Differential Equations 1 International Journal of Applied Mathematics and Computer Science 1 Journal of Changsha Communications University 1 Journal of Applied Mathematics 1 Communications on Pure and Applied Analysis 1 Mediterranean Journal of Mathematics 1 International Journal of Differential Equations all top 5 #### Fields 16 Ordinary differential equations (34-XX) 8 Computer science (68-XX) 5 Dynamical systems and ergodic theory (37-XX) 5 Difference and functional equations (39-XX) 5 Statistics (62-XX) 4 Information and communication theory, circuits (94-XX) 3 Real functions (26-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Biology and other natural sciences (92-XX) 1 Associative rings and algebras (16-XX) 1 Partial differential equations (35-XX) 1 Integral equations (45-XX) 1 Operator theory (47-XX) 1 Mechanics of particles and systems (70-XX) 1 Systems theory; control (93-XX) #### Citations contained in zbMATH 30 Publications have been cited 193 times in 178 Documents Cited by Year Bifurcation analysis in a delayed Lotka-Volterra predator-prey model with two delays. Zbl 1297.37044 Xu, Changjin; Tang, Xianhua; Liao, Maoxin; He, Xiaofei 2011 Newton-harmonic balancing approach for accurate solutions to nonlinear cubic-quintic duffing oscillators. Zbl 1168.34321 Lai, S. K.; Lim, C. W.; Wu, B. S.; Wang, C.; Zeng, Q. C.; He, X. F. 2009 Stability and Hopf bifurcation analysis for a Lotka-Volterra predator-prey model with two delays. Zbl 1231.34151 Xu, Changjin; Liao, Maoxin; He, Xiaofei 2011 Infinitely many solutions for a class of fractional Hamiltonian systems via critical point theory. Zbl 1336.34012 Chen, Peng; He, Xiaofei; Tang, X. H. 2016 Lyapunov-type inequalities for even order differential equations. Zbl 1276.34014 He, Xiaofei; Tang, X. H. 2012 Lower bounds for generalized eigenvalues of the quasilinear systems. Zbl 1247.34129 Tang, X. H.; He, Xiaofei 2012 Nonnegative local coordinate factorization for image representation. Zbl 1373.94082 Chen, Yan; Zhang, Jiemi; Cai, Deng; Liu, Wei; He, Xiaofei 2013 On inequalities of Lyapunov for linear Hamiltonian systems on time scales. 
Zbl 1229.34137 He, Xiaofei; Zhang, Qi-ming; Tang, Xianhua 2011 Lyapunov-type inequalities for a class of even-order differential equations. Zbl 1276.34016 Zhang, Qi-Ming; He, Xiaofei 2012 Stability and bifurcation analysis in a class of two-neuron networks with resonant bilinear terms. Zbl 1218.37122 Xu, Changjin; He, Xiaofei 2011 Vortex merging and spectral cascade in two-dimensional flows. Zbl 1027.76516 Nielsen, A. H.; He, X.; Rasmussen, J. Juul; Bohr, T. 1996 Existence and multiplicity of homoclinic solutions for second-order nonlinear difference equations with Jacobi operators. Zbl 1370.39004 Chen, Peng; He, Xiaofei 2016 Homoclinic solutions for second order discrete $$p$$-Laplacian systems. Zbl 1273.34050 He, Xiaofei; Chen, Peng 2011 Laplacian regularized D-optimal design for active learning and its application to image retrieval. Zbl 1371.94156 He, Xiaofei 2010 Positive solutions of fractional differential inclusions at resonance. Zbl 1273.26009 Chen, Yi; Tang, Xianhua; He, Xiaofei 2013 A discrete analogue of Lyapunov-type inequalities for nonlinear difference systems. Zbl 1298.39005 He, Xiaofei; Zhang, Qi-Ming 2011 Numerical simulation of pulse detonation engine phenomena. Zbl 1081.76574 He, X.; Karagozian, A. R. 2003 Locality pursuit embedding. Zbl 1070.68596 Min, Wanli; Lu, Ke; He, Xiaofei 2004 Parallel vector field embedding. Zbl 1317.68170 Lin, Binbin; He, Xiaofei; Zhang, Chiyuan; Ji, Ming 2013 On Lyapunov-type inequalities for nonlinear dynamic systems on time scales. Zbl 1236.34118 Zhang, Qi-Ming; He, Xiaofei; Jiang, Jianchu 2011 Integral BVPs for a class of first-order impulsive functional differential equations. Zbl 1207.34078 He, Xiaofei; Xie, Jingli; Chen, Guoping; Shen, Jianhua 2010 PM-PM: PatchMatch with Potts model for object segmentation and stereo matching. Zbl 1408.94740 Xu, Shibiao; Zhang, Feihu; He, Xiaofei; Shen, Xukun; Zhang, Xiaopeng 2015 Lyapunov-type inequalities and disconjugacy for some nonlinear difference system. Zbl 1368.39004 Zhang, Qi-Ming; He, Xiaofei; Tang, Xh 2013 Stability criteria for linear Hamiltonian dynamic systems on time scales. Zbl 1273.39002 He, Xiaofei; Tang, Xianhua; Zhang, Qi-Ming 2011 Lyapunov-type inequalities for some quasilinear dynamic system involving the $$(p_1, p_2, \dots, p_m)$$-Laplacian on time scales. Zbl 1235.93187 He, Xiaofei; Zhang, Qi-Ming 2011 Image representation using Laplacian regularized nonnegative tensor factorization. Zbl 1218.68134 Wang, Can; He, Xiaofei; Bu, Jiajun; Chen, Zhengguang; Chen, Chun; Guan, Ziyu 2011 Integral boundary value problems for first order impulsive differential inclusions. Zbl 1183.34034 Xie, Jingli; Chen, Guoping; He, Xiaofei 2009 Regularized query classification using search click information. Zbl 1138.68504 2008 The role of Ekman pumping and the dominance of swirl in confined flows driven by Lorentz forces. Zbl 0947.76093 Davidson, P. A.; Kinnear, D.; Lingwood, R. J.; Short, D. J.; He, X. 1999 Uniform convergence of polynomials associated with varying Jacobi weights. Zbl 0749.41011 He, X.; Li, X. 1991 Infinitely many solutions for a class of fractional Hamiltonian systems via critical point theory. Zbl 1336.34012 Chen, Peng; He, Xiaofei; Tang, X. H. 2016 Existence and multiplicity of homoclinic solutions for second-order nonlinear difference equations with Jacobi operators. Zbl 1370.39004 Chen, Peng; He, Xiaofei 2016 PM-PM: PatchMatch with Potts model for object segmentation and stereo matching. 
Zbl 1408.94740 Xu, Shibiao; Zhang, Feihu; He, Xiaofei; Shen, Xukun; Zhang, Xiaopeng 2015 Nonnegative local coordinate factorization for image representation. Zbl 1373.94082 Chen, Yan; Zhang, Jiemi; Cai, Deng; Liu, Wei; He, Xiaofei 2013 Positive solutions of fractional differential inclusions at resonance. Zbl 1273.26009 Chen, Yi; Tang, Xianhua; He, Xiaofei 2013 Parallel vector field embedding. Zbl 1317.68170 Lin, Binbin; He, Xiaofei; Zhang, Chiyuan; Ji, Ming 2013 Lyapunov-type inequalities and disconjugacy for some nonlinear difference system. Zbl 1368.39004 Zhang, Qi-Ming; He, Xiaofei; Tang, Xh 2013 Lyapunov-type inequalities for even order differential equations. Zbl 1276.34014 He, Xiaofei; Tang, X. H. 2012 Lower bounds for generalized eigenvalues of the quasilinear systems. Zbl 1247.34129 Tang, X. H.; He, Xiaofei 2012 Lyapunov-type inequalities for a class of even-order differential equations. Zbl 1276.34016 Zhang, Qi-Ming; He, Xiaofei 2012 Bifurcation analysis in a delayed Lotka-Volterra predator-prey model with two delays. Zbl 1297.37044 Xu, Changjin; Tang, Xianhua; Liao, Maoxin; He, Xiaofei 2011 Stability and Hopf bifurcation analysis for a Lotka-Volterra predator-prey model with two delays. Zbl 1231.34151 Xu, Changjin; Liao, Maoxin; He, Xiaofei 2011 On inequalities of Lyapunov for linear Hamiltonian systems on time scales. Zbl 1229.34137 He, Xiaofei; Zhang, Qi-ming; Tang, Xianhua 2011 Stability and bifurcation analysis in a class of two-neuron networks with resonant bilinear terms. Zbl 1218.37122 Xu, Changjin; He, Xiaofei 2011 Homoclinic solutions for second order discrete $$p$$-Laplacian systems. Zbl 1273.34050 He, Xiaofei; Chen, Peng 2011 A discrete analogue of Lyapunov-type inequalities for nonlinear difference systems. Zbl 1298.39005 He, Xiaofei; Zhang, Qi-Ming 2011 On Lyapunov-type inequalities for nonlinear dynamic systems on time scales. Zbl 1236.34118 Zhang, Qi-Ming; He, Xiaofei; Jiang, Jianchu 2011 Stability criteria for linear Hamiltonian dynamic systems on time scales. Zbl 1273.39002 He, Xiaofei; Tang, Xianhua; Zhang, Qi-Ming 2011 Lyapunov-type inequalities for some quasilinear dynamic system involving the $$(p_1, p_2, \dots, p_m)$$-Laplacian on time scales. Zbl 1235.93187 He, Xiaofei; Zhang, Qi-Ming 2011 Image representation using Laplacian regularized nonnegative tensor factorization. Zbl 1218.68134 Wang, Can; He, Xiaofei; Bu, Jiajun; Chen, Zhengguang; Chen, Chun; Guan, Ziyu 2011 Laplacian regularized D-optimal design for active learning and its application to image retrieval. Zbl 1371.94156 He, Xiaofei 2010 Integral BVPs for a class of first-order impulsive functional differential equations. Zbl 1207.34078 He, Xiaofei; Xie, Jingli; Chen, Guoping; Shen, Jianhua 2010 Newton-harmonic balancing approach for accurate solutions to nonlinear cubic-quintic duffing oscillators. Zbl 1168.34321 Lai, S. K.; Lim, C. W.; Wu, B. S.; Wang, C.; Zeng, Q. C.; He, X. F. 2009 Integral boundary value problems for first order impulsive differential inclusions. Zbl 1183.34034 Xie, Jingli; Chen, Guoping; He, Xiaofei 2009 Regularized query classification using search click information. Zbl 1138.68504 2008 Locality pursuit embedding. Zbl 1070.68596 Min, Wanli; Lu, Ke; He, Xiaofei 2004 Numerical simulation of pulse detonation engine phenomena. Zbl 1081.76574 He, X.; Karagozian, A. R. 2003 The role of Ekman pumping and the dominance of swirl in confined flows driven by Lorentz forces. Zbl 0947.76093 Davidson, P. A.; Kinnear, D.; Lingwood, R. J.; Short, D. J.; He, X. 
1999 Vortex merging and spectral cascade in two-dimensional flows. Zbl 1027.76516 Nielsen, A. H.; He, X.; Rasmussen, J. Juul; Bohr, T. 1996 Uniform convergence of polynomials associated with varying Jacobi weights. Zbl 0749.41011 He, X.; Li, X. 1991 all top 5 #### Cited by 376 Authors 9 Guo, Zhongjin 8 Shi, Haiping 7 Liu, Xia 6 Aktaş, Mustafa Fahri 6 Leung, Andrew Yee-Tak 6 Tang, Xianhua 6 Zhang, Qiming 6 Zhou, Tao 5 Yang, Hongxiang 5 Zhang, Zizhen 4 Beléndez, Augusto 4 Çakmak, Devrim 4 He, Xiaofei 4 Lo, Kueiming 4 Xu, Changjin 4 Yang, Huizhong 4 Yang, Xiaojing 4 Zhang, Xingyong 3 Álvarez, Mariela L. 3 Arribas, Enrique 3 Beléndez, Tarsicio 3 Biswas, Santanu 3 Chakraborty, Kunal 3 Jiang, Xiaowei 3 Lai, Siu Kai 3 Liu, Juan 3 Ntouyas, Sotiris K. 3 Samanta, Sudip K. 3 Samet, Bessem 3 Tariboon, Jessada 3 Tiryaki, Aydin 3 Zhang, Wei 2 Agarwal, Ravi P. 2 Ahmad, Bashir 2 Al-saedi, Ahmed Eid Salem 2 Arora, Charu 2 Bhatti, Harbax Singh 2 Chattopadhyay, Joydev 2 Chen, Sitong 2 Deng, Haiyun 2 Dhar, Joydip 2 Don, Wai Sun 2 Francés, Jorge 2 Gao, Zhen 2 Gao, Zu 2 Guan, Zhihong 2 Jleli, Mohamed 2 Kar, Tapan Kumar 2 Kim, Yong-In 2 Kumar, Vivek 2 Liao, Maoxin 2 Lim, Chi Wan 2 Özbekler, Abdullah 2 Pal, Nikhil Ranjan 2 Pascual, Carolina 2 Qian, Youhua 2 Saifuddin, Md 2 Sasmal, Sourav Kumar 2 Wang, Liben 2 Wang, Xuedi 2 Wu, Baisheng 2 Zhan, Xisheng 2 Zhang, Jie 2 Zhang, Yuanbiao 1 Ababneh, Faisal 1 Ahmetoğlu, Abdullah 1 Akbarzade, Mehdi 1 Al Arifi, Nassir 1 Al-Darabsah, Isam 1 Al-Dosari, Aeshah 1 Alsakaji, Hebatallah J. 1 Alshomrani, Ali Saleh 1 Altun, Ishak 1 Alzahrani, Abdullah Khamis Hassan 1 Alzahrani, Faris Saeed 1 Aphithana, Aphirak 1 Bachar, Imed 1 Bai, Chuanzhi 1 Bai, Yongzhen 1 Banerjee, Ritwick 1 Bangura, Hamza I. 1 Barari, Amin 1 Bassom, Andrew P. 1 Benhassine, Abderrazek 1 Bernabeu, G. 1 Bhadauria, Beer Singh 1 Bhowmick, Suman 1 Bi, Dianjie 1 Bleda, Sergio 1 Bogner, Thorsten 1 Bota, Constantin 1 Bundău, Olivia 1 Cao, Dengqing 1 Carpentieri, Mario 1 Caruntu, Bogdan 1 Cela, Arben 1 Céspedes, F. J. 1 Chai, Shouxia 1 Chen, Huatao 1 Chen, Junxiu ...and 276 more Authors all top 5 #### Cited in 78 Serials 15 Nonlinear Dynamics 14 Advances in Difference Equations 9 Applied Mathematical Modelling 8 Discrete Dynamics in Nature and Society 7 Applied Mathematics and Computation 6 Pattern Recognition 6 Abstract and Applied Analysis 5 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 4 Journal of Fluid Mechanics 4 Chaos, Solitons and Fractals 4 Applied Mathematics Letters 4 Mathematical Problems in Engineering 4 Journal of Inequalities and Applications 3 Computers & Mathematics with Applications 3 Journal of the Franklin Institute 3 Complexity 3 International Journal of Applied Mathematics and Computer Science 3 Journal of Nonlinear Science and Applications 2 Ukrainian Mathematical Journal 2 Journal of Optimization Theory and Applications 2 Journal of Vibration and Control 2 European Journal of Mechanics. B. 
Fluids 2 Communications in Nonlinear Science and Numerical Simulation 2 Journal of Applied Mathematics 2 Journal of Applied Mathematics and Computing 2 Mediterranean Journal of Mathematics 2 Applications and Applied Mathematics 2 Advances in Mathematical Physics 2 Journal of Applied Mathematics & Informatics 2 Journal of Function Spaces 2 International Journal of Applied and Computational Mathematics 1 Acta Mechanica 1 Applicable Analysis 1 Computers and Fluids 1 Journal of Computational Physics 1 Journal of Mathematical Analysis and Applications 1 Shock Waves 1 Advances in Mathematics 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Information Sciences 1 Journal of Statistical Planning and Inference 1 Kybernetika 1 Mathematics and Computers in Simulation 1 Mathematische Nachrichten 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Mathematical and Computer Modelling 1 Journal of Scientific Computing 1 Multidimensional Systems and Signal Processing 1 Numerical Algorithms 1 Celestial Mechanics and Dynamical Astronomy 1 Test 1 The Journal of Analysis 1 Physics of Fluids 1 Journal of Difference Equations and Applications 1 Differential Equations and Dynamical Systems 1 Soft Computing 1 PAA. Pattern Analysis and Applications 1 Chaos 1 Journal of Dynamical and Control Systems 1 Qualitative Theory of Dynamical Systems 1 International Journal of Nonlinear Sciences and Numerical Simulation 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 1 Discrete and Continuous Dynamical Systems. Series B 1 Sādhanā 1 Journal of Numerical Mathematics 1 Communications on Pure and Applied Analysis 1 Journal of Multiple-Valued Logic and Soft Computing 1 Boundary Value Problems 1 Complex Variables and Elliptic Equations 1 Journal of Zhejiang University. Science A 1 Journal of Biological Dynamics 1 Algorithms 1 International Journal of Differential Equations 1 Science China. Technological Sciences 1 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 1 Analysis and Mathematical Physics 1 Journal of Applied Analysis and Computation 1 Fractional Differential Calculus all top 5 #### Cited in 29 Fields 101 Ordinary differential equations (34-XX) 34 Biology and other natural sciences (92-XX) 24 Difference and functional equations (39-XX) 22 Dynamical systems and ergodic theory (37-XX) 17 Numerical analysis (65-XX) 16 Computer science (68-XX) 15 Systems theory; control (93-XX) 13 Partial differential equations (35-XX) 11 Global analysis, analysis on manifolds (58-XX) 11 Fluid mechanics (76-XX) 9 Real functions (26-XX) 6 Statistics (62-XX) 6 Mechanics of particles and systems (70-XX) 6 Mechanics of deformable solids (74-XX) 5 Calculus of variations and optimal control; optimization (49-XX) 4 Operator theory (47-XX) 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Information and communication theory, circuits (94-XX) 2 Special functions (33-XX) 2 Integral equations (45-XX) 2 Probability theory and stochastic processes (60-XX) 2 Operations research, mathematical programming (90-XX) 1 Combinatorics (05-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Functions of a complex variable (30-XX) 1 Potential theory (31-XX) 1 Approximations and expansions (41-XX) 1 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX)
http://shqp.xhrh.pw/diamond-chemical-structure.html
The vast majority of gems are minerals. Special interests include preparation of functional (co)polymers and investigation of their self-assembled nanostructures; understanding the self-organization process at surfaces and interfaces; development of novel responsive materials and non-conventional approaches for nano- and micropatterning of complex 2-D and 3-D structures; controlling wetting, adhesion and biofouling on polymer thin films. (Chemistry) a chemical formula indicating the proportion of each element present in a molecule: C6H12O6 is the molecular formula of sucrose whereas CH2O is its empirical formula. The new, regenerated skin is usually smoother and less wrinkled than the old skin. so if u are toking about itz molecular formula then itz just c in the elemental form. Synthetic diamond is a A. Diamond, for example, has the simplest chemical makeup. Therefore each C-atom forms four sigma bonds with neighbouring C-atoms. Diamond Structures Diamond lattice structure The diamond lattice (formed by the carbon atoms in a diamond crystal) consists of two interpenetrating face centered cubic Bravais lattices, displaced along the body diagonal of the cubic cell by one quarter the the length of the diagonal. The National Fire Protection Association (NFPA) defines an outdoor chemical storage building as "a prefabricated structure, manufactured primarily at a site other than the final location of the structure, and transported. Silicon and germanium crystallize with a diamond structure. The crystal structure information includes mineral name, specification, crystal chemical formula, space group, unit cell parameters, coordinates, thermal factors and occupancy of atomic positions as well as literature references on crystal structure determination. The structure of diamond. The structural unit of diamond consists of eight atoms, fundamentally arranged in a cube. You may examine models of partial diamond and graphite structures by clicking on the appropriate structure below. Diamond has no free electrons because they are all involved in bonding and is therefore a poor conductor of electricity. Formula and structure: The chemical formula of ammonia is NH 3, and its molar mass is 17. Each of these carbon atoms is then attached to three other carbon atoms (plus the original atom), and that pattern continues to form a single, giant molecule held together by covalent bonds. The model is validated by using experiments with chemical flames and numerical simulations of thermonuclear flames. Another is that the atoms form a rigid structure—each atom is connected to four others, forming a very regular network. Diamond is the only gem made of a single element: It is typically about 99. In the cubic form of boron nitride, alternately linked boron and nitrogen atoms form a tetrahedral bond network, exactly like carbon atoms do in diamond. Introduction to Materials Science, Chapter 13, Structure and Properties of Ceramics University of Tennessee, Dept. Structures of Metals What is a metal ? Metal Properties. This process is known as doping. Graphite is a mineral composed exclusively of the element carbon. 42 Å (in diamond they are separated by 1. This section describes how covalent bonds can lead to large linear ('1D') e. Crystal structure€ Diamond through reconstruction or chemical reaction. It is the hardest known substance, it is the greatest conductor of heat, it has the highest melting point of any substance (7362° F or 4090° C), and it has the highest refractive index of any natural mineral. 
The majority of its applications involve applying solid thin-film coatings to surfaces, but it is also used to produce high-purity bulk materials and powders, as well as fabricating composite materials via infiltration techniques. Almost all plant hallucinogens contain the element nitrogen and therefore belong to the large class of chemical compounds known as alkaloids. Although graphite and diamond may appear to have little in common with one another, the two minerals are actually polymorphs. Crystal Structure 5 Crystal Structure In the above, we have discussed the concept of crystal lattice. Carbyne is basically a chain of single carbon atoms, but having twice the tensile strength of graphene , and three times the tensile stiffness of diamond (1,2). This structure results physical properties like the natural hardness and thermal conductivity of diamond. Each atom within a molecule is its unique identity and, by some means, can be removed from the molecule. (APF = 10) Any chemical cartridge respirator with organic vapor cartridge(s)* (APF = 25) Any powered, air-purifying respirator with organic vapor cartridge(s)* (APF = 50) Any air-purifying, full-facepiece respirator (gas mask) with a chin-style, front- or back-mounted organic vapor canister. Within the last few decades, women have been buying more and more diamond jewelry for themselves. Carbon is a chemical element with the symbol C and atomic number 6. Metals also have a giant chemical structure, whether the metal is pure or an alloy. The chemical vapor deposition process functions by delivering precursor gases into a reaction chamber at ambient (room) temperature. Carbon is one of those elements that can have different physical properties depending on what the chemical structure is. The giant covalent structure of diamond. In the 1930s, scientists first began to use two categories to describe a diamond's chemical composition and atomic structure: type I and type II. A covalent crystal contains a three-dimensional network of covalent bonds, as illustrated by the structures of diamond, silicon dioxide, silicon carbide, and graphite. Spider Silk Chemical Structure. The objective is to familiarize the reader with the scientific and engineering aspects of diamond CVD, and to provide experiences researchers, scientists, and engineers in. The carbon atoms are arranged in a lattice, which is a. Structure of Diamond and Uses Structure: All the carbon atoms of Diamond are said to possess strong chemical bonds with that of the four other carbon atoms, thus making a perfect tetrahedron structure and on throughout the crystal. , Chelikowsky J. Diamond Characteristics, Structure and Property. We recommend you use a larger device to draw your structure. The arrangement of carbon atoms in diamonds makes them bond together strongly, while graphite atoms are held together with a weaker bond, creating a soft physical substance. The price of a diamond depends on its size (weight) and the quality (clarity, color, presence of inclusions). You can make a model of the molecular structure of diamond using toothpicks and hard candies. The local environment of each atom is identical in the two structures. 13) Once the DRF expression is known for carbon in diamond, from an isotopic delta- 15 substituted structure, we can apply this function to determine the deconvoluted dopant depth profile for 16 both nitrogen. Diamond has a giant covalent. 
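As a small illustration of the lattice description given earlier (two interpenetrating face-centered cubic lattices, offset by one quarter of the body diagonal), the Python sketch below lists the eight atoms of the conventional diamond cell in fractional and Cartesian coordinates. The lattice constant of 3.567 angstroms is the commonly quoted value for diamond and is an added assumption here, not a figure taken from this page.

```python
import numpy as np

# Fractional coordinates of the conventional (cubic) diamond cell:
# a face-centered cubic lattice plus a copy shifted by (1/4, 1/4, 1/4),
# i.e. one quarter of the way along the body diagonal.
fcc = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.5, 0.5],
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])
basis = np.vstack([fcc, fcc + 0.25])   # 8 carbon atoms per conventional cell

a = 3.567  # diamond lattice constant in angstroms (assumed, commonly quoted value)
for frac, cart in zip(basis, basis * a):
    print(frac, "->", cart)

# Sanity check: the nearest-neighbour C-C distance is a*sqrt(3)/4, about 1.54 angstroms.
print("C-C bond length:", a * np.sqrt(3) / 4)
```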
The basic explanation says that diamond is organized in a giant lattice structure with covalent bonds between carbon atoms. The chemical bonds in graphite are similar in strength to those found in diamond. Diamond has the highest hardness and thermal conductivity of any natural material, properties that are utilized in major industrial applications such as cutting and polishing tools. Analysis of structures shows that atoms can be arranged in a variety of ways, some of which are molecular while others are giant structures. It integrates a multitude of functions, which overcome the work with crystal structure data - in research and education as well as for publications and presentations. In the diagram some carbon atoms only seem to be forming two bonds (or even one bond), but that's not really the case. In fact, most diamonds that have been dated are much older than. The closest Au-Au separation is 288. For 3-D Structure of Fullerene Molecular Structure using Jsmol. Diamond Structures A Diamond is a clear transparent precious gem stone made totally of Carbon atoms (Chemical Composition 'C') crystallised in a cubic (isometric) arrangement which has been highly compressed over millions of years. Spider Silk Chemical Structure. But Apollo plans to get the cost down to $10/carat with high volume manufacturing. As a result, diamond is very hard and has a high melting point. Diamond is our outstanding molecular and crystal structure visualization software. Through partnership and collaboration, we're advancing the circular economy, extending the shelf life of food and paving safer roads. Each atom within a molecule is its unique identity and, by some means, can be removed from the molecule. Carbon has an electronic arrangement of 2,4. Conventional unit cell of the diamond structure: The underlying structure is fcc with a two-atomic basis. In the previous pages, some of the mechanisms that bond together the multitude of individual atoms or molecules of a solid material were discussed. This process repeats itself endlessly to replicate the crystal structure of the diamond seed crystal in three dimensions. Diamond is an exceptional thermal conductor - 4 times better than copper - which gives significance to diamonds being called 'ice'. Perfect Science Icons depict objects and symbols used in science and engineering, including Factory, Labs, and many more icons in png formats. Diamond is a polymorph of the element carbon. The crystal structure information includes mineral name, specification, crystal chemical formula, space group, unit cell parameters, coordinates, thermal factors and occupancy of atomic positions as well as literature references on crystal structure determination. To turn it into silicon dioxide, all you need to do is to modify the silicon structure by including some oxygen atoms. In contrast to natural diamonds, their synthetic counterparts are created in laboratory conditions. (1988) Diamond and Zinc-Blende Structure Semiconductors. The giant covalent structure of diamond. An impurity with fewer valence electrons (such as Al; see the periodic table) takes up space in the solid structure, but contributes fewer electrons to the valence band, thus generating an electron deficit (Figure 8). Three isotopes occur naturally, 12 C and 13 C being stable, while 14 C is a radionuclide , decaying with a half-life of about 5,730 years. However, it is helpful to outline and understand several of the underlying principles of silicone chemistry. 
cases the customer is selecting diamond jewelry as a gift. Gold | Au | CID 23985 - structure, chemical names, physical and chemical properties, classification, patents, literature, biological activities, safety/hazards. 25, 2006 J. Stay Current with the NEC 2020. Although they are composed of carbon atoms, diamond and graphite have different chemical and physical properties that arise according to the differences in their structures. 3 Giant Covalent structure DIAMOND • In diamond, all the electrons in the outer shell of each carbon atom (2. , and Boardman, Shelby J. With over 2000 known minerals, each has its own definite chemical composition. BNWT Mens ARMANI JEANS classic chino shorts Navy W36” RRP £149 Bargain £42. A second form called lonsdaleite with hexagonal symmetry is also found. Diamond is a form of the element carbon - see also graphite. Not part of the. It is nonmetallic and tetravalent —making four electrons available to form covalent chemical bonds. The water causes the hardening of concrete through a process called hydration. Strong chemical bonding forces exist within the layer planes, yet the bonding energy. carbon means graphite not diamond. Charcoal is 65-85% carbon, with the rest being made from ash and volatile chemicals, which break up any structures which would otherwise form, although it contains micro-crystals of graphite. It is the hardest material known to man and more or less inert - able to withstand the strongest and most corrosive of acids. At high pressures, formation of denser. Many people believe that diamonds are formed from the metamorphism of coal. Diamond is an excellent electrical insulator, Graphite is a good conductor of electricity. (Chemistry) a chemical formula indicating the proportion of each element present in a molecule: C6H12O6 is the molecular formula of sucrose whereas CH2O is its empirical formula. Diamond is a solid form of the element carbon with its atoms arranged in a crystal structure called diamond cubic. It can be written C (gr) but is usually written as just C. The hardness and density of diamonds can be explained by their crystal structure. Chemical Makeup. Volatile matter of the coal samples in this study area decreases from 29 to 22 percent through 244m (800ft) of section. The seeded growth is now one carbon atom thicker. Fact is, diamond is a very expensive, naturally occurring substance, whereas cubic zirconia. Graphite is a mineral composed exclusively of the element carbon. It is grown one carbon atom at a time in a customized CVD (chemical vapor deposition) process. Identify a chemical's health, physical, and environmental hazards. • Diamond has a very high melting and. The diamond carat refers to the mass of the diamond. These two very different minerals have exactly the same chemical formula (C), but the crystal structure of the two minerals is very different. All of the covalent bonds in diamond are identical. Today’s Internet technology is changing the diamond market, diamond manufacturers now have a direct link. In diamond each C-atom is sp3-hybridized. Its flexibility and structure also make it the leading candidate as the primary component of next-generation, ultra-high speed circuitry in everything from computers, to smartphones, to televisions. Silicon atoms form covalent bonds and can crystallize into a regular lattice. Carbyne is basically a chain of single carbon atoms, but having twice the tensile strength of graphene , and three times the tensile stiffness of diamond (1,2). 
Diamond * Colourless transparent substance with extra ordinary brilliance due to its high refractive index. This is one version of the Diamond, with slightly different definitions for each numbered category (0 through 4) than are in your handout. Lithium crystal structure image (space filling style). Each C atom forms four bonds, tetrahedrally arranged, to other C atoms, resulting in an open, but strongly bonded, 3D-structu. What is the Structure of Graphite? When you come across carbon as a reactant or electrode, carbon means graphite not diamond. Ceramic bonds are mixed, ionic and covalent, with a proportion that depends on the particular ceramics. 25, 2006 1 / 53. Atoms arranged in orderly repeating 3D array: crystalline. By partnering with the brightest minds in chemistry, we bridge the gap between the lab and the market for innovators and practitioners. Streak - Streak is the color of the mineral in powdered form. The following link leads to a 3-D representation of a diamond that you can manipulate. Best Answer: Impure amorphous carbon, which always bonds covalently and is most stable when bonded to 4 other carbon atoms. Physical and Chemical Properties: Sodalite is a deep, rich blue stone with white inclusions typically occurring in nepheline, syenites and related rocks. Sapphire is aluminium oxide in the purest form with no porosity or grain boundaries, making it theoretically dense. Develop great-looking scientific software faster with a collection of engineering icons. Chemical Research in Chinese Universities Synthesis, Structure Characterization and Biological Activity of Layered Vanadium Oxides [NH3(CH2)2NH(CH2)2NH3][V6O14] 2005 Issue 1005-9040. Our unique products, based on the proprietary MOLECULAR REBAR® technology, together with our team of world-class nanotechnology experts alter your battery's "DNA", unlocking its full. in experiments. Crystal Structure 5 Crystal Structure In the above, we have discussed the concept of crystal lattice. That idea continues to be the "how diamonds form" story in many science classrooms. While the rarity, beauty and high value of diamonds account for their positive symbolic meaning, these same factors have often spawned violence and bloodshed over control of lucrative diamond sources. Grown Diamond Corporation offers IGI & GCAL certified lab created diamonds (aka man made diamonds). Auburn, MA and Greater Boston Buick, Chevrolet, and GMC customers will be happy to hear that we have different options to help them buy the vehicle of their dreams. A chemical peel is a technique used to improve the appearance of the skin on the face, neck or hands. The union between the electron structures of atoms is known as the chemical bond. Learn more about carbon uses, the carbon atom, carbon properties, hydrocarbons, carbon structure, carbon fiber, carbon monoxide, your carbon footprint and other amazing carbon facts. is the Arthur E. Diamond has many unequaled qualities and is very unique among minerals. Still want to try? Try rotating the device so that it is in a landscape position. Join in creating a vision for Diamond Bar's development. Soot and graphite are also made up of carbon atoms and have the same chemical symbol, C. Sapphire is aluminium oxide in the purest form with no porosity or grain boundaries, making it theoretically dense. Diamond carbon is very strong, one of the hardest substances known to us, but graphite carbon is very soft and is known to be used as the “lead” in lead pencils. The high density of this arrangement makes. 
Diamonds are made of carbon atoms linked together in a lattice structure. Find diamond stock images in HD and millions of other royalty-free stock photos, illustrations and vectors in the Shutterstock collection. Carbon (from Latin: carbo "coal") is a chemical element with the symbol C and atomic number 6. Allotropes of Carbon Allotropes of carbon: a) Diamond, b) Graphite, c) Lonsdaleite, d) C60 (Buckminsterfullerene or buckyball), e) C540, f) C70, g) Amorphous carbon, and h) single-walled carbon nanotube, or buckytube. Diamond is the hardest known natural mineral, which makes it an excellent abrasive and makes it hold polish and luster extremely well. The orangey red is relevant to an Asia-based company while also drawing in the eye. The information in CAMEO Chemicals comes from a variety of data sources. Diamond is the hardest naturally occurring material known. The seeded growth is now one carbon atom thicker. Crystal Structure 5 Crystal Structure In the above, we have discussed the concept of crystal lattice. SUMMARY The present article describes a minimally invasive technique used for the restoration of loss of tooth structure caused by erosion of intrinsic etiology. One of the two atoms is sitting on the lattice point and the other one is shifted by$\frac{1}{4}\$ along each axes. The arrangement of carbon atoms in diamonds makes them bond together strongly, while graphite atoms are held together with a weaker bond, creating a soft physical substance. Covalent bonding is the key to the crystal structures of the metalloids. Professor Laurent Chapon, Physical Sciences Director at Diamond Light Source concludes, "Catalysis is estimated to be involved in 90% of all chemical processes and in the creation of 60% of the. Future geometric template. The stone’s name is derived from the Greek word adamas, which translates to “unconquerable. The reason is the small energy di erence between the 2s- and the 2p-state, so that it is easily possible to excite one electron from the 2s-state into the 2p-state. Ice Ih is the normal form of ice; ice Ic is formed by depositing vapor at very low temperatures (below 140°K). (From Grant & Hackh's Chemical Dictionary, 5th ed). The structure of diamond. Chemical structure of polysiloxanes Silicone additives, often also referred to as "silicones," can be used without understanding their basic underlying chemistry. Ch 00 - Chemical Safety. Diamond and graphite are examples of allotropes, where the same element forms two distinct crystalline forms. Diamond, with a very compact structure Graphite, showing its layered crystal structure In the diamond structure, each carbon atom is linked to four other ones in the form of a very compact three-dimensional network (covalent crystals), hence its extreme hardness and its property as an electric insulator. Quartz is one of the most common minerals in the Earth’s crust. This web site highlights areas of the chemical world and illustrates the structures behind the words. Crystal: Definition, Types, Structure & Properties Video. Black Diamond Structures™ is a global leader in nanotechnology with the mission to help manufacturers create the next generation of world-class batteries. The chemical compounds of living things are known as organic compounds because of their association with organisms and because they are carbon-containing compounds. Carbon allotropes that lack crystalline structure are amorphous, or without crystalline shape. 
Inthe diamond structure,each carbon atom forms four covalent bonds with four other carbon atoms to form a 3-dimensional tetrahedral structure, which continues throughout the structure. 2 Crystal Structures. The local environment of each atom is identical in the two structures. Diamond is the hardest naturally occurring material known. The closest Au-Au separation is 288. The system of carbon allotropes spans an astounding range of extremes, considering that they are all merely structural formations of the same element. Graphite is used as pencil lead and has a. Carbon (C) is the only element present. 53 g/cm3 for diamond. To complete a crystal structure, one needs to attach the basis (a fixed group of atoms) to each lattice point, i. Diamond is extremely strong owing to the structure of its carbon atoms, where each carbon atom has four neighbors joined to it with covalent bonds. Graphite is simply an allotrope of carbon, and therefore only contains carbon atoms in its chemical formula. Diamond is a form of carbon in which each carbon atom is joined to four other carbon atoms, forming a giant covalent structure. The carbon atoms of a diamond are connected in a very compact and structured way. 53 g/cm3 for diamond. To turn it into silicon dioxide, all you need to do is to modify the silicon structure by including some oxygen atoms. Diamond is made up of repeating units of carbon atoms joined to four other carbon atoms via the strongest chemical linkage, covalent bonds. Each atom joins four other atoms in regular tetrahedrons (the red lines show the bonding between atoms). Adobe Reader is required to open the documents. One reason is that the chemical bond between each carbon atom that makes up a diamond is extremely strong. Carbyne is basically a chain of single carbon atoms, but having twice the tensile strength of graphene , and three times the tensile stiffness of diamond (1,2). hexagons (see Figure 1). Nondirected bond, structures of very high coordination and density; high electrical conductivity; ductility Metallic Li, Na, Cu, Ta 0. Copper Atomic Structure. 42 Å (in diamond they are separated by 1. Minerals occur naturally in the earth’s crust and are defined as inorganic solids that have characteristic chemical composition and crystalline structures. These two very different minerals have exactly the same chemical formula (C), but the crystal structure of the two minerals is very different. A diamond is made entirely from carbon, the same element that makes up the graphite used in pencil leads. This science article is a stub. Affordable and used by thousands of scientists around the world. Structure of Diamond and Graphite The structure of diamond Carbon has an electronic arrangement of 2,4. However, it is helpful to outline and understand several of the underlying principles of silicone chemistry. Diamond has the highest hardness and thermal conductivity of any natural material, properties that are utilized in major industrial applications such as cutting and polishing tools. Cancel Anytime. This trend reflects their growing success in business, professional. The potassium feldspar group is composed of three mineral polymorphs, each having the same chemical composition, but slightly different crystal structures. The reason diamond is so hard has to do mainly with its crystal structure, which describes how the atoms pack. 
Diamond has a tetrahedral structure (each carbon atom is bonded to 3 other carbon atoms in a strong structure), graphite is lso bonded to 3 other atoms but they form layers which easily separate from each other (eg writing with a pencil). Fullerenes are a group of closed-cage carbon particles of which the archetype is buckminsterfullerene, C 60, whose structure is shown on the right. Polymorphs--same chemical composition but different crystal structures e. These two very different minerals have exactly the same chemical formula (C), but the crystal structure of the two minerals is very different. Our lab grown diamonds are made using both HPHT & CVD process. Silicon carbide, also known as carborundum, is a unique compound of carbon and silicon and is one of the hardest available materials. Each carbon atom is in a rigid tetrahedral network where it is equidistant from its neighboring carbon atoms. Because they are not discrete molecules - there is no 'diamond' molecule the same way there are molecules of caffeine, benzoic acid, citric acid, N,N-dimethylaminopyridine, etc. 2 above, is NaCl (sodium chloride). (a) How many carbon atoms are there per unit cell? (b) What is the coordination number for each carbon atoms? (C. The combination of favourable chemical, electrical, mechanical, optical, surface, thermal, and durability properties make sapphire a preferred material for high performance system and component designs. Diamond is a very valuable material, and people have been working for centuries to create them in laboratories and factories. The allotropes of carbon have very different chemical and physical properties. This gives the graphite crystals a hexagonal shape. A diamond consists of a giant three-dimensional network of carbon atoms. Diamond synchrotron light source enables study of sub-micron objects in great detail, with potential applications in processing of fine chemical powders. A few months back I'd seen a show on TV where they demonstrated how companies were now making "cultured" diamonds in the lab. As we reminisce, the births of my own children come to mind. The body-centred cubic ( bcc ) structure is the most stable form for lithium metal at 298 K (25°C). In contrast to natural diamonds, their synthetic counterparts are created in laboratory conditions. Extreme conditions. The atomic arrangement of a diamond is called a crystal structure. diamond and graphite B. is the Arthur E. There are other exotic allotropes of carbon (graphenes and fullerenes among them) but they are much less common. A crystal of diamond is one giant. Perfect Science Icons depict objects and symbols used in science and engineering, including Factory, Labs, and many more icons in png formats. In addition to being soft and slippery, graphite also has a much lower density than diamond. An irregular carbon structure used to fire smelting plants. it is just a morphed state of carbon. Each carbon atom in the layer is. It belongs to the emerging class of ribosomal disorders. Crystal structure€ Diamond through reconstruction or chemical reaction. Diamond is composed of the single element carbon, and it is the arrangement of the C atoms in the lattice that give diamond its amazing properties. The union between the electron structures of atoms is known as the chemical bond. In these solids the atoms are linked to each other by covalent bonds rather than by electrostatic forces or by delocalized valence electrons that work in metals almost like a "glue". 
Atoms arranged in orderly repeating 3D array: crystalline. Its tetrahedral, single-bonded structure brings the carbon atoms closer together, on average, than they are in graphite. Student Assignments. simple chemical nature of diamond allowed its chemistry to be determined very early on [2], while the high crystalline quality of many natural diamonds permitted the crystal structure of diamond to be determined in the pioneering X-ray diffraction studies [3]. You can make a model of the molecular structure of diamond using toothpicks and hard candies. General Notes. The different temperatures, such as 110, 120, and 130 °C for 6 h, were used for coating diamond in the high-pressure reactor respectively. Chemistry deals with such topics as the properties of individual atoms, how atoms form chemical bonds to create chemical compounds, the interactions of substances through inter-molecular forces that give matter its general properties, and the interactions between substances. The guide on nomenclature and graphic representation of chemical formulae has been prepared to reply to a number of questions from the European Pharmacopoeia Commission and users of the Ph. It integrates a multitude of functions, which overcome the work with crystal structure data - in research and education as well as for publications and presentations. Both diamond and graphite consist of carbon atoms bonded together in three-dimensional structures. Chemical bonding in the hardest substance on Earth 1. [email protected] Each ion is 4-coordinate and has local tetrahedral geometry. However, it is found in several other forms too. Common chemical compounds are also provided for many elements. The structure of fullerene is like in a cage shape due to which it looks like a football. The activated carbon-hydrogen species travels across the surface of the diamond seed until it finds an available carbon atom, and then attaches itself to this seed atom. Other names used for synthetic diamonds include: "lab. Molecular geometry refers to the spatial arrangement of atoms in a molecule and the chemical bonds that hold the atoms together. Structure influenced by crystal structure at and near. Although there are many differences between these two substances, the main difference between diamond and graphite is that diamond is made out of sp 3 hybridized carbon. Structure of Diamond and Graphite The structure of diamond Carbon has an electronic arrangement of 2,4. In this regard, these elements resemble nonmetals in their behavior. Greatest Hits by Larry Gatlin & The Gatlin Brothers (Cassette) NEW,14k Yellow Gold Fn Diamond Trio His Her Bridal Set Engagement Ring Wedding Band,Oval Kitchen Worktop Savers in Blue Gloss Finish Acrylic 3mm. Be able to list and describe the three types of chemical bonds found in living things. Thus, the author has presented here the atomic structure of graphene and has shown how the radii and bonding of carbon differ from that of benzene, although they both involve hexagons. To complete a crystal structure, one needs to attach the basis (a fixed group of atoms) to each lattice point, i. The chemical structure is repeating chain with alternating single and triple bonds ---. In the chamber is a heated substrate (seed material such as a sapphire or small diamond chip). 3 Giant Covalent structure DIAMOND • In diamond, all the electrons in the outer shell of each carbon atom (2. NEW - jPOWD Structure File from the American Mineralogist Crystal Structure Database is Present. 
The allotropes are known as diamond and graphite. A diamond consists of a giant three-dimensional network of carbon atoms. Carbon is one of those elements that can have different physical properties depending on what the chemical structure is. RDChemicals- The R&D Chemicals is a database of chemical compounds accessible over the Internet. The orangey red is relevant to an Asia-based company while also drawing in the eye. Diamond’s crystal structure is isometric, which means the carbon atoms are bonded in essentially the same way in all directions. Chemical drawing and publishing software for desktop, web and mobile. This physical property makes diamond useful for cutting tools, such as diamond-tipped glass cutters and oil. The structure and performance of the metallic coating on diamond surface were tested by Scanning Electron Microscopy (SEM), X-ray Diffraction (XRD) and diamond compressive strength instrument. Regarding UV and deep-UV regions, Chinese scientists have made remarkable contributions. The vast majority of gems are minerals. Fixed chemical structure Diamond has a hardness of 10 because it is the hardest of all the minerals. These properties are called Physical properties and Chemical properties:. Compare the structure of diamond and graphite, both composed of just carbon. Crystallography Open Database. , This piece has had all the excess chemicals and elements burned off and all that remains is the jagged and irregular carbon structure. The new skin is usually smoother and less wrinkled than the old skin. This means that the atoms are arranged in a repetitive pattern and are closely packed. It is when minerals have the same chemical composition but different crystal structures resulting in different minerals. Both diamond and graphite are allotropes of carbon. Other forms of carbon are amorphous they lack a regular structure. In the case of carbon, the atoms form either giant macromolecular structures (diamond and graphite) in which all of the atoms in the bulk structure are joined together by covalent bonds making giant molecules, or smaller molecules (buckminster fullerene) in which there are only discrete molecules made up of 60 carbons in a structure resembling. The chemical formula of diamond is C which is the chemical symbol for the element carbon. The difference between lab diamonds and diamond simulants is chemical composition. The diamond structure was one of the first crystal structures determined by X-ray diffraction, and revolutionised ideas about chemical bonds in solids. Chemical bonding in the hardest substance on Earth 1. A second form called lonsdaleite with hexagonal symmetry is also found. The structure and performance of the metallic coating on diamond surface were tested by Scanning Electron Microscopy (SEM), X-ray Diffraction (XRD) and diamond compressive strength instrument. In the picture of diamond above, each blue ball represents a carbon atom. Diamond and Related Materials is an international, interdisciplinary journal which publishes articles covering both basic and applied research on diamond materials and related materials. Choose from over a million free vectors, clipart graphics, vector art images, design templates, and illustrations created by artists worldwide!. Your healthcare provider may occasionally change your dose to make sure you get the best result. 
The ability of carbonate to produce diamond by itself implies that diamond could be a very common mineral in Earth’s lower mantle, where carbonates are abundant and pressures and temperatures. Synthetic diamonds are man-made materials that have the same chemical composition, crystal structure, optical properties and physical behavior as natural diamonds. The team dubbed this material 'diamondoid nitrogen', because of its striking similarity to the 10-carbon adamantane cage structure, which is the basic carbon sub-unit of diamond. As a result, diamond is the ultimate abrasive, whereas graphite is an excellent lubricant. in experiments. 99,CARDIGAN MEN'S SWEATER DIAMOND CASUAL DARK BLUE SLIM FIT TIGHT JERSEY GOLFINO,LUXE OH `DOR 100% Cashmere V Pullover Preppy Benjamin hummer pink 46-58 S-XL. The rigid structure, held together by strong covalent bonds, makes diamond very hard. 417, a high dispersion of 0. Element Titanium - Ti. It belongs to group 14 of the periodic table. Diamond has no free electrons because they are all involved in bonding and is therefore a poor conductor of electricity. Cubic Zirconia vs Diamond comparison. Its tetrahedral, single-bonded structure brings the carbon atoms closer together, on average, than they are in graphite. Springer Series in Solid-State Sciences, vol 75.
https://nkschuurman.com/Practical2.html
Last week you got acquainted with R. As a warm-up this afternoon, we will start with an exercise that functions as a recap of what you learned last time. After that, we will practice reading in external data files, saving data and results, and exploring and plotting our data. Further, we will practice finding and installing packages in R, and we will use these packages to perform some more advanced analyses in R. Note however that the aim of this afternoon is not necessarily to teach you to use these packages for these analyses, but to get you acquainted with using packages, and R, in general. Usually there are several available packages for certain analyses, and you may find that you prefer other packages than the ones we used today. This is what is nice about R: you can easily pick and choose, and mix and match what you like, as there are many resources available. On the other hand, this can be overwhelming! If you have any questions or suggestions, feel free to ask/suggest.

## Exercise 1: Let's see what you remember from last week!

### 1.1 Open R. Set your working directory to the folder of your choice, in which you will save all your work from the practical.

Your working directory is the base directory for your current R session. When you want to save something in R, R first automatically directs you to your working directory. Find out what your current working directory is by typing:

getwd()
## [1] "/home/noemi/Werk/Onderwijs/Noemi R cursus/practical2"

You can set your working directory using the GUI, by clicking File…Set…working directory. Alternatively, you can specify it in the following way in the R console, using the function setwd().

setwd("c:/documents/Rcodes/Practical2") # Be sure to specify the correct file path inside setwd for your PC

### 1.2 Open a new R-script. In your script, write code using the instructions below:

• First, add a comment in your R-script to indicate you are working on Exercise 1. You can indicate that something is a comment rather than code by using # before your comment.
• Make a character vector called vecchar with six elements (words or letters) in it.
• Make a numeric matrix called matnum with 2 columns and six rows of elements (numbers).
• Make a dataframe called dat_chanum out of vecchar and matnum with 3 columns and six rows of elements. Remember: dataframes and lists can contain data of mode character as well as data of mode numeric, while matrices and vectors can only contain 1 type of data.
• Make a list called list_alles that contains vecchar, matnum, and dat_chanum.

Now, run your code by selecting the code in your R-script, and pressing Ctrl-R (Windows), Ctrl-Enter (Linux), or Cmd-Enter (Mac). Call each of your objects to see what they look like. Inspect the mode of each of the four objects you made.

# Exercise 1
vecchar <- c("apple", "pear", "orange", "cherry", "peach", "kiwi")
matnum <- matrix(c(1,2,0,4,1,0,3,5,1,10,2,4), ncol = 2, nrow = 6, byrow = FALSE)
dat_chanum <- data.frame(vecchar, matnum)
list_alles <- list(vecchar, matnum, dat_chanum)
vecchar
matnum
dat_chanum
list_alles
mode(vecchar)
mode(matnum)
mode(dat_chanum)
mode(list_alles)

### 1.3 Save your R-script file as Practical2.R in your working directory.

You can use the standard Ctrl-s (Windows/Linux) or Cmd-s (Mac) or use the file menu in the GUI. It is a good idea to regularly save your R script throughout this practical.

### 1.4 Inspect your global environment and empty it. Check your global environment again to make sure it is empty.
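A side note before you empty everything: rm() can also remove a single object by name rather than clearing the whole environment. A minimal sketch, assuming the objects from exercise 1.2 are still in your global environment:

# remove only the matrix, then check which objects remain
rm(matnum)
ls()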
Clean your console by pressing Ctrl-L. Browse through your history by pressing the up arrow key in the R-console.

ls()
rm(list = ls())
ls()

### 1.5 R comes with a bunch of datasets. Use the function data() to see what data is available. Call the dataset ToothGrowth, and the dataset women. Obtain information on the datasets using ?ToothGrowth and ?women. Inspect the structure of the datasets.

data()
ToothGrowth
women
?ToothGrowth
?women
str(ToothGrowth)
str(women)

### 1.6 Calculate the means and standard deviations for each variable in the two datasets.

mean(ToothGrowth[,1]) ## Or alternatively mean(ToothGrowth$len)
mean(ToothGrowth[,2]) ## Or alternatively mean(ToothGrowth$supp)
mean(ToothGrowth[,3]) ## Or alternatively mean(ToothGrowth$dose)
sd(ToothGrowth[,1]) ## Or alternatively sd(ToothGrowth$len)
sd(ToothGrowth[,2]) ## Or alternatively sd(ToothGrowth$supp)
sd(ToothGrowth[,3]) ## Or alternatively sd(ToothGrowth$dose)
# Interestingly, the function mean does not work for factor variables, but the function sd does because it treats factors as a truly numeric variable (and it uses the factor levels to calculate the sd).
mean(women$height) ## Or alternatively mean(women[,1])
mean(women$weight) ## Or alternatively mean(women[,2])
sd(women$height) ## Or alternatively sd(women[,1])
sd(women$weight) ## Or alternatively sd(women[,2])
# Bonus points for using the apply function:
apply(ToothGrowth,2,mean) ## the apply function does not work here because the function mean does not work for our factor variable. Instead, you can use lapply or sapply, which are the same as the apply function, but then specifically for a list object. With these functions, you apply a function to each object in a list (and as you may remember, although dataframes look like matrices, they are secretly lists). lapply returns the calculated values in a list format, and sapply returns the values in a numeric format. Another demonstration of why it is handy to know about the different types of objects, and the modes of data, when you are working in R.
lapply(ToothGrowth, mean)
sapply(ToothGrowth, mean)
apply(women,2,mean) # for dataset women, all three apply functions work because although it is a dataframe, it contains only truly numeric variables, and in that sense behaves like a matrix.
lapply(women, mean)
sapply(women, mean)
apply(ToothGrowth, 2, sd)
sapply(ToothGrowth, sd)
lapply(ToothGrowth, sd)
lapply(women, sd)

### 1.7 Make a histogram of the tooth length variable in ToothGrowth, and a scatterplot of height and weight in the dataset women.

hist(ToothGrowth$len)
plot(women$height, women$weight)

### 1.8 Calculate the correlation between the height and average weights in dataset women using function cor.test(). Perform a regression analysis for the dataset ToothGrowth with length as a dependent variable, and dose and supplement type as predictors.

correlatie <- cor.test(women[,1], women[,2])
correlatie
regressie <- lm(ToothGrowth$len ~ 1 + ToothGrowth$supp + ToothGrowth$dose)
regressie
summary(regressie)

It is also possible to add an interaction to the model.

interactie <- lm(ToothGrowth$len ~ 1 + ToothGrowth$supp + ToothGrowth$dose + ToothGrowth$supp*ToothGrowth$dose)
interactie
summary(interactie)

## Exercise 2: Data Preparation in R.

In this exercise we will work with a dataset that we have to load into R. It is an adjusted version of the dataset that can be found here: http://spss.allenandunwin.com.s3-website-ap-southeast-2.amazonaws.com/data_files.html.
The data contains 12 variables: participant id, sex, the participant's main source of stress (Work, Family/Friends, Money/Finances, or Health/Illness), six items from an optimism questionnaire, a total score on a life satisfaction questionnaire (higher = more satisfied), a total score on a stress questionnaire (higher score = more stress), and a total score on a self-esteem questionnaire (higher score = more self-esteem). Unfortunately, our data is in two separate files. One part of the data is stored in the .Rdata file RegressionANOVAdata1.Rdata. The second part is stored in the RegressionANOVAdata2.sav file. We will have to load both files, and get them into one dataframe. Further, we need to calculate sum or mean scores for the optimism questionnaire, of which some items are reverse coded. When we are done, we will save the data in a separate file. In Exercise 3, we use the data for a more elaborate regression analysis (and we also obtain ANOVA statistics) in R.

### 2.A First, empty your global environment. Then get part 1 of the data into R with function load().

### empty and check global environment.
rm(list = ls())
ls()
## character(0)
### great, now we can start with a clean slate.

Load the .Rdata file with the function "load()". You specify the path to the file in between the round brackets. If the datafiles are saved in your working directory, you only have to specify the filename of the file. Note that file paths in R use forward slashes (or doubled backslashes), not single backslashes.

load("c:/practical2/data/RegressionANOVAdata1.Rdata") ## use the correct path to the datafile for your computer :)
### or if the data is in the working directory
load("RegressionANOVAdata1.Rdata")

Your data is now loaded.

### 2.B Check your global environment. Call the object you just imported into R by loading the datafile. Inspect the structure of the dataset with str(). Use the function head() on the dataset.

.Rdata files contain objects from R. Let's see what objects we loaded into our global environment when we loaded our datafile.

ls()
## [1] "datapart1"

Call datapart1 to see part 1 of our data. Use str() to inspect the structure of the data.

ls()
datapart1

You can use the function head() to only inspect the first few cases of your dataframe.

head(datapart1)
##    id     sex        mainsource tlifesat tpstress tslfest
## 2    9   MALES              Work       30       22      34
## 3  425 FEMALES Family or Friends       33       19      31
## 4  307   MALES              Work       33       31      40
## 5  440   MALES              Work       16       27      21
## 7  341 FEMALES    Money/Finances        5       39      18
## 8  300   MALES              Work       25       39      34

### 2.C Start by looking at the data file RegressionANOVAdata2.sav in SPSS. To read SPSS data into R, we need a package. Install and load package Hmisc. Then, load the second part of the data in file RegressionANOVAdata2.sav into R with the function spss.get().

• You can install a package using the buttons in the GUI: .. … Then you need to choose a mirror to download the package from, and pick the package you would like to install from a large alphabetical list.
• You can also install a package using code in the R console, like this:

install.packages("Hmisc")

If the install was successful, you will see this message at the end: "The downloaded source packages are in…"

Now we need to load the installed package in order to be able to use its functions.
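As an optional aside (not part of the original exercise): if you rerun your script later, you can avoid reinstalling every time by only installing the package when it is missing. A small sketch:

# install Hmisc only when it is not yet available, then proceed to load it below
if (!requireNamespace("Hmisc", quietly = TRUE)) {
  install.packages("Hmisc")
}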
library("Hmisc") ## Loading required package: grid ## Loading required package: lattice ## Loading required package: survival ## Loading required package: Formula ## Loading required package: ggplot2 ## ## Attaching package: 'Hmisc' ## ## The following objects are masked from 'package:base': ## ## format.pval, round.POSIXt, trunc.POSIXt, units You may see it also loads some other packages the Hmisc package needs. When it is loaded, we can use the function spss.get() to load our spss file, and transform it into an R dataframe. Note: This function is based on the function read.spss() in package foreign, which is a more well known function and package for importing spss data to R (when you google get spss file in r, you will most likely find package foreign before you find package Hmisc). However, spss.get() adds some helpful functionality, such as being able to read spss date variables and translating them to R data variables, and dealing better with variable labels. Assign the data you read in with spss.get() to an object called datapart2. #datapart2 <- spss.get(file="RegressionANOVAdata2.sav") #if your data is not saved in your working directory, specify the complete filepath for the file= argument. Call the second part of the data to look at it, inspect its structure with str(). #datapart2 #head(datapart2) #str(datapart2) ### 2.C Start by looking at the data file RegressionANOVAdata2.csv in a basic text editor or excel. Then, load the second part of the data in file RegressionANOVAdata2.csv into R with the function read.table(). ?read.table The help file describes what the function read.table does: “Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file.” That is, it is function that can be used to read for instance .csv, .txt, and .dat files, and puts it in a dataframe. In the arguments for the function you need to specify how the data was stored, for instance: - how are variables seperated from each other in the file (with a tab: sep=“”, or a space: sep=" “, or a comma: sep=”," or…) - if comma’s or dots are used to indicate decimal points - if there are variable names in the first line of the data file or not. Assign the data you read in to an object called datapart2. datapart2 <- read.table(file="RegressionANOVAdata2.csv", header=TRUE, sep="\t") #if your data is not saved in your working directory, specify the complete filepath for the file= argument. We specify the argument header = TRUE because the variable names are specified at the top of the datafile. sep="\t" because the data is tab delimited. Call the second part of the data to look at it, inspect its structure with str(). datapart2 head(datapart2) str(datapart2) ### 2.D Store datapart1 in an object called fulldata. Now add the data in datapart2 to fulldata, such that the variables in datapart2 are in columns 7 to 12 of fulldata. fulldata<-datapart1 fulldata[,7:12]<-datapart2 head(fulldata) ### 2.E Item 2, 4, and 6 from the optimism scale (op2,op4, and op6) are reversely scored: The items are scored from 1 to 5, and the higher the score, the lower the optimism. Make new items that are not reversely scored (such that a high score indicates high optimism), and put them in column 13, 14, and 15 of our dataframa fulldata. fulldata[,13]<- 6 - fulldata$op2 fulldata[,14]<- 6 - fulldata$op4 fulldata[,15]<- 6 - fulldata$op6 The names of our newly made variables are V13, V14 and V15. We can use function colnames (stands for column names) to change the names. 
colnames(fulldata)[13:15] <- c("op2R", "op4R", "op6R")

### 2.F Use apply and the function mean or sum to calculate sum or mean scores across the optimism items for each participant. Put the scores in the 16th column of fulldata.

First, select the variables we want to sum from the dataset. We can store them in an object called "items". Then use the apply function to sum over each row of the object items.

items <- fulldata[,c(7,9,11,13:15)]
fulldata[,16] <- apply(items, MARGIN=1, FUN=sum) ## or for mean scores rather than sum scores: apply(items, MARGIN=1, FUN=mean)

Alternatively, you can do it in one go like this:

fulldata[,16] <- apply(fulldata[,c(7,9,11,13:15)], MARGIN=1, FUN=sum) ## or for mean scores rather than sum scores: apply(fulldata[,c(7,9,11,13:15)], MARGIN=1, FUN=mean)

We can give our new variable a suitable name using colnames()

colnames(fulldata)[16] <- c("sumOpt")

### 2.G Save your full dataset in an .Rdata file using save(), and in a .csv or .txt file using write.table().

### In an .Rdata file you can easily save many objects you made in R.
## to save just fulldata you would do:
save(fulldata, file="DataforExercise3.Rdata")
### If you want to save your file somewhere other than your working directory, specify the complete path, including the filename. Like this: file = "c:/practical2/data/DataExercise2.Rdata".
## to save fulldata, datapart1, and datapart2
save(fulldata, datapart1, datapart2, file="DataforExercise3.Rdata")

You can load this file into R like we did in exercise 2.A. Now, to save it as a .txt or .csv:

write.table(fulldata, file="DataforExercise3.csv", sep="\t", row.names=FALSE, col.names=TRUE) # I chose to make a tab-delimited file, but you can choose anything to separate the variables, for example, sep=";".

You now have successfully organized your data in R. We will analyse this dataset in Exercises 3 and 4.

## Exercise 3: A Regression Analysis in R.

In this exercise we will work with a dataset that we have to load into R. It is an adjusted version of the dataset that can be found here: http://spss.allenandunwin.com.s3-website-ap-southeast-2.amazonaws.com/data_files.html. The data contains 16 variables:
• participant id
• sex
• the participant's main source of stress (Work, Family/Friends, Money/Finances, or Health/Illness)
• six items from an optimism questionnaire and three reversed items from the optimism questionnaire
• a total score on a life satisfaction questionnaire (higher = more satisfied)
• a total score on a stress questionnaire (higher score = more stress)
• a total score on a self-esteem questionnaire (higher score = more self-esteem)
• a total score on an optimism questionnaire (higher score = more optimism)

We will perform two regression analyses on the data to find out if gender moderates the relationship between life satisfaction and optimism. We will also be checking some assumptions.

• If you did Exercise 2, load one of the datafiles you made in exercise 2G: the file "DataforExercise3.csv" using read.table(), or the file "DataforExercise3.Rdata" using load().
• If you did not do Exercise 2, open and inspect the file DataRegression.csv in a text editor or in Excel. Then load it using function read.table().

?read.table

The help file describes what the function read.table does: "Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file." That is, it is a function that can be used to read for instance .csv, .txt, and .dat files, and puts the contents in a dataframe.
In the arguments for the function you need to specify how the data was stored, for instance:
- how variables are separated from each other in the file (with a tab: sep="\t", or a space: sep=" ", or a comma: sep="," or…)
- if commas or dots are used to indicate decimal points
- if there are variable names in the first line of the data file or not.

Assign the data you read in to an object called fulldataset.

fulldataset <- read.table(file="DataRegression.csv", header=TRUE, sep="\t") #if your data is not saved in your working directory, specify the complete filepath for the file= argument.

We specify the argument header = TRUE because the variable names are specified at the top of the datafile, and sep="\t" because the data is tab delimited.

You can also use the function head() to only inspect the first few cases of your dataframe.

head(fulldataset)
##    id     sex        mainsource tlifesat tpstress tslfest op1 op2 op3 op4
## 1    9   MALES              Work       30       22      34   2   3   4   3
## 2  425 FEMALES Family or Friends       33       19      31   3   1   3   3
## 3  307   MALES              Work       33       31      40   3   1   5   3
## 4  440   MALES              Work       16       27      21   3   2   3   2
## 5  341 FEMALES    Money/Finances        5       39      18   3   5   1   4
## 6  300   MALES              Work       25       39      34   4   1   3   1
##   op5 op6 op2R op4R op6R sumOpt
## 1   5   4    3    3    2     19
## 2   3   4    5    3    2     19
## 3   5   1    5    3    5     26
## 4   1   3    4    4    3     18
## 5   1   4    1    2    2     10
## 6   4   2    5    5    4     25

### 3.B We do not need all the separate items of the optimism scale for our analyses. Make a new object called datareg with all variables except the individual optimism items (op1 to op6, and op2R, op4R, and op6R). Inspect the structure of the data with str(). Use summary() on the data to obtain some descriptives.

datareg <- fulldataset[,-c(7:15)]
str(datareg)
summary(datareg)
## Note that for the sumscore variables there are some missing data indicated with "NA". These will be listwise deleted automatically when we use these variables for the regression analysis in R.

### 3.C We want to find out if sex and optimism predict life satisfaction, but first, try to explore the data a bit. Some things you can do:

• Make a histogram for the dependent variable using hist(). Give the plot the main title "Histogram of Life Satisfaction".
• Get the correlation matrix for all the continuous variables using cor().
• Obtain the average life satisfaction for men, and for women, using mean().
• Make a scatterplot of life satisfaction and optimism, for only the women, using plot(). Use the argument col="red" to plot in red. Use the argument xlab to label the x axis "Optimism".
• Add the points for men to the plot using points(). Use the argument col="blue" to plot in blue, and pch=3 to change the shape of the points.

hist(datareg[,"tlifesat"], main="Histogram of Life Satisfaction")
cor(datareg[,4:7], use="pairwise.complete.obs") # What happens when you do not specify the second argument, use="pairwise.complete.obs"?
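To see the answer to that question for yourself, compare the call above with a plain cor() call. By default cor() uses use="everything", so variables with missing values produce NA correlations; a small sketch:

# default behaviour: missing values propagate and the affected correlations become NA
cor(datareg[,4:7])
# pairwise.complete.obs uses, for each pair of variables, all cases observed on both
cor(datareg[,4:7], use="pairwise.complete.obs")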
#make two filter variables, one to select women and one to select men:
filtf = datareg[,2]=="FEMALES"
filtm = filtf==FALSE
#use the filter variables to get the life satisfaction means for women and men
mean(datareg[filtf,"tlifesat"], na.rm=TRUE) #na.rm = TRUE to not use the missings
mean(datareg[filtm,"tlifesat"], na.rm=TRUE) #na.rm = TRUE to not use the missings
#use the filter variables to make a scatterplot for women
plot(datareg[filtf,"sumOpt"], datareg[filtf,"tlifesat"], col="red", xlab="Optimism", ylab="Life Satisfaction")
# Then add the points for men
points(datareg[filtm,"sumOpt"], datareg[filtm,"tlifesat"], col="blue", pch=3)

### 3.D Fit a regression model with life satisfaction as the dependent variable, and sex, optimism and an interaction term for sex and optimism as predictors. Call your regression analysis reg1. Use summary() to get some more results, such as p-values for the regression coefficients and R-squared. Interpret the results.

reg1 <- lm(tlifesat ~ 1 + sex + sumOpt + sex*sumOpt, data=datareg)
summary(reg1)

### 3.E We probably should have checked some assumptions before we interpreted the results… We will do this right now. First, we will make some plots to see if our residuals are normally distributed, and to evaluate if there is heteroskedasticity.

• Use plot() on your regression analysis to get some diagnostic plots.
• Obtain the predicted values and residuals using predict() and residuals() on the regression analysis.
• Make a histogram for the residuals.
• Use the function shapiro.test() on the residuals to test if the assumption of normally distributed residuals is violated.

plot(reg1)
# The first plot we see shows a plot of our predicted values and the residuals. The plot should look like a pretty much evenly distributed blob of points. It also labels some potential outliers.
# Press Enter to go to the next plot. This is a QQ plot for the residuals; if they are normally distributed they should fall on a straight line. It again labels some potential outliers.
# Disregard the third plot.
# The fourth plot can be used for outlier diagnostics, and gives an overview of the standardized residuals and leverage values.
resids = residuals(reg1)
pred = predict(reg1)
hist(resids)
shapiro.test(resids)
## we can also remake the basic heteroskedasticity plot ourselves, quite simply:
plot(x=pred, y=resids)

## Exercise 4: A Factorial ANOVA in R.

In this exercise we will work with a dataset that we have to load into R. It is an adjusted version of the dataset that can be found here: http://spss.allenandunwin.com.s3-website-ap-southeast-2.amazonaws.com/data_files.html. The data contains 16 variables:
• participant id
• sex
• the participant's main source of stress (Work, Family/Friends, Money/Finances, or Health/Illness)
• six items from an optimism questionnaire and three reversed items from the optimism questionnaire
• a total score on a life satisfaction questionnaire (higher = more satisfied)
• a total score on a stress questionnaire (higher score = more stress)
• a total score on a self-esteem questionnaire (higher score = more self-esteem)
• a total score on an optimism questionnaire (higher score = more optimism)

We will perform a factorial ANOVA to find out if and how experienced stress is related to the type of main source of stress, and to gender.

• If you did Exercise 2, load one of the datafiles you made in exercise 2G: the file "DataforExercise3.csv" using read.table(), or the file "DataforExercise3.Rdata" using load().
• If you did not do Exercise 2, open and inspect the file DataRegression.csv in a text editor or in Excel. Then load it using function read.table().

?read.table

The help file describes what the function read.table does: "Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file." That is, it is a function that can be used to read for instance .csv, .txt, and .dat files, and puts the contents in a dataframe. In the arguments for the function you need to specify how the data was stored, for instance:
- how variables are separated from each other in the file (with a tab: sep="\t", or a space: sep=" ", or a comma: sep="," or…)
- if commas or dots are used to indicate decimal points
- if there are variable names in the first line of the data file or not.

Assign the data you read in to an object called fulldataset.

fulldataset <- read.table(file="DataRegression.csv", header=TRUE, sep="\t") #if your data is not saved in your working directory, specify the complete filepath for the file= argument.

We specify the argument header = TRUE because the variable names are specified at the top of the datafile, and sep="\t" because the data is tab delimited.

You can also use the function head() to only inspect the first few cases of your dataframe.

head(fulldataset)
##    id     sex        mainsource tlifesat tpstress tslfest op1 op2 op3 op4
## 1    9   MALES              Work       30       22      34   2   3   4   3
## 2  425 FEMALES Family or Friends       33       19      31   3   1   3   3
## 3  307   MALES              Work       33       31      40   3   1   5   3
## 4  440   MALES              Work       16       27      21   3   2   3   2
## 5  341 FEMALES    Money/Finances        5       39      18   3   5   1   4
## 6  300   MALES              Work       25       39      34   4   1   3   1
##   op5 op6 op2R op4R op6R sumOpt
## 1   5   4    3    3    2     19
## 2   3   4    5    3    2     19
## 3   5   1    5    3    5     26
## 4   1   3    4    4    3     18
## 5   1   4    1    2    2     10
## 6   4   2    5    5    4     25

### 4.B We do not need all the separate items of the optimism scale for our analyses. Make a new object called datareg with all variables except the individual optimism items (op1 to op6, and op2R, op4R, and op6R). Inspect the structure of the data with str(). Use summary() on the data to obtain some descriptives.

datareg <- fulldataset[,-c(7:15)]
str(datareg)
summary(datareg)
## Note that for the sumscore variables there are some missing data indicated with "NA". These will be listwise deleted automatically when we use these variables for the analyses in R.

### 4.C We want to find out if sex and main source of stress predict experienced stress, but first, try to explore the data a bit. Some things you can do:

• Make a histogram for the dependent variable using hist(). Give the plot the main title "Histogram of Stress".
• Obtain the average stress level for men, and for women, using mean().
• Make boxplots of stress per main source of stress, for only the women, using plot(). Use the argument col="red" to plot in red. Use the argument xlab to label the x axis "main source of stress".
• Make boxplots of stress per main source of stress, for only the men, using plot(). Use the argument col="blue" to plot in blue. Use the argument xlab to label the x axis "main source of stress".
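Before turning to the ANOVA itself, it can also help to see how many observations fall in each combination of sex and main source of stress. This is not part of the original exercise, but a quick sketch using table():

# cross-tabulate the two factors to inspect the cell sizes of the factorial design
table(datareg$sex, datareg$mainsource)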
hist(datareg[,"tpstress"], main="Histogram of Stress") #make two filter variables, one to select men and one to select women: filtf = datareg[,2]=="FEMALES" filtm = filtf==FALSE #use the filter variables to get the life satisfaction means for women and men mean(datareg[filtf,"tlifesat"], na.rm=TRUE) #na.rm = TRUE to not use the missings mean(datareg[filtm,"tlifesat"], na.rm=TRUE) #na.rm = TRUE to not use the missings par(mfrow=c(1,2)) ##par() can be used to set all kind of options (graphical parameters) in advance for making plots. Here we use it to specify mfrow, which is used to determine how many rows and columns a plot should have. Here we say 1 row, 2 columns. In this way, we can make two plots next to each other. Want to know more? Search for R: plotting with par() in google. #use the filter variables to make a scatterplot for women plot(datareg[filtf,"mainsource"],datareg[filtf,"tpstress"],col="red", xlab= "Main source of stress", ylab="Stress") #use the filter variables to make a scatterplot for men plot(datareg[filtm,"mainsource"],datareg[filtm,"tpstress"],col="blue", xlab= "Main source of stress", ylab="Stress") ### 4.D I know I said we are going to do an ANOVA, but actually we will first do a regression. We can use the results for the regression to get ANOVA statistics. To do this, do the following: • First check if sex and main source of stress are really coded as factors with str(). It would be strange to treat our categorical variable main source of stress as a numerical variable. • Fit a regression model with stress as the dependent variable, and sex, and main source of stress as predictors. Call your regression analysis reg_ano. • Use summary() to get some more results, such as p-values for the regression coefficients and R-squared. Note that R automatically made dummy variables for the factor predictor variables, the reference category is ‘Family/friends’ for main source of stress, and females’ for sex. Interpret the (regression) results. reg_ano <- lm(tpstress ~ 1+ sex + mainsource, data=datareg) summary(reg_ano) We probably should have checked some assumptions before we interpreted the results… In Exercise 3.E we show how to check some model assumptions for the regression analysis: If you would like to check them you can use the same techniques presented there. ### 4.E Get the ANOVA results by using fucntion anova() on the regression analysis. Also get the ANOVA results with aov Did you know that SPSS actually performs a regression analysis in the background when you perform an ANOVA model in the general linear model menu? We are basically doing the same thing, but openly. We can get the overall ANOVA results using the function anova() on the regression model. anova(reg_ano) Alternatively, it is possible to immediately fit an ANOVA. It works the same way as the regression analysis, but instead you use function aov(). anova1<-aov(tpstress ~ 1+ sex + mainsource, data=datareg ) summary(anova1) Interpret the results. ### 4.F Do Tukey corrected post hoc tests for main sources of stress by using function TukeyHSD() on our oav analysis. We didn’t specify any specific hypotheses about the differences in the means of stress, so we want to do post hoc tests to see if there are any paiwise differences in stress for different main sources of stress. We can do this as follows with the function TukeyHSD(). TukeyHSD(anova1) ### 4.G Specify contrasts and change the dummy coding for the main source of stress in the regression. 
Health is often considered most important to people's happiness. If this is the case, we may expect that stress is higher when the main source of stress is health/illness related than for the other categories. Use the function C() to specify contrasts (not to be confused with the vector function c()) to compare each of the other main sources of stress to health/illness (basically, change the reference group in the dummy coding). Use another planned contrast to compare health/illness to the other three categories together.

In R, you specify contrasts by altering the factor variable in your dataframe. The standard 'treatment' contrast can be used for a standard dummy coding, and we can change the reference category with the argument base=... Check the order of the levels of your factor variable with the function levels(), and the currently specified contrasts for the factor variable with contrasts(). Then use the number of the element that contains your reference of choice to specify the base argument. Use the contrast function in your regression analysis instead of the earlier predictor mainsource to perform the analysis with a different dummy coding.

levels(datareg$mainsource)
contrasts(datareg$mainsource)
reg_ano2 <- lm(tpstress ~ 1 + sex + C(datareg$mainsource, contr=contr.treatment, base=2), data=datareg)
summary(reg_ano2)
# Note: if you want to change the factor in your dataset to have a certain contrast (not just for one analysis), you should change the factor in the dataset as follows:
datareg$mainsource <- C(datareg$mainsource, contr=contr.treatment, base=2)
## check the contrast set for your factor variable
contrasts(datareg$mainsource)

Interpret the results.

• Make a planned contrast to compare health/illness to the other three categories together using C().

To do a contrast that is not a default in R, we need to specify our own 'contrast matrix'. This is a matrix with contrast codings, with a specific contrast in each column.

# We only want to specify one contrast, so our matrix will have 1 column (so that it can actually be considered a vector), and a row for each level of our categorical predictor variable. Remember, we want to compare the second level to all the other levels combined. To specify an orthogonal contrast we want our contrast codings to sum to zero, and the categories that should be pooled together should be assigned the same number.
contrast_mat <- matrix(c(1,-3,1,1),4,1)
## now use this matrix in the C() function. We should also specify in the C() function that we want only 1 contrast, with the argument how.many; the default is one less than the number of levels.
reg_ano3 <- lm(tpstress ~ 1 + sex + C(datareg$mainsource, contr=contrast_mat, how.many=1), data=datareg)
summary(reg_ano3)

Interpret the results.

## Exercise 5: SEM in R with package lavaan.

To do SEM analyses in R we will use package lavaan. Lavaan is not the only SEM package available in R. There is also package 'sem', and package 'OpenMx' (and there may be even more packages available). Lavaan is in my opinion the most user-friendly, so we will use this today. It aims to provide an alternative to Mplus, can mimic Mplus, and is continuously being developed. You may like to try the other packages some time as well - OpenMx is known for having a quite steep learning curve, but once you know it you can apparently fit almost any model with it. To start, read a bit about lavaan on its website: http://lavaan.ugent.be/. The website also has a nice tutorial you can follow.
Note also that there are some things you can't do with lavaan yet (from http://lavaan.ugent.be/tutorial/before.html): "The lavaan package is not finished yet. But it is already very useful for most users, or so we hope. However, some important features that are currently NOT available in lavaan are:
• support for hierarchical/multilevel datasets (multilevel cfa, multilevel sem)
• support for discrete latent variables (mixture models, latent classes)
• Bayesian estimation
We hope to add these features in the next (two?) year(s) or so."

We will start by installing the package, loading data, and fitting a very basic confirmatory factor analysis on the continuous scores of 72 boys on 6 measures of certain cognitive skills, just to get you started. If you want to practice with more difficult models, you can practice by following the lavaan tutorial.

### 5.A Install and load package lavaan.
• You can install a package using the buttons in the gui: .. … Then you need to choose a mirror to download the package from, and pick the package you would like to install from a large alphabetical list.
• You can also install a package using code in the R console, like this:

install.packages("lavaan")

If the install was successful, you will see this message at the end: "The downloaded source packages are in…" Now we need to load the installed package in order to be able to use its functions.

library("lavaan")
## This is lavaan 0.5-20
## lavaan is BETA software! Please report any bugs.

### 5.B Load the data file Boys.dat into R by following the instructions below.

First, open and inspect the file Boys.dat in a text editor or in Excel. This file Boys.dat contains the scores of 72 boys on 6 measures of cognitive skills. Note that the variable names are not included at the top of the file. In order, the six variables are:
• Visual perception scores
• Scores on a test of spatial visualization
• Scores on a test of spatial orientation
• Paragraph comprehension scores
• Sentence completion scores
• Word meaning test scores

Load the file Boys.dat into R using function read.table().

?read.table

The help file describes what the function read.table() does: "Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file." That is, it is a function that can be used to read for instance .csv, .txt, and .dat files, and put the contents in a dataframe. In the arguments for the function you need to specify how the data was stored, for instance:
- how variables are separated from each other in the file (with a tab: sep="\t", a space: sep=" ", or a comma: sep=",", or …)
- whether commas or dots are used to indicate decimal points
- whether or not there are variable names in the first line of the data file.

Assign the data you read in to an object called boysdata.

boysdata <- read.table(file="Boys.dat", header=FALSE, sep="\t") # if your data is not saved in your working directory, specify the complete filepath for the file= argument.

We specify the argument header = FALSE because the variable names are not specified at the top of the datafile, and sep="\t" because the data is tab delimited. The names for the variables in our dataframe are V1 to V6. Give them more appropriate names using function colnames().

colnames(boysdata) <- c("visper","spatvis", "spator", "parcom", "sencompl", "wordmean")

### 5.C Specify a CFA lavaan model.
It is assumed that the six variables in boysdata measure two factors, that is, Spatial Ability, which is measured by the first three variables, and Verbal Ability, which is measured by the last three variables. Specify the two-factor model in lavaan. Use the lavaan tutorial: http://lavaan.ugent.be/tutorial/syntax1.html to see how the syntax works.

We will later use function cfa() or sem() to fit the model. For these functions, lavaan will automatically scale the model for you. Further, by default, lavaan will add variances for all observed and latent variables, and covariances between the latent variables, to the model, so you do not have to specify these in the model. That means that if you do not want those parameters included in the model, you will have to restrict these parameters to zero (we will try restricting parameters later). Alternatively, you can use function lavaan(); then you will have to specify the full model, exactly like you want it, yourself. Call your lavaan model object boysmodel1.

boysmodel1 <- '
### Two factors, spatial and verbal
spatial =~ visper + spatvis + spator
verbal =~ parcom + sencompl + wordmean
'

### 5.D Fit the model using function cfa() or sem().

Look at the many arguments you can specify for functions cfa() and sem() by looking up the functions' help files. Currently these functions do basically the same thing, so it does not matter which one you use. Fit your model with lavaan using function sem() or cfa(). Call your sem analysis cfa1.

cfa1 <- sem(model=boysmodel1, data=boysdata)

cfa1 is now an object of the lavaan class. Look at all the things you can obtain from lavaan objects in the help files by calling ?lavaan-class, under the heading 'methods'. Here you will find all kinds of functions you can use on the lavaan object. Look at the summary() function for the lavaan class object at the bottom. Inspect the results with summary(). Look at what results lavaan returns by default and interpret the results. How does lavaan automatically scale the model - in the factor loadings, or in the factor variances? Does the two-factor model fit the data? Are the two factors correlated?

summary(cfa1)

Now, use summary() and specify an argument to obtain fit measures with your results.

summary(cfa1, fit.measures=TRUE)

Does the two-factor model fit the data?

### 5.E Mimic Mplus.

Lavaan does many things in the same way as Mplus, but not always. You can specify that you want lavaan to run exactly like Mplus - if they figured out how to do that for your type of model - by specifying mimic="Mplus" in the sem() or cfa() function. Try it.

cfa1 <- sem(model=boysmodel1, data=boysdata, mimic="Mplus")
summary(cfa1)

### 5.F Changing the way we scale the model by freeing and fixing parameters in lavaan.

Lavaan automatically scaled our model by fixing one factor loading to 1 for each factor. However, we may want to scale in the factor variances instead. We can do this by fixing the factor variances to one, and freeing the factor loadings. Use the lavaan tutorial: http://lavaan.ugent.be/tutorial/syntax1.html to see how to do more with the model syntax. Change the model so scaling is done in the factor variances. Call your model boysmodel2.

boysmodel2 <- '
### Two factors, spatial and verbal
spatial =~ NA*visper + spatvis + spator
verbal =~ NA*parcom + sencompl + wordmean
spatial ~~ 1*spatial # variance spatial
verbal ~~ 1*verbal # variance verbal
'
cfa2 <- sem(model=boysmodel2, data=boysdata)
summary(cfa2)

## Exercise 6: Multilevel modeling in R with package lme4.

To do multilevel modeling in R we will use package lme4.
Lme4 is not the only multilevel modeling package available in R (there are quite a few), but it is one of the most popular packages. One popular alternative is nlme. We will start by installing the package and loading data. After that, we will fit a basic multilevel model on the data.

### 6.A Install and load package lme4.
• You can install a package using the buttons in the gui: .. … Then you need to choose a mirror to download the package from, and pick the package you would like to install from a large alphabetical list.
• You can also install a package using code in the R console, like this:

install.packages("lme4")

If the install was successful, you will see this message at the end: "The downloaded source packages are in…" Now we need to load the installed package in order to be able to use its functions.

library("lme4")
## Loading required package: Matrix

### 6.B Load the data file popular.dat into R by following the instructions below.

First, open and inspect the file popular.dat in a text editor or in Excel. This file popular.dat is data from chapter 2 of Joop Hox's book on multilevel modeling, "Multilevel Analysis: Techniques and Applications". It contains simulated data for 2000 pupils in 100 classes. It contains the following variables:
• pupil number
• class
• extraversion score for each pupil
• sex for each pupil
• teacher experience in each class
• popularity ratings for the pupil by their peers
• popularity rating for the pupil by their teacher
• standardized scores for the last five variables presented above
• centered scores for extraversion, teacher experience, and sex.

Load the file popular.dat into R using function read.table().

?read.table

The help file describes what the function read.table() does: "Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file." That is, it is a function that can be used to read for instance .csv, .txt, and .dat files, and put the contents in a dataframe. In the arguments for the function you need to specify how the data was stored, for instance:
- how variables are separated from each other in the file (with a tab: sep="\t", a space: sep=" ", or a comma: sep=",", or …)
- whether commas or dots are used to indicate decimal points
- whether or not there are variable names in the first line of the data file.

Assign the data you read in to an object called popudat.

popudat <- read.table(file="popular.dat", header=TRUE, sep=";") # if your data is not saved in your working directory, specify the complete filepath for the file= argument.

We specify the argument header = TRUE because the variable names are specified at the top of the datafile, and sep=";" because the data is delimited with the symbol ;. You can use the function head() to inspect only the first few cases of your dataframe.
head(popudat)
## pupil class extrav sex texp popular popteach Zextrav Zsex
## 1 1 1 5 girl 24 6.3 6 -0.1703149 0.9888125
## 2 2 1 7 boy 24 4.9 5 1.4140098 -1.0108084
## 3 3 1 4 girl 24 5.3 6 -0.9624772 0.9888125
## 4 4 1 3 girl 24 4.7 5 -1.7546396 0.9888125
## 5 5 1 5 girl 24 6.0 6 -0.1703149 0.9888125
## 6 6 1 4 boy 24 4.7 5 -0.9624772 -1.0108084
## Ztexp Zpopular Zpopteach Cextrav Ctexp Csex
## 1 1.486153 0.8850133 0.66905609 -0.215 9.737 0.5
## 2 1.486153 -0.1276291 -0.04308451 1.785 9.737 -0.5
## 3 1.486153 0.1616973 0.66905609 -1.215 9.737 0.5
## 4 1.486153 -0.2722923 -0.04308451 -2.215 9.737 0.5
## 5 1.486153 0.6680185 0.66905609 -0.215 9.737 0.5
## 6 1.486153 -0.2722923 -0.04308451 -1.215 9.737 -0.5

### 6.C Specify a random intercept model for the peer popularity ratings of the pupils. Call your multilevel model object randomint.

#randomint <-
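The worked answer for this step is not present in the text above (the lavaan instructions from Exercise 5 were pasted here instead), so below is a minimal sketch, an assumption about what 6.C and the subsequent model fit might look like with lme4. The variable names popular and class come from the data shown above; whether the original exercise intended REML or ML estimation, or additional predictors, is not known.

# Sketch (assumption): random intercept model for peer popularity, pupils nested in classes
randomint <- lmer(popular ~ 1 + (1 | class), data = popudat)
summary(randomint)  # fixed intercept plus the class-level and residual variances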
2021-12-01 22:13:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41727593541145325, "perplexity": 2407.7918594635935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360951.9/warc/CC-MAIN-20211201203843-20211201233843-00147.warc.gz"}
http://mathematica.stackexchange.com/questions/3080/split-a-string-at-specific-positions?answertab=oldest
# Split a string at specific positions Given a string of alphanumerical characters, how to split it simply and quickly at the center of continuous letter-substrings? Is there an elegant and fast solutions out there in the "computational universe"? The splitter should create "syllables" with one digit as a nucleus for each syllable, that is, at the end there should be only one digit per sublist. When there are more letter characters between digits, letters should be shared by the bordering digits (here I simulated a half share, distributing towards the right bordering digit in case of an odd number of letters), and starting and ending letter-sequences should be just attached to the closest digit. "xxx00xxx000x0xx0xxxx000xx0xx" (* original string *) "xxx0 | 0x | xx0 | 0 | 0 | x0x | x0xx | xx0 | 0 | 0x | x0xx" (* intermediate *) {"xxx0", "0x", "xx0", "0", "0", "x0x", "x0xx", "xx0", "0", "0x", "x0xx"} (* end *) Note that the string never contains spaces by default. - Sorry @Mr.Wizard, you're not slow at all, I purposefully edited out my own lame solution to prevent any bias, and with it the specification (unpurposefully). Please see edit. –  István Zachar Mar 16 '12 at 12:34 @Mr.Wizard Take each continuous sequence, such as xxxx or 00 and put a separator in the middle. Then split at the separators. I'd implement that but it's not elegant. –  Szabolcs Mar 16 '12 at 12:35 @Szabolcs: Almost, with the minor addition, that continuous digit sequences should be split at each digit (now of course this is not really the issue here). –  István Zachar Mar 16 '12 at 12:47 Here is a faster version of István's function: split[s_String] := StringReplace[s, { StartOfString ~~ l : LetterCharacter .. :> l, l : LetterCharacter .. ~~ EndOfString :> l, l : LetterCharacter .. :> StringInsert[l, " ", 1 + Quotient[StringLength@l, 2] ], d : Repeated[DigitCharacter, {2, ∞}] :> StringJoin @ Riffle[Characters@d, " "] }] // StringSplit Timings: str = StringJoin @@ (RandomInteger[{0, 1}, {500000}] /. {0 -> "0", 1 -> "x"}); First@AbsoluteTiming[istvan = splitIstvan@str;] First@AbsoluteTiming[mrwizard = split@str;] istvan === mrwizard 0.7710441 0.4260243 True - Good point, the longer list the better speed up +1! The relative timings 0.7 for 5 10^4, 0.57 for 5 10^5 and 0,56 for 2.5 10^6. –  Artes Mar 17 '12 at 11:21 Even more optimized, amazing! –  István Zachar Mar 17 '12 at 11:30 Since it gives the fastest and clearest solution (praising myself as well), I've accepted this answer. –  István Zachar Mar 19 '12 at 11:23 @István thanks :-) –  Mr.Wizard Mar 19 '12 at 13:06 Linked lists seem to be a good data structure to implement matching with some look-ahead behavior - which is what is needed here. 
Here is a linear-time solution based on linked lists: ClearAll[toLinkedList]; toLinkedList[l_] := Fold[{#2, #1} &, {}, Reverse@l] This computes the distance to the next zero in the linked list: Clear[nzl]; nzl[{}, _] := 0; nzl[{Except["0"], tail_}, len_] := nzl[tail, len + 1]; nzl[{"0", _}, len_] := len; This is a main recursive "engine" ClearAll[ff]; ff[accum_, current_, {}, _, _] := ll[accum, Flatten@current]; ff[accum_, current_, {h : Except["0"], tail_}, nextZeroFullLength_, nextZeroLength_] /; nextZeroLength == IntegerPart[(nextZeroFullLength + 1)/2] := ff[ll[accum, Flatten@{current, h}], {}, tail, nextZeroFullLength, nextZeroLength - 1]; ff[accum_, current_, {h : Except["0"], tail_}, nextZeroFullLength_, nextZeroLength_] := ff[accum, {current, h}, tail, nextZeroFullLength, nextZeroLength - 1]; ff[accum_, current_, {"0", t : {"0", tail_}}, _, _] := ff[ll[accum, Flatten@{current, {"0", {}}}], {}, t, 1, 1]; ff[accum_, current_, {"0", t : {_, {"0", _}}}, _, _] := ff[ll[accum, Flatten@{current, "0"}], {}, t, nzl[t, 0], nzl[t, 0] - 1]; ff[accum_, current_, {"0", tail_}, _, _] := ff[accum, {current, "0"}, tail, nzl[tail, 0], nzl[tail, 0] - 1]; and the final function: ClearAll[splitString]; splitString[str_String] := Block[{ll, result}, SetAttributes[ll, HoldAllComplete]; Map[StringJoin, List @@ Flatten[#, Infinity, ll]] &@ ff[ll[], {}, #, nzl[#, 0], 0] &@toLinkedList@Characters[str] ]; You use this as splitString["xxx00xxx000x0xx0xxxx000xx0xx"] Not sure if this is elegant though, it's clearly not too brief. - +1 It might not be brief, but it's quick! (And scales well.) It is noteworthy, though, that splitString["xxx"] returns {"x", "xx"}. Not necessarily wrong: the behavior for an input with no digits was not specified. –  whuber Mar 16 '12 at 16:58 I have some problems when run on strings of length 10000: $IterationLimit::itlim: Iteration limit of 4096 exceeded. >> – István Zachar Mar 16 '12 at 17:23 @Istvan Just wrap the code into Block[{$IterationLimit = Infinity}, splitString[...]]. This is safe. –  Leonid Shifrin Mar 16 '12 at 17:41 Indeed, and now it qualifies as the second fastest. Nice recursive solution. –  István Zachar Mar 16 '12 at 18:00 @Istvan Thanks. So far, it looks like you've got to accept your own solution, unless someone comes up with something yet shorter and faster (which I doubt) :-) –  Leonid Shifrin Mar 16 '12 at 18:09 This is a Euclidean Allocation operation on a one-dimensional grid. That immediately suggests use of Nearest: Clear[midsplit]; midsplit[s_String] := Module[ {digits = First[#] & /@ StringPosition[s, DigitCharacter], runs}, runs = Accumulate[ Length[#] & /@ Split[Last[#] & /@ ( Nearest[digits, #] & /@ Range[StringLength[s]])]]; StringTake[s, {#1, #2}] &, {Most[Prepend[runs + 1, 1]], runs}] ]; midsplit[s_String] /; Length[StringCases[s, DigitCharacter]] == 0 := {s}; (The last line takes care of cases where no digit appears at all; Nearest chokes on an empty list for the first argument.) Most of the code is devoted to reformatting the input into a binary raster representation (the computation of digits) and then using the results of Nearest to extract the associated substrings (the computation of runs and subsequent application of StringTake). - +1 for brevity and interesting idea / link. Mine is faster for large strings, but yours is shorter and reveals interesting techniques/algorithm. I actually tested on random strings of length several thousands and our results agree. –  Leonid Shifrin Mar 16 '12 at 17:03 I did the same kind of testing :-). 
Your algorithm scales exceptionally well. –  whuber Mar 16 '12 at 17:03 Thanks :-). I tried hard to both make it linear time and avoid imperative style with assignments etc. For very large lists, one has to lift the $IterationLimit. I could have used less rules, but then ff would not be tail-recursive in M sense (it would affect $RecursionLimit then, which I normally try to avoid at all costs). Also, pattern-matcher can be nicely utilized here to implement look-ahead behavior in a clear fashion, which I exploited as well. I actually think that linked lists are under-used in Mathematica programming. –  Leonid Shifrin Mar 16 '12 at 17:09 This basically uses the method as outlined in the question: split[str_] := Module[{pos, str1}, pos = Ceiling[Mean /@ StringPosition[str, Repeated[LetterCharacter, {2, Infinity}], Overlaps -> False]]; str1 = StringInsert[str, " ", pos]; pos = StringPosition[str1, Repeated[DigitCharacter, {2}], Overlaps -> True][[All, 2]]; StringSplit[StringInsert[str1, " ", pos]]] split["000xxxx0000xxx00x0"] {"0", "0", "0xx", "xx0", "0", "0", "0x", "xx0", "0x0"} Edit Apparently I misunderstood the splitting rules. Hopefully I got it right this time split[str_] := Module[{pos, str1}, pos = Ceiling[ Mean /@ StringPosition[str, DigitCharacter ~~ Repeated[LetterCharacter] ~~ DigitCharacter]]; str1 = StringInsert[str, " ", pos]; pos = StringPosition[str1, Repeated[DigitCharacter, {2}], Overlaps -> True][[All, 2]]; StringSplit[StringInsert[str1, " ", pos]]] Testing the solution of the string in the question: split["xxx00xxx000x0xx0xxxx000xx0xx"] {"xxx0", "0x", "xx0", "0", "0", "x0x", "x0xx", "xx0", "0", "0x", "x0xx"} - This has some problems: for example it splits "000xx00xxx" into {"0", "0", "0x", "x0", "0x", "xx"} instead of {"0", "0", "0x", "x0", "0xxx"}. –  István Zachar Mar 16 '12 at 17:38 @IstvánZachar In that case I misunderstood your splitting rules. –  Heike Mar 16 '12 at 17:45 I have updated my question to reflect the rules more correctly. If you could update your solution, I would gladly run it against the others again, as I see that it has potential! –  István Zachar Mar 16 '12 at 17:49 @IstvánZachar I've updated my solution. –  Heike Mar 16 '12 at 17:53 Interesting alternative - +1. –  Leonid Shifrin Mar 16 '12 at 18:07 In the meantime, I figured out a quite simple way, and I was amazed, that it turned out quite fast - the same reason why Heike's solution is fast: using the string pattern matcher is perhaps the best option here. splitIstvan[s_String] := StringSplit@StringReplace[s, { StartOfString ~~ l : LetterCharacter .. :> l, l : LetterCharacter .. ~~ EndOfString :> l, l : LetterCharacter .. :> (StringTake[l, Floor[StringLength@l/2]] <> " " <> StringTake[l, -Ceiling[StringLength@l/2]]), d : DigitCharacter .. :> StringJoin@Riffle[Characters@d, " "] }]; str = StringJoin @@ (RandomInteger[{0, 1}, {10000}] /. {0 -> "0", 1 -> "x"}); { First@AbsoluteTiming[whuber = splitWhuber@str;], First@AbsoluteTiming[ leonid = Block[{\$IterationLimit = Infinity}, splitLeonid@str];], First@AbsoluteTiming[heike = splitHeike@str;], First@AbsoluteTiming[istvan = splitIstvan@str;] } {istvan === whuber, istvan === leonid, istvan === heike} {12.0120211, 0.0624001, 0.1092002, 0.0312001} {True, True, True} - Interesting. I somehow had a hunch that string-based operations won't be enough here, and you just proved me wrong. +1. –  Leonid Shifrin Mar 16 '12 at 17:43 Excellent solution: it will be difficult to improve on its performance or brevity. 
–  whuber Mar 16 '12 at 17:43 Well I actually gave up on the string patternmatcher and went on to do some algorithmic trial-and-error, that is why I posted the question. And then came this idea. –  István Zachar Mar 16 '12 at 17:50 Edit I fixed two shortcomings in my earlier submission: f[str_] := Module[{r1}, r1 = StringReplace[ str, {d : NumberString /; StringLength[d] > 1 :> StringInsert[d, " ", Range[2, StringLength[d]]]}]; StringSplit@ FixedPoint[ StringReplace[#, (d1 : DigitCharacter) ~~ (w : LetterCharacter ..) ~~ (d2 : DigitCharacter) :> d1 ~~ StringInsert[w, " ", Floor[StringLength[w]/2] + 1] ~~ d2] &, r1] ] The first StringReplace inserts a break between all adjacent digits. The second StringReplace places a break in any run of letters. These two breaks are sufficient to parse all the cases. FixedPoint is needed because the second instance of StringReplace is not always able to find all the relevant cases to replace on the first pass. There was also a second rule in the first StringReplace that was a sloppy (and faulty) hack. Examples: f["xxx00xxx000x0xx0xxxx000xx0xx"] (* Out *) {"xxx0", "0x", "xx0", "0", "0", "x0x", "x0xx", "xx0", "0", "0x", \ "x0xx"} f["x0xxx00x00"] (*Out *) {"x0x", "xx0", "0", "x0", "0"} Speed check using István's metric: str = StringJoin @@ (RandomInteger[{0, 1}, {10000}] /. {0 -> "0", 1 -> "x"}); First[f[str] // AbsoluteTiming] (* Out*) 0.037927 - While it's pretty fast, it fails on e.g. this: "x0xxx00x00", and returns {"x0x", "xx00", "x00"} where not all the digits are split. BTW, you can use space instead of " | " as a dummy, which simplifies cases. –  István Zachar Mar 16 '12 at 18:37 I'll check the failing condition. Regarding " | ", anything can be a separator; I simply matched the separator to produce output like yours. –  David Carraher Mar 16 '12 at 18:40 You're right. There is something wrong with the code (I mixed up code from two different attempts.) –  David Carraher Mar 16 '12 at 18:47 @István The updated version works better, I believe. –  David Carraher Mar 16 '12 at 22:13
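As an aside (not part of the original thread): the Euclidean-allocation / nearest-digit idea that whuber describes also translates directly to R, the language used in the exercises earlier in this document. The function name split_syllables and the tie-breaking detail (ties go to the right-hand digit, matching the accepted behaviour) are my own; this is a rough sketch, not a drop-in replacement for the Mathematica solutions.

# assign every character to its closest digit; ties go to the digit on the right
split_syllables <- function(s) {
  chars <- strsplit(s, "")[[1]]
  digit_pos <- which(grepl("[0-9]", chars))
  if (length(digit_pos) == 0) return(s)   # no digit: nothing to split on
  owner <- sapply(seq_along(chars), function(i) {
    d <- abs(digit_pos - i)
    digit_pos[max(which(d == min(d)))]    # on a tie, take the right-hand digit
  })
  sapply(digit_pos, function(p) paste(chars[owner == p], collapse = ""))
}

split_syllables("xxx00xxx000x0xx0xxxx000xx0xx")
## "xxx0" "0x" "xx0" "0" "0" "x0x" "x0xx" "xx0" "0" "0x" "x0xx"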
2014-09-02 11:46:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22976461052894592, "perplexity": 13648.613790179517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921957.9/warc/CC-MAIN-20140901014521-00178-ip-10-180-136-8.ec2.internal.warc.gz"}
https://hungrybeagle.com/index.php/ma10/ma10-algebra/ma10-4-exponents-and-radicals/140-ma10-4-2-integral-exponents
10.4.2 Integral Exponents We can use the exponent laws and patterns in tables to explain what happens when you have negative exponents. Exponent laws can also be applied with both positive and negative exponents. Resources • Notes: Integral Exponents and the Exponent Laws • Square root calculator: Do you want to find a square root to 120,000 decimal places? This site is awesome! Assignment • p169 #1-6, 25 • p169 #7-9, 11, 14, 15, 17, 18, 26, *20, *22 • Extra Practice #1-4, 6 (in your Student Note Package) Attachments: 10.4.2a.notes.pdf (3840 kB)
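As a brief illustration of the "patterns in tables" idea mentioned above (an added example, not taken from the course notes): each time the exponent drops by one, the value is divided by the base, and continuing that pattern past zero motivates the rule for negative exponents.
\[
2^{3}=8,\quad 2^{2}=4,\quad 2^{1}=2,\quad 2^{0}=1,\quad 2^{-1}=\tfrac{1}{2},\quad 2^{-2}=\tfrac{1}{4}
\qquad\Longrightarrow\qquad a^{-n}=\frac{1}{a^{n}},\; a\neq 0
\]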
2018-01-21 18:25:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025364279747009, "perplexity": 2830.4153086307915}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890823.81/warc/CC-MAIN-20180121175418-20180121195418-00632.warc.gz"}
https://web2.0calc.com/questions/circle-gamma-is-the-incircle-of-triangle-abc-and
# Circle $\Gamma$ is the incircle of $\triangle ABC$ and is also the circumcircle of $\triangle XYZ$

Circle $\Gamma$ is the incircle of $\triangle ABC$ and is also the circumcircle of $\triangle XYZ$. The point $X$ is on $\overline{BC}$, point $Y$ is on $\overline{AB}$, and the point $Z$ is on $\overline{AC}$. If $\angle A=40^\circ$, $\angle B=60^\circ$, and $\angle C=80^\circ$, what is the measure of $\angle AYX$?

michaelcai  Sep 27, 2017

#1
$$\text{Circle }\Gamma \text{ is the incircle of } \triangle ABC \text{ and is also the circumcircle of } \triangle XYZ.\\ \text{The point X is on }\overline{BC}, \text{ the point Y is on }\overline{AB}, \\ \text{and the point Z is on }\overline{AC}.\\ \text{If } \angle A=40^\circ,\; \angle B=60^\circ, \text{ and } \angle C=80^\circ, \text{ what is the measure of }\angle AYX\,?$$
Let O be the centre of the circle. So OXY is an isosceles triangle 120+2 so < AYX =
I am sorry, this was a full answer but 3/4 of it has been deleted. There is obviously a software problem. I shall report it as a problem :/
Melody  Sep 27, 2017
edited by Melody  Sep 27, 2017

#2
See the following image :
Construct angle bisectors of each vertex angle of triangle ABC
Angle AYC = 180 - angle ACY - angle YAC = 180 - 40 - 40 = 100°
Angle AOC = 180 - angle OAC - angle OCA = 180 - 20 - 40 = 120°
And angle YOX is a vertical angle to angle AOC ....so it measures 120°
And since they are equal radii, OY = OX
So angle OYX = angle OXY
So triangle OYX is isosceles
And.....angle OYX = [ 180 - 120 ] / 2 = 60 / 2 = 30°
And AYX = angle AYC + angle OYX = 100 + 30 = 130°
CPhill  Sep 28, 2017
edited by CPhill  Sep 28, 2017
2017-10-22 17:34:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9302334785461426, "perplexity": 10409.611818910378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825399.73/warc/CC-MAIN-20171022165927-20171022185927-00482.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-5-section-5-1-adding-and-subtracting-polynomials-exercise-set-page-348/35
## Introductory Algebra for College Students (7th Edition) $-\frac{2}{5}x^{4}$ + $x^{3}$ - $\frac{1}{8}x^{2}$ ($\frac{1}{5}x^{4}$ + $\frac{1}{3}x^{3}$ + $\frac{3}{8}x^{2}$ + 6) + (-$\frac{3}{5}x^{4}$ + $\frac{2}{3}x^{3}$ - $\frac{1}{2}x^{2}$ - 6) The like terms are $\frac{1}{5}x^{4}$ and -$\frac{3}{5}x^{4}$ (both containing $x^{4}$) , $\frac{1}{3}x^{3}$ and $\frac{2}{3}x^{3}$ (both containing $x^{3}$) , $\frac{3}{8}x^{2}$ and - $\frac{1}{2}x^{2}$ (both containing $x^{2}$) , 6 and -6 both are constants We begin by grouping these pairs of like terms = ($\frac{1}{5}x^{4}$ -$\frac{3}{5}x^{4}$) + ($\frac{1}{3}x^{3}$ +$\frac{2}{3}x^{3}$) + ($\frac{3}{8}x^{2}$ - $\frac{1}{2}x^{2}$ )+ (6 -6) =$-\frac{2}{5}x^{4}$ + $x^{3}$ - $\frac{1}{8}x^{2}$
2018-04-20 10:58:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4850888252258301, "perplexity": 749.0732439013215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937440.13/warc/CC-MAIN-20180420100911-20180420120911-00393.warc.gz"}
http://openstudy.com/updates/4f20a370e4b076dbc348e058
anonymous 4 years ago 2/3 of the men are married to 3/4 of the women in Fractionville. What fraction of the people in Fractionville are married? I've gotten it to 2/3x = 3/4y but I'm not sure what to do from here. 1. anonymous you should have two equations, since you have two unknowns. so if you let x+y=1 you should be in good shape. (x and y are the corresponding fractions of the town that are male and female) 2. phi you need another equation x+y=1 means the men and women add up to the total population 3. anonymous x = the number of men in fractionville and y = the number of women in fractionville 4. anonymous how can x + y = 1 though? 5. anonymous if x is the fraction of the population that is male and y is the fraction of the town that is female, they should add up to 1 which is the fraction of people in the population 6. anonymous but how is x the fraction of- okay I kinda see, but explain how i get my x and y definitions? 7. asnaseer @JunkieJim is correct 8. anonymous i just dont understand it though shouldnt x + y = P (population)? 9. anonymous Yes, that's exactly what we're doing, only we're dividing everything by P so you've got $\frac{x}{Population} +\frac{y}{Population} = \frac{Population}{Population}$ so that everything remains as fractions 10. phi Think of it this way: some fraction of the people are men and some fraction are women so x+y=1 the fraction of the population that are married is 2/3 x (another fraction) 11. phi which of course is the same fraction as 3/4 y 12. anonymous I appreciate how you are taking the time to explain it to me. And I'm understanding it better. I can do the systems from here, but that x + y = 1 is bothering me 13. anonymous what you said makes sense Phi, I'm just wondering how i got my definitions for x and y 14. asnaseer @mridrik: another way of looking at this is to call the total population p and the number of men m. then, number of women = p - m and you can write:$\frac{2m}{3}=\frac{3(p-m)}{4}$ 15. phi your definitions should be modified x = the fraction of the population that are men in fractionville and y = the fraction of the population that are women in fractionville 16. anonymous so are my definitions wrong, or are they right but do not help me advance with the problem? 17. phi I would *not* use x = the number of men in fractionville and y = the number of women in fractionville 18. anonymous with asnaseers way, we get m = 9/17p, so how could I use that to get the answer? 19. asnaseer 2m/3 are married 20. phi you can only find m/p (a fraction) 21. anonymous so 2/3 * 9/17 gives us ? 22. anonymous now I'm confused again 23. phi the fraction married 24. anonymous but we already knew the fraction married 2/3 25. anonymous i have a feeling i sound pretty stupid right now to you guys 26. asnaseer no - 2/3 is the fraction of men that are married - not the fraction of the population that are married. 27. phi double it (we have to count the women) 28. anonymous so 2/3 * 9/17 * 2? 29. phi Sounds good. we know 2/3 * 9/17 = 6/17 of the population is married men 3/4 * 8/17 is 6/17 of the population = married women total fraction 12/17 30. anonymous You guys are genii! (I think thats genius plural) Thank you, so 12/17 of the population of fractionville is married? 31. asnaseer :) we've just had a bit more practice than you have mridrik thats all. and yes phi has the correct answer. 32. anonymous It makes me happy to know that people care about another's will to learn. 33. asnaseer thats what makes this site so wonderful :) 34. 
anonymous If i could, i would make a program that gave each of you over 9000 medals 35. asnaseer :O) - careful - we may drown in glory - lol! 36. anonymous yeah lol, my math teacher couldnt even get this today in class 37. asnaseer try and be gentle on him - after all - he is also human :) 38. anonymous yes, maybe I'll get the chance to explain what you showed me to the class, even though i dont that well with public speaking, only about 20 students in my algebra 2 class 39. asnaseer good luck my friend... 40. anonymous thanks for your help, have you ever heard of mathcounts? 41. asnaseer no - what are they? 42. anonymous It's a program that my school participtes in with schools from around the United States, we go to regionals first, (district is too small) and usually win that, and then go to state where this one girl always wins, its from 6th to 8th grade (my last year) and its a blast when we go there to lexington this year (KY) but they questions get ridiculously hard towards the end of the competition and i bet you would beast them up 43. asnaseer I'm sure as time goes by you will become a mathematical ninja and win that contest hands down :) I'm probably far too old for that now :) 44. anonymous oh no way, the way you win is the top ten scorers in the written test get to go to a quick recall type thing, and she has won that in 6th grade and 7th grade, I got 40 something place and my friend got in the 30s, but there are a LOT of people there so thats pretty good i guess, our school got 10th place overall last year 45. asnaseer that is an extremely good achievement. and never think that you will not be able to achieve even better goals for yourself - nothing in this world is impossible if you work hard enough at it. you certainly sound like someone who is willing to work hard to achieve high goals in life. 46. anonymous Thanks, I would like to be an engineer when I grow up, not really sure what they do, but everyone says it involves math so that sounds fun. 47. asnaseer yes - there is a lot of maths involved in engineering. your main focus should be to do something you really enjoy - after all, you will most likely be working for the rest of your life in one particular job - so might as well pick something you really enjoy. anyway - must go now. good talking to you. 48. anonymous you too
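For reference (an added summary, not part of the original conversation), the algebra worked out piecemeal in the thread can be collected into one short derivation using the same equations the posters set up:
\[
\frac{2}{3}x=\frac{3}{4}y,\qquad x+y=1
\;\Longrightarrow\; y=\frac{8}{9}x
\;\Longrightarrow\; x=\frac{9}{17},\quad y=\frac{8}{17},
\]
\[
\text{fraction married}=\frac{2}{3}\cdot\frac{9}{17}+\frac{3}{4}\cdot\frac{8}{17}=\frac{6}{17}+\frac{6}{17}=\frac{12}{17}.
\]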
2017-01-20 18:22:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49705570936203003, "perplexity": 1225.4842335801031}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00410-ip-10-171-10-70.ec2.internal.warc.gz"}
https://verification.asmedigitalcollection.asme.org/nondestructive/article/4/2/021007/1092359/A-Theoretical-Numerical-Study-on-Ultrasound-Wave
## Abstract Recently published experimental works on remotely bonded fiber Bragg grating (FBG) ultrasound (US) sensors show that they display some unique characteristics that are not observed with directly bonded FBG sensors. These studies suggest that the bonding of the optical fiber strongly influences how the ultrasound waves are coupled from the structure to the FBG sensor. In this paper, the analytical model of the structure-adhesive-optical fiber section, treated as an ultrasound coupler, is derived and analyzed to explain the observed experimental phenomena. The resulting dispersion curve shows that the ultrasound coupler possesses a cutoff frequency, above which a dispersive longitudinal mode exists. The low propagation speed of the dispersive longitudinal mode leads to multiple resonances at and above the cutoff frequency. To characterize the resonant characteristics of the ultrasound coupler, a semi-analytical model is implemented and the scattering parameters (S-parameters) are introduced for broadband time-frequency analysis. The simulation was able to reproduce the experiment observations reported by other researchers. Furthuremore, the behaviors of the remotely bonded FBG sensors can be explained based on its resonant characteristics. ## 1 Introduction Structural health monitoring (SHM) technology has been under intensive studies in the past decades because it has the potential to shift maintenance of infrastructures from safe life practice or schedule-based schemes to condition based maintenance [14]. Since detectable damage could take a long time to develop and its location is typically unknown, an effective SHM system should be able to detect damage over a large area without incurring significant cost or weight penalty. Due to this requirement, ultrasound (US)-based detection and optical fiber sensors are two of the most common sensing schemes for SHM systems. Ultrasound-based techniques detect the abnormalities in the ultrasound or guided waves propagating in the structures and infer the health condition of the structures from these abnormalities. Since ultrasound waves can propagate over a long distance in plates, tubes, cylinders, etc., one ultrasound transducer can cover an area that is much larger than its physical size [5,6]. Optical fiber sensors, on the other hand, detect damage based on the characteristics of light propagating inside the fiber core. They are attractive for SHM primarily due to their light weight, compact size, low cost, and immunity to electromagnetic interferences, etc. [79]. Among various optical fiber sensors, fiber Bragg grating (FBG) based sensors are the most widely accepted sensors [1012]. Typically, FBG sensors are bonded directly on the structure to ensure that the FBG experiences the same displacement, and thus the strain, as the hosting structure. The displacement changes the FBG periods, leading to a shift in the FBG reflectance frequency. Compared with other optical fiber sensors, one unique advantage of the FBG sensors is that the FBG is directly inscribed into a conventional optical fiber. As such, the interface between the sensing element (i.e., the FBG section) and the optical fiber for signal transmission is seamless. Incorporating FBG sensors in an optical fiber therefore does not require labor-intensive integration. In addition, the physical measurands extracted from the spectral parameter of the FBG render the measurements more reliable, more robust, and more sensitive to minute changes. 
Since the reflectance spectrum of an FBG can have a very narrow bandwidth of a fraction of nanometers, multiple FBG sensors can be implemented in a single strand of optical fiber based on the principle of wavelength division multiplexing [13]. This unique feature enables deploying a large number of FBG sensors without incurring substantial cost or weight penalties. While optical fibers are mainly used as optical waveguides, studies have been carried out in the past to investigate optical fibers as ultrasound waveguides [1416]. Dubbed “acoustic fiber,” optical fibers were considered as a means for long-distance data and energy transfer as well as delay lines [17]. A focus of these studies was on designing the mechanical properties of the fiber cladding and core to confine the ultrasound wave within the fiber core. However, analysis done by Mbamou et al. [15] concluded that “the usual glass fibers are not as good for acoustical as for optical applications.” A different strategy was developed by the SHM community in exploiting the optical fiber as ultrasound waveguide sensors [1822]. In these applications, the ultrasound wavelength of interest is much larger than the fiber diameter. As such, the optical fiber can be treated as being homogenous and the differences in the material properties of the fiber core, cladding, and coating are neglected. Based on similar principles, fibers made of different materials, such as copper [23,24], aluminum [25], steel [26,27], etc., were also studied as ultrasound waveguides for environmental monitoring or epoxy curing. Compared with other SHM sensors, however, the ultrasound waveguide sensors received rather limited attention. In this paper, we present an analytical model for studying ultrasound wave coupling between two ultrasound waveguides, e.g., a structure and an optical fiber, through an adhesive layer. Treating the structure-adhesive-fiber section as an ultrasound coupler having four ports, the concept of scattering parameters is introduced to characterize its resonant characteristics. The response of the ultrasound coupler to a narrowband tone-burst input is simulated numerically by varying the parameters of the adhesive layer. These parametric studies reproduce the experimental observations reported in the literature and provide physical explanation to these observations. ## 2 Analytical and Numerical Simulation Model The physical model of an optical fiber bonded to a structure is shown in Fig. 1(a). In finite element simulation models [30,35,36], the optical fiber is fully or partially encapsulated in the top portion of the adhesive layer. Assuming the ultrasound wave originates at the left side of the structure and propagates toward the bonded section, upon encountering the bonded section, it is coupled to the optical fiber in both forward (i.e., to the right) and backward (i.e., to the left) propagating directions. The physical model is idealized as the simplified model shown in Fig. 1(b), in which the top portion of the adhesive with the embedded optical fiber is homogenized as a superstrate with material properties differing from the rest of the adhesive layer. The optical fibers leading to and from the bonded section are assumed to be connected to the superstrate at the edges. Since the optical fiber only supports the longitudinal wave [30], the simplified model shown in Fig. 1(b) can be represented by the one-dimensional (1D) extensional bar model shown in Fig. 1(c). 
Considering that the optical fiber is very light and has a very low attenuation, the forward and backward propagating ultrasound waves in the optical fibers are expected to have the same amplitudes as the displacements at the left and right edges of the superstrate, respectively. Therefore, including the optical fibers in the 1D simulation model is not necessary. Ultrasound waves are generated by applying a time-varying force at the left edge of the substrate. The response of the system to this time-varying force is calculated in the frequency domain, following a procedure described in Refs. [37,38]. Two absorption sections were added to the left and right ends of the substrate (i.e., the structure) to eliminate any reflections that may cause numerical aliasing. To minimize the reflection at the absorber–substrate interface, the material properties of the absorption sections are identical to those of the substrate except that they have a very small mechanical loss coefficient and a very large length (e.g., 100 m). By implementing the model semi-analytically without dividing the absorbers into small elements, the large lengths do not introduce any additional computation burden. [Fig. 1]

The 1D simulation model is sectioned along the interfaces where the cross-sectional area changes, i.e., at the edges of the ultrasound coupler and the absorber–substrate interfaces. As such, the model can be divided into two types of homogeneous section, i.e., the absorber/substrate section and the ultrasound coupler section. For the absorber/substrate sections, the extensional bar model is adopted to simulate the longitudinal ultrasound modes. For the ultrasound coupler, its governing equation can be derived assuming the displacements of the substrate and superstrate are coupled through the shear deformation of the adhesive layer [37,38] (see Fig. 2). As such, the shear stress τ of the adhesive layer can be expressed as

$\tau(x,t)=G_a\gamma_a=G_a\left[\dfrac{u_b(x,t)-u_p(x,t)}{h_a}\right]$ (1)

where γa is the shear strain of the adhesive and Ga is the adhesive shear modulus. The subscripts b, p, and a represent the substrate, the superstrate, and the adhesive, respectively. u represents the displacement and h represents the thickness. [Fig. 2]

The governing equations for the longitudinal deformations of the substrate and superstrate are [37,39,40]

$\dfrac{\partial^2 u_b}{\partial x^2}-\dfrac{\rho_b}{E_b}\dfrac{\partial^2 u_b}{\partial t^2}=-\dfrac{\tau_b}{E_b h_b}$ (2a)

and

$\dfrac{\partial^2 u_p}{\partial x^2}-\dfrac{\rho_p}{E_p}\dfrac{\partial^2 u_p}{\partial t^2}=\dfrac{\tau_p}{E_p h_p}$ (2b)

in which ρ and E stand for the density and the Young's modulus. τb = τ and τp = ατ for an adhesive having a shear transfer ratio of α. Combining Eqs. (1) and (2) results in an analytical governing equation for the ultrasound coupler, i.e.

$\dfrac{\partial^4 \bar{u}_p}{\partial x^4}+A\dfrac{\partial^2 \bar{u}_p}{\partial x^2}+B\bar{u}_p=0$ (3)

whose solution is

$\bar{u}_p(x)=a_i e^{\beta_i x}+d_i e^{-\beta_i x},\quad i=1,2$ (4)

in which βi are the roots of

$\beta_i^4+A\beta_i^2+B=0$ (5)

The two constants A and B are functions of the geometrical and mechanical properties of the substrate, superstrate, and adhesive layer as well as the angular frequency ω, i.e.

$A=\dfrac{C_2}{C_1}+\dfrac{\rho_b}{E_b}\left(\omega^2-\dfrac{\alpha}{\rho_b h_b}\dfrac{G_a}{h_a}\right)$ (6a)

$B=\dfrac{C_2}{C_1}\dfrac{\rho_b}{E_b}\left(\omega^2-\dfrac{\alpha}{\rho_b h_b}\dfrac{G_a}{h_a}\right)+\dfrac{\alpha}{C_1}\dfrac{\rho_b}{E_b}\dfrac{1}{\rho_b h_b}\dfrac{G_a}{h_a}$ (6b)

$C_1=-(\rho_p h_p)\left(\dfrac{E_p}{\rho_p}\right)\left(\dfrac{h_a}{G_a}\right)$ (6c)

and

$C_2=1-\omega^2(\rho_p h_p)\left(\dfrac{h_a}{G_a}\right)$ (6d)

The design parameters of the ultrasound coupler, therefore, include the Young's modulus-density ratio Eb/ρb as well as the density-thickness product ρbhb of the substrate and superstrate, and two adhesive parameters, i.e., the shear modulus-thickness ratio Ga/ha and the shear transfer ratio α.
## 3 Propagation Modes and Dispersion Curve of Ultrasound Coupler—Analytical Solution

The resonant characteristics of the ultrasound coupler can be explained based on the governing equation given in Eq. (5). The characteristic roots of Eq. (5) can be expressed as

$\beta_1=\sqrt{\dfrac{-A-\sqrt{A^2-4B}}{2}}\quad\text{and}\quad\beta_2=\sqrt{\dfrac{-A+\sqrt{A^2-4B}}{2}}$ (7)

To support wave propagation, at least one of the roots βi, i = 1, 2 must be complex. Since

$\Delta=A^2-4B=\left[\dfrac{C_2}{C_1}-\dfrac{1}{E_b}\left(\rho_b\omega^2-\dfrac{\alpha G_a}{h_b h_a}\right)\right]^2+4\,\dfrac{\alpha G_a}{E_b h_b h_a}\times\dfrac{G_a}{E_p h_p h_a}>0$ (8)

whether βi is complex or not depends on the signs of A and B. As tabulated in Table 1, the ultrasound coupler supports only one mode if B < 0 or B = 0 $\wedge$ A > 0, and it supports two modes if B > 0 $\wedge$ A ≥ 0. Consequently, the cutoff frequency for the second propagation mode can be analytically solved by setting B = 0, i.e.

$f_c=\dfrac{\omega_c}{2\pi}=\dfrac{1}{2\pi}\sqrt{\dfrac{G_a}{h_a}\left(\dfrac{\alpha}{h_b\rho_b}+\dfrac{1}{h_p\rho_p}\right)}$ (9)

Table 1 Relationship between the signs of A and B and the characteristic roots of the ultrasound coupler's governing equation (C = complex, R = real)

|       | A < 0: β1 | A < 0: β2 | A = 0: β1 | A = 0: β2 | A > 0: β1 | A > 0: β2 |
|-------|-----------|-----------|-----------|-----------|-----------|-----------|
| B < 0 | C | R | C | R | C | R |
| B = 0 | R | R | 0 | 0 | C | R |
| B > 0 | R | R | C | C | C | C |

Clearly, fc is dependent on the adhesive property Ga/ha as well as the substrate and superstrate mass parameters, hbρb and hpρp. On the other hand, it is independent of the Young's moduli of the substrate or superstrate. The dispersion curve of the ultrasound coupler, which represents the relationship between the group velocities of the two modes and the frequency, is calculated from the characteristic roots βi and shown in Fig. 3. The substrate is an aluminum alloy with the following mechanical properties: Young's modulus E = 71 GPa, density ρ = 2770 kg/m3, and Poisson's ratio ν = 0.33. The superstrate is assumed to have the same properties as the optical fiber, i.e., E = 66 GPa, ρ = 2170 kg/m3, and ν = 0.15 (see table 1 in Wee et al. [36]). The mechanical properties of the adhesive are typically unknown, and the values provided in the publications can vary widely [41,42]. To study the adhesive effects, it is common to vary the adhesive properties in a selected range [36,37,43]. The adhesive for this study is initially assumed to have a Young's modulus of 2.5 GPa and a Poisson ratio of ν = 0.39. For an adhesive thickness of 185 µm, a substrate thickness of 0.8 mm, and a superstrate thickness of 125 µm, the cutoff frequency of the coupler is calculated from Eq. (9) as 713 kHz. Below the cutoff frequency, there is only one propagation mode, with a group velocity identical to that of the substrate. Above the cutoff frequency, one propagation mode has a group velocity that reduces at a very gradual rate with increasing frequency. The group velocity of the second propagation mode, however, increases rapidly from zero at fc and approaches that of the substrate at high frequencies. In other words, the ultrasound coupler supports a dispersive wave above the cutoff frequency. This is different from conventional extensional bars, which only have one nondispersive mode [44]. [Fig. 3]

## 4 Resonant Characteristics of Ultrasound Coupler—S-parameter Representation

Once the governing equation for the ultrasound coupler is established, the numerical simulation of the 1D model shown in Fig. 1(c) can be implemented by adopting the reverberation matrix method (RMM) described in Refs. [45,46] and applying the boundary and continuity conditions [37,38]. For more detailed descriptions of the RMM and the simulation method, the readers should refer to the cited Refs. [37,38,45,46].
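Referring back to Eq. (9), the quoted 713 kHz cutoff can be checked numerically. The short R sketch below (R being the language used elsewhere in this document) assumes a shear transfer ratio α = 1 and an adhesive shear modulus obtained from the quoted Young's modulus via Ga = Ea/(2(1 + νa)), since the paper does not list Ga or α explicitly; with those assumptions the formula reproduces the stated value.

# parameter values quoted in Sec. 3 (SI units); alpha and Ga are assumptions
Ea <- 2.5e9; nu_a <- 0.39            # adhesive Young's modulus and Poisson ratio
Ga <- Ea / (2 * (1 + nu_a))          # isotropic shear modulus (assumption)
ha <- 185e-6                         # adhesive thickness
hb <- 0.8e-3;  rho_b <- 2770         # substrate thickness and density
hp <- 125e-6;  rho_p <- 2170         # superstrate thickness and density
alpha <- 1                           # shear transfer ratio (assumption)

fc <- (1 / (2 * pi)) * sqrt((Ga / ha) * (alpha / (hb * rho_b) + 1 / (hp * rho_p)))
fc / 1e3                             # roughly 713-714 kHz, matching the text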
## 4 Resonant Characteristics of Ultrasound Coupler—S-parameter Representation

Once the governing equation for the ultrasound coupler is established, the numerical simulation of the 1D model shown in Fig. 1(c) can be implemented by adopting the reverberation matrix method (RMM) described in Refs. [45,46] and applying the boundary and continuity conditions [37,38]. For more detailed descriptions of the RMM and the simulation method, the readers should refer to the cited Refs. [37,38,45,46].

Since the constants A and B are functions of the angular frequency ω, the behavior of the ultrasound coupler is expected to be frequency dependent. A broadband analysis of the ultrasound coupler is therefore necessary, which can be facilitated using the S-parameters [47], a concept commonly used in the microwave community for representing a linear time-invariant network. As shown in Fig. 4, an ultrasound coupler can be considered as a 4-port network; ports 1 and 2 represent the left and right edges of the substrate, while ports 3 and 4 represent the left and right edges of the superstrate, respectively. The transmission S-parameter Sj1 is the frequency spectrum of the output uj(x, ωi) at port j (j = 2, 3, and 4) when the ultrasound is generated at port 1 using an impulse force F(ωi) = 1. Port 1 is selected to be several wavelengths away from the left edge of the bonding section to eliminate the edge resonance effect (see the discussion in Sec. 5.1). Once the S-parameters are available, the time-frequency response of the ultrasound coupler can be calculated using the procedure described in Refs. [47,48].

Figure 5 shows the S-parameters for three different adhesive lengths La. When La is small, i.e., La = 1 mm, only one resonant peak is observed, at the cutoff frequency fc, due to the very small group velocity of the dispersive ultrasound mode. As La increases to 10 mm, four additional resonant peaks appear above fc. Below fc, the S41 curve has only a slight “bulge” at around 100 kHz, as highlighted by the circle; however, it is difficult to discern whether it is a resonant peak or not. At La = 50 mm, the number of resonance peaks above fc increases dramatically. In addition, there are clear resonant peaks below fc, e.g., at 83, 130, and 178 kHz. It is interesting to note that the resonant peak at fc exists regardless of the adhesive length La, while the other resonant peaks change locations with La and the number of resonant peaks increases with La. We suspect that the resonance peaks are related to the ultrasound waves being bounced back and forth between the two free edges of the superstrate. If this hypothesis is true, the resonance frequencies would be functions of the propagation speed and the bonding length. This would explain why a resonance peak exists at the cutoff frequency fc for any bonding length, because of the low propagation speed at fc. Verifying this hypothesis, however, would require more extensive investigation and will be a subject of future study. The S21 curve displays a few notches at high frequencies. These notches represent antiresonances, which are similar to those in the ultrasound spectrum generated using a surface-bonded piezoelectric wafer active transducer [5,49]. Notice also that these notches have very narrow bandwidths. In order to observe these notches experimentally, broadband frequency-domain measurements, such as laser ultrasonics, may be needed; this will also be a subject of future study.
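The time-domain tone-burst responses discussed in Sec. 5 are obtained from the S-parameters by frequency-domain synthesis. The sketch below illustrates only the generic synthesis step (excitation spectrum multiplied by a transmission spectrum, then an inverse FFT); it is not the RMM model of Refs. [37,38,45,46], and the unity S41 placeholder, sampling rate, and record length are assumptions made for illustration.

```python
import numpy as np

fs, n = 20e6, 2**14                       # sampling rate and record length (assumed)
t = np.arange(n) / fs

# 300 kHz, 5.5-cycle Hann-windowed tone burst, as used for the excitation in Sec. 5
f0, cycles = 300e3, 5.5
n_burst = int(round(cycles / f0 * fs))
burst = np.zeros(n)
burst[:n_burst] = np.sin(2 * np.pi * f0 * t[:n_burst]) * np.hanning(n_burst)

# Placeholder transmission spectrum; in the paper S41(f) comes from the RMM model.
freqs = np.fft.rfftfreq(n, 1.0 / fs)
S41 = np.ones_like(freqs, dtype=complex)

response = np.fft.irfft(np.fft.rfft(burst) * S41, n)   # synthesized output at port 4
print(f"peak tone-burst response: {np.max(np.abs(response)):.3f}")
```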
## 5 Explanations of Experimental Observations

Taking advantage of the computation efficiency and time-frequency analysis capability of the simulation model, we were able to perform comprehensive parametric studies on the bonding condition of remotely bonded FBG sensors. These studies provide theoretical explanations for the experimental observations reported in published works [29,31–33], as discussed below.

### 5.1 Why Do Remotely Bonded Fiber Bragg Gratings Display Enhanced Sensor Responses?

Wee et al. reported that the response of a remotely bonded FBG could be five times larger than if the FBG is directly bonded [29]. When an FBG is bonded directly on a structure, the adhesive typically covers the entire length of the FBG and even the optical fiber leading to and from the FBG. Therefore, the displacement measured by a directly bonded FBG can be approximated as the displacement at the center of the superstrate. In contrast, when the FBG is bonded on the structure remotely, the displacements at the edges of the superstrate are coupled to the optical fiber. The spectra of the displacements at three locations of the superstrate, i.e., at the left edge, the center, and the right edge, for an adhesive length of 10 mm, are shown in Fig. 6(a). Below the cutoff frequency fc, the displacements at the right edge of the superstrate are consistently larger than the displacements at the center or at the left edge. Above fc, however, both edges experience the same displacements while the center has a slightly lower displacement, except at the resonant peaks. The maximum displacements along the length of the substrate and superstrate are shown in Fig. 6(b), generated using a 300 kHz 5.5-cycle tone-burst excitation. Near the left edge of the ultrasound coupler (i.e., at x = 0.2 m), the maximum displacement of the substrate fluctuates along the length, and the displacement at the left edge of the superstrate is substantially smaller than that of the rest of the superstrate. In contrast, the right edge of the superstrate displays a substantially larger maximum displacement than other locations. Away from the edges, the substrate and superstrate of the ultrasound coupler have almost identical displacements. The differences in the maximum displacements at the edges and at the center of the ultrasound coupler are due to the edge resonance effect. The edge resonance effect refers to the generation of large displacements in the near field of scattering sources, such as free edges [50,51], step cross-sectional changes [52,53], cracks [54,55], wedges [56], etc. At a scattering source, waves with propagation constants different from those of the incident waves are excited in order to satisfy the boundary condition. The interference of the incident and scattered waves leads to wave enhancements in the immediate vicinity of the scattering source [54,57,58]. This effect is more obvious when the adhesive length increases to 50 mm, as shown in Fig. 6(c). In this case, large displacements are seen at the locations of the substrate close to the left edge of the coupler and at the right edge of the superstrate. The displacements decay rapidly with distance near the edges and remain constant at locations that are more than about one wavelength away from the edges. It is worth noting that the substrate also displays some edge effects near the edges of the bonded section, albeit with an amplitude much smaller than that of the superstrate. This is because the substrate is continuous while the superstrate has two free edges. In other words, the substrate sections to the left and right of the bonded section limit the displacement of the substrate under the superstrate.

While the present work is focused on the remotely bonded FBG ultrasound sensor, some insights can also be drawn with respect to directly bonded FBG sensors. For the directly bonded FBG sensors, the assumption is that the displacement experienced by the FBG sensor is the same as that of the structure.
This is true only when the FBG sensor is more than one or two wavelengths away from the edges of the adhesive, as Figs. 6(b) and 6(c) indicate. Otherwise, the edge effect will have an impact on the response of the directly bonded FBG sensor as well. In addition, when the adhesive length is short, as in the case of Fig. 6(b), the maximum displacements of the FBG sensor may vary along its length. In other words, different portions of the FBG may experience different displacement amplitudes. This could lead to the broadening of the FBG spectrum. Therefore, the adhesive length should be sufficiently long to ensure a uniform displacement amplitude along the FBG length, as in the case of Fig. 6(c).

### 5.3 Why Does the Coupling Efficiency Increase With the Bonding Length?

Wee et al. discovered that the FBG response increases with the adhesive length up to a certain distance [31]. To investigate the effect of the bonding length La on the coupling efficiency of the ultrasound coupler, we performed a parametric study on La. The maximum displacements at the right edge of the superstrate for different La are normalized by the displacement for La = 1 mm and are plotted in Fig. 8(a). Again, the excitation was selected to be a 300 kHz 5.5-cycle tone-burst signal. The Young's modulus of the adhesive was 500 MPa and the thickness was 22 µm [33]. Since the shear transfer ratio of the adhesive is typically unknown, two shear transfer ratios, i.e., α = 1 and 0.5, were studied. For α = 1, the normalized maximum displacement increases initially with La, reaching a maximum value of 1.35 at La = 6 mm, and then decreases with La until La = 10 mm; its value then fluctuates slightly with La. Changing α to 0.5 reduces this fluctuation, making the trend agree better with the experimental results. The S41 parameters for these two cases are shown in Figs. 8(b) and 8(c). For α = 1, the S41 parameters do not have any resonance peak below the cutoff frequency fc for any bonding length less than 6 mm. A small resonance peak starts to appear when La = 7 mm. This resonance peak becomes more prominent as La increases. In addition, the resonance frequency shifts toward lower frequencies, and additional resonance peaks, albeit small, appear at higher frequencies as La increases. Since the tone-burst frequency was fixed at 300 kHz, the shift of the resonance peaks results in the amplitude fluctuation of the tone-burst response. In comparison, the resonance peaks are less prominent, i.e., they have lower amplitudes and broader bandwidths, for α = 0.5, and thus the fluctuation of the tone-burst response is less significant. For smaller La, however, the amplitude of the tone-burst response is not affected by the shear transfer ratio α. These results suggest that the bonding length La should be optimized to achieve the maximum tone-burst response, especially when the shear transfer ratio is large. Unfortunately, the shear transfer ratio of the adhesive is typically unknown. Measuring the shear transfer ratio from the resonance characteristics of the ultrasound coupler could be a subject of future study.

## 6 Conclusions

The analytical model of an ultrasound coupler, which couples longitudinal waves from one waveguide to the other via the adhesive layer, was developed. We discovered that the ultrasound coupler possesses a cutoff frequency, above which a dispersive longitudinal mode can propagate.
Treating the ultrasound coupler as a 4-port network, a semi-analytical model was implemented to calculate its broadband S-parameters. Parametric studies show that the ultrasound coupler displays very different resonant behaviors at frequencies below and above the cutoff frequency, and that the adhesive properties have strong influences on these behaviors. We also discovered that the unique behaviors of remotely bonded FBG ultrasound sensors can be attributed to the resonance of the ultrasound coupler. In the future, more detailed investigations of the resonant characteristics of ultrasound couplers will be carried out using noncontact ultrasound sensing techniques, with the aim of inversely determining the adhesive properties from the measured resonances. In addition, the source of the resonances and the relationship between the resonance frequency, the wave speed, and the bonding length will need more detailed investigation.

## Acknowledgment

This work is supported by the Office of Naval Research (Grant No. N00014-19-1-2098). The support and suggestions of the program manager, Dr. Ignacio Perez, are greatly appreciated. Professor Huang also thanks Drs. Wee and Peters at North Carolina State University for stimulating discussions.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. Data provided by a third party listed in Acknowledgment.

### Appendix

#### Remotely Bonded Fiber Bragg Grating Ultrasound Sensors

As shown in Fig. 9(a), an FBG is a periodic modulation of the refractive index inscribed in the core of a single-mode fiber [13]. There are two different ways to interrogate an FBG, i.e., based on the spectrum or the intensity. For spectrum-based interrogation, the FBG is connected to the broadband source of an optical spectrum analyzer (OSA) through an optical circulator, as shown in Fig. 9(b). The broadband light, guided inside the fiber core, is first routed toward the FBG by the circulator. When it encounters the FBG, a portion of the light is reflected at the interfaces with a refractive index change. Since the light reflected at different interfaces has different phases, the superposition of the light results in a reflection with a narrow wavelength λB, which is governed by the effective refractive index of the optical fiber neff and the grating period Λ, i.e., λB = 2 neffΛ. The reflected light is then re-directed by the circulator to the input of the OSA. The OSA outputs the spectrum of the reflected light, based on which the FBG wavelength λB can be determined. Ultrasound sensing, however, requires a much higher sampling rate than that of an OSA. To track the high-speed variation of the ultrasound wave, intensity-based interrogation schemes, such as the one shown in Fig. 9(c), were developed [59]. The interrogation light, emitted by a laser diode with a narrow wavelength λI, is tuned to the midpoint of the slope of the FBG reflection spectrum. The FBG spectrum shifts in response to the ultrasound wave, causing the intensity of the reflected light to fluctuate. This fluctuation can be measured using a photodiode to achieve the required high sampling rate. An FBG ultrasound sensor is typically bonded directly on a structure using adhesive [60] (see Fig. 9(e)). The deformation of the structure is transferred to the FBG via the adhesive layer.
As such, the grating period and in turn the FBG reflectance spectrum change with the deformation of the structure. Recently, researchers experimented bonding the optical fiber at a location away from the FBG [2833]. In these works, the ultrasound wave propagating in the structure is coupled to the optical fiber through the adhesive layer and then propagates along the optical fiber to reach the FBG sensor, as shown in Fig. 9(d). As such, the FBG sensor does not measure the deformation of the structure directly. Rather, it measures the displacement of the optical fiber that is coupled from the structure by the adhesive. In other words, “The FBG-inscribed optical fiber was used not only as an optical transmission line but also as an ultrasonic transmission line” [28]. Fig. 9 Fig. 9 Close modal ## References 1. Perez , I. , DiUlio , M. , Maley , S. , and Phan , N. , 2010 , “ Structural Health Management in the Navy ,” Struct. Heal. Monit. , 9 ( 3 ), pp. 199 207 . 10.1177/1475921710366498 2. Yuan , F. G. , 2016 , Structural Health Monitoring (SHM) in Aerospace Structures , , MA . 3. Farrar , C. R. , and Worden , K. , 2007 , “ An Introduction to Structural Health Monitoring ,” Philos. Trans. A Math. Phys. Eng. Sci. , 365 ( 1851 ), pp. 303 315 . 4. Chang , F.-K. , 2016 , Structural Health Monitoring 2013, Volume 1 and 2—A Roadmap to Intelligent Structures , DEStech Publications , Stanford, CA . 5. Giurgiutiu , V. , 2005 , “ Tuned Lamb Wave Excitation and Detection With Piezoelectric Wafer Active Sensors for Structural Health Monitoring ,” J. Intell. Mater. Syst. Struct. , 16 ( 4 ), pp. 291 305 . 10.1177/1045389X05050106 6. Raghavan , A. , and Cesnik , C. E. S. , 2007 , “ Review of Guided-Wave Structural Health Monitoring ,” Shock Vib. Dig. , 39 ( 2 ), pp. 91 114 . 10.1177/0583102406075428 7. López-Higuera , J. M. , Cobo , L. R. , Incera , A. Q. , and Cobo , A. , 2011 , “ Fiber Optic Sensors in Structural Health Monitoring ,” J. Light. Technol. , 29 ( 4 ), pp. 587 608 . 10.1109/JLT.2011.2106479 8. Chan , T. H. T. , Yu , L. , Tam , H. Y. , Ni , Y. Q. , Liu , S. Y. , Chung , W. H. , and Cheng , L. K. , 2006 , “ Fiber Bragg Grating Sensors for Structural Health Monitoring of Tsing Ma Bridge: Background and Experimental Observation ,” Eng. Struct. , 28 ( 5 ), pp. 648 659 . 10.1016/j.engstruct.2005.09.018 9. Guo , H. , Xiao , G. , , N. , and Yao , J. , 2011 , “ Fiber Optic Sensors for Structural Health Monitoring of Air Platforms ,” Sensors , 11 ( 4 ), pp. 3687 3705 . 10.3390/s110403687 10. Kahandawa , G. C. , Epaarachchi , J. , Wang , H. , and Lau , K. T. , 2012 , “ Use of FBG Sensors for SHM in Aerospace Structures ,” Photonic Sens. , 2 ( 3 ), pp. 203 214 . 10.1007/s13320-012-0065-4 11. Majumder , M. , , T. K. , Chakraborty , A. K. , Dasgupta , K. , and Bhattacharya , D. K. , 2008 , “ Fibre Bragg Gratings in Structural Health Monitoring—Present Status and Applications ,” Sens. Actuators, A , 147 ( 1 ), pp. 150 164 . 10.1016/j.sna.2008.04.008 12. Todd , M. D. , Nichols , J. M. , Trickey , S. T. , Seaver , M. , Nichols , C. J. , and Virgin , L. N. , 2007 , “ Bragg Grating-Based Fibre Optic Sensors in Structural Health Monitoring ,” Philos. Trans. A Math. Phys. Eng. Sci. , 365 ( 1851 ), pp. 317 343 . 10.1098/rsta.2006.1937 13. Yun-Jiang , R. , and Rao , Y. J. , 1997 , “ In-Fibre Bragg Grating Sensors ,” Meas. Sci. Technol. , 8 ( 4 ), pp. 355 375 . 10.1088/0957-0233/8/4/002 14. Jen , C. K. 
, 1985 , “ Similarities and Differences Between Fiber Acoustics and Fiber Optics ,” IEEE Ultrasonics Symposium , San Francisco, CA , Oct. 16–18 , pp. 1128 1133 . 15. Mbamou , D. N. , Helfmann , J. , Muller , G. , Brunk , G. , Stein , T. , and Desinger , K. , 2001 , “ A Theoretical Study on the Combined Application of Fibres for Optical and Acoustic Waveguides ,” Meas. Sci. Technol. , 12 ( 10 ), pp. 1631 1640 . 10.1088/0957-0233/12/10/303 16. Shibata , N. , Azuma , Y. , Horiguchi , T. , and Tateda , M. , 1988 , “ Identification of Longitudinal Acoustic Modes Guided in the Core Region of a Single-Mode Optical Fiber by Brillouin Gain Spectra Measurements ,” Opt. Lett. , 13 ( 7 ), p. 595 . 10.1364/OL.13.000595 17. Safaai-Jazi , A. , Jen , C. K. , and Farnell , G. W. , 1986 , “ Analysis of Weakly Guiding Fiber Acoustic Waveguide ,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control , 33 ( 1 ), pp. 59 68 . 10.1109/T-UFFC.1986.26797 18. Lee , J. R. , and Tsuda , H. , 2006 , “ Sensor Application of Fibre Ultrasonic Waveguide ,” Meas. Sci. Technol. , 17 ( 4 ), pp. 645 652 . 10.1088/0957-0233/17/4/006 19. Lim , S. H. , Oh , I. K. , and Lee , J. R. , 2009 , “ Ultrasonic Active Fiber Sensor Based on Pulse-Echo Method ,” J. Intell. Mater. Syst. Struct. , 20 ( 9 ), pp. 1035 1043 . 10.1177/1045389X08098769 20. Fukuma , N. , Kubota , K. , Nakamura , K. , and Ueha , S. , 2006 , “ An Interrogator for Fibre Bragg Grating Sensors Using an Ultrasonically Induced Long-Period Optical Fibre Grating ,” Meas. Sci. Technol. , 17 ( 5 ), pp. 1046 1051 . 10.1088/0957-0233/17/5/S18 21. Leal , W. A. , Carneiro , M. B. R. , Freitas , T. A. M. G. , Marcondes , C. B. , and Ribeiro , R. M. , 2018 , “ Low-Frequency Detection of Acoustic Signals Using Fiber as an Ultrasonic Guide With a Distant in-Fiber Bragg Grating ,” Microw. Opt. Technol. Lett. , 60 ( 4 ), pp. 813 817 . 10.1002/mop.31061 22. Quero , G. , Crescitelli , A. , Consales , M. , Pisco , M. , Cutolo , A. , Galdi , V. , and Cusano , A. , 2012 , “ Resonant Hydrophones Based on Coated Fiber Bragg Gratings ,” J. Light. Technol. , 30 ( 15 ), pp. 2472 2481 . 10.1109/jlt.2012.2200233 23. Atkinson , D. , and Hayward , G. , 2001 , “ The Generation and Detection of Longitudinal Guided Waves in Thin Fibers Using a Conical Transformer ,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control , 48 ( 4 ), pp. 1046 1053 . 10.1109/58.935721 24. Shah , H. , Balasubramaniam , K. , and Rajagopal , P. , 2017 , “ In-Situ Process- and Online Structural Health-Monitoring of Composites Using Embedded Acoustic Waveguide Sensors ,” J. Phys. Commun. , 1 ( 5 ), p. 055004 . 10.1088/2399-6528/aa8bfa 25. Atkinson , D. , and Hayward , G. , 1998 , “ Fibre Waveguide Transducers for Lamb Wave NDE ,” IEE Proc. Sci. Meas. Technol. , 145 ( 5 ), pp. 260 268 . 10.1049/ip-smt:19982214 26. Neill , I. T. , Oppenheim , I. J. , and Greve , D. W. , 2007 , “ A Wire-Guided Transducer for Acoustic Emission Sensing ,” Proc. SPIE 6529, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems , 652913, Apr. 18. http://dx.doi.org/10.1117/12.715358 27. Vogt , T. , Lowe , M. , and Cawley , P. , 2003 , “ Cure Monitoring Using Ultrasonic Guided Waves in Wires ,” J. Acoust. Soc. Am. , 114 ( 3 ), pp. 1303 1313 . 10.1121/1.1589751 28. Tsuda , H. , Sato , E. , Nakajima , T. , Nakamura , H. , Arakawa , T. , Shiono , H. , Minato , M. , Kurabayashi , H. , and Sato , A. , 2009 , “ Acoustic Emission Measurement Using a Strain-Insensitive Fiber Bragg Grating Sensor Under Varying Load Conditions ,” Opt. Lett. 
, 34 ( 19 ), p. 2942 . 10.1364/OL.34.002942 29. Wee , J. , Wells , B. , Hackeny , D. , , P. , and Peters , K. , 2016 , “ Increasing Signal Amplitude in Fiber Bragg Grating Detection of Lamb Waves Using Remote Bonding ,” Appl. Opt. , 55 ( 21 ), pp. 5564 5569 . 10.1364/AO.55.005564 30. Davis , C. , Norman , P. , Rajic , N. , and Bernier , M. , 2018 , “ Remote Sensing of Lamb Waves Using Optical Fibres—An Investigation of Modal Composition ,” J. Light. Technol. , 36 ( 14 ), pp. 2820 2826 . 10.1109/jlt.2018.2816563 31. Wee , J. , Hackney , D. , , P. , and Peters , K. , 2017 , “ Bi-Directional Ultrasonic Wave Coupling to FBGs in Continuously Bonded Optical Fiber Sensing ,” Appl. Opt. , 56 ( 25 ), pp. 7262 7268 . 10.1364/AO.56.007262 32. Wee , J. , Hackney , D. , , P. , and Peters , K. , 2018 , “ Experimental Study on Directionality of Ultrasonic Wave Coupling Using Surface-Bonded Fiber Bragg Grating Sensors ,” J. Light. Technol. , 36 ( 4 ), pp. 932 938 . 10.1109/JLT.2017.2769960 33. Wee , J. , Hackney , D. , and Peters , K. , 2019 , “ Preferential Directional Coupling to Ultrasonic Sensor Using Adhesive Tape ,” Opt. Eng. , 58 ( 7 ), p. 1 . 10.1117/1.OE.58.7.072003 34. Wu , Q. , Yu , F. , Okabe , Y. , and Kobayashi , S. , 2015 , “ Application of a Novel Optical Fiber Sensor to Detection of Acoustic Emissions by Various Damages in CFRP Laminates ,” Smart Mater. Struct. , 24 ( 1 ), p. 015011 . 10.1088/0964-1726/24/1/015011 35. Yu , F. , Okabe , Y. , Wu , Q. , and Shigeta , N. , 2016 , “ Fiber-Optic Sensor-Based Remote Acoustic Emission Measurement of Composites ,” Smart Mater. Struct. , 25 ( 10 ), p. 105033 . 10.1088/0964-1726/25/10/105033. 36. Wee , J. , Hackney , D. A. , , P. D. , and Peters , K. J. , 2017 , “ Simulating Increased Lamb Wave Detection Sensitivity of Surface Bonded Fiber Bragg Grating ,” Smart Mater. Struct. , 26 ( 4 ), p. 1016808 . 10.1088/1361-665x/aa646b 37. Islam , M. M. M. , and Huang , H. , 2014 , “ Understanding the Effects of Adhesive Layer on the Electromechanical Impedance (EMI) of Bonded Piezoelectric Wafer Transducer ,” Smart Mater. Struct. , 23 ( 12 ), p. 125037 . 10.1088/0964-1726/23/12/125037 38. Islam , M. M. M. , and Huang , H. , 2016 , “ Effects of Adhesive Thickness on the Lamb Wave Pitch-Catch Signal Using Bonded Peizoelectric Wafer Transducers ,” Smart Mater. Struct. , 25 ( 8 ), p. 085014 . 10.1088/0964-1726/25/8/085014 39. Crawley , E. F. , De Luis , J. , and Luisj , J. D. , 1987 , “ Use of Piezoelectric Actuators as Elements of Intelligent Structures ,” AIAA J. , 25 ( 10 ), pp. 1373 1385 . 10.2514/3.9792 40. Yan , W. , Lim , C. W. , Cai , J. B. , and Chen , W. Q. , 2007 , “ An Electromechanical Impedance Approach for Quantitative Damage Detection in Timoshenko Beams With Piezoelectric Patches ,” Smart Mater. Struct. , 16 ( 4 ), pp. 1390 1400 . 10.1088/0964-1726/16/4/054 41. Rabinovitch , O. , and Vinson , J. R. , 2002 , “ Adhesive Layer Effects in Surface-Mounted Piezoelectric Actuators ,” J. Intell. Mater. Syst. Struct. , 13 ( 11 ), pp. 689 704 . 10.1177/1045389X02013011001 42. M. M. , 1987 , , Springer, New York , New York . 43. Ha , S. , and Chang , F.-K. , 2010 , “ Adhesive Interface Layer Effects in PZT-Induced Lamb Wave Propagation ,” Smart Mater. Struct. , 19 ( 2 ), p. 025006 . 10.1088/0964-1726/19/2/025006 44. Rao , S. S. , 2007 , Vibration of Continuous System , John Willey and Sons, Inc. , New Jersey . 45. Pao , Y.-H. , Keh , D.-C. , and Howard , S. M. , 1999 , “ Dynamic Response and Wave Propagation in Plane Trusses and Frames ,” AIAA J. 
, 37 ( 5 ), pp. 594 603 . 10.2514/2.778 46. Howard , S. M. , and Pao , Y.-H. , 1998 , “ Analysis and Experiments on Stress Waves in Planar Trusses ,” J. Eng. Mech. , 124 ( 8 ), pp. 884 891 . 10.1061/(ASCE)0733-9399(1998)124:8(884) 47. Huang , H. , and Bednorz , T. , 2014 , “ Introducing S-Parameters for Ultrasound-Based Structural Health Monitoring ,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control , 61 ( 11 ), pp. 1856 1863 . 10.1109/TUFFC.2014.006556 48. Zahedi , F. , and Huang , H. , 2017 , “ Time–Frequency Analysis of Electro-Mechanical Impedance (EMI) Signature for Physics-Based Damage Detections Using Piezoelectric Wafer Active Sensor (PWAS) ,” Smart Mater. Struct. , 26 ( 5 ), p. 055010 . 10.1088/1361-665x/aa64c0 49. Huang , H. , 2020 , “ Resonances of Surface-Bonded Piezoelectric Wafer Active Transducers and Their Effects on the S0 Pitch-Catch Signal ,” Proc. SPIE11379, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2020, 113790I , Apr. 23, p. 18 . http://dx.doi.org/10.1117/12.2559312 50. Morvan , B. , Wilkie-Chancellier , N. , Duflo , H. , Tinel , A. , and Duclos , J. , 2003 , “ Lamb Wave Reflection at the Free Edge of a Plate ,” J. Acoust. Soc. Am. , 113 ( 3 ), pp. 1417 1425 . 10.1121/1.1539521 51. Auld , B. A. , and Tsao , E. M. , 1977 , “ A Variational Analysis of Edge Resonance in a Semi-Infinite Plate ,” IEEE Trans. Sonics Ultrason. , 24 ( 5 ), pp. 317 326 . 10.1109/T-SU.1977.30952 52. Puthillath , P. , Galan , J. M. , Ren , B. , Lissenden , C. J. , and Rose , J. L. , 2013 , “ Ultrasonic Guided Wave Propagation Across Waveguide Transitions: Energy Transfer and Mode Conversion ,” J. Acoust. Soc. Am. , 133 ( 5 ), pp. 2624 2633 . 10.1121/1.4795805 53. Schaal , C. , and Mal , A. , 2016 , “ Lamb Wave Propagation in a Plate With Step Discontinuities ,” Wave Motion , 66 , pp. 177 189 . 10.1016/j.wavemoti.2016.06.012 54. Mallet , L. , Lee , B. C. , Staszewski , W. J. , and Scarpa , F. , 2004 , “ Structural Health Monitoring Using Scanning Laser Vibrometry: II. Lamb Waves for Damage Detection ,” Smart Mater. Struct. , 13 ( 2 ), pp. 261 269 . 10.1088/0964-1726/13/2/003 55. Dewhurst , R. J. , Edwards , C. , and Palmer , S. B. , 1986 , “ Noncontact Detection of Surface-Breaking Cracks Using a Laser Acoustic Source and an Electromagnetic Acoustic Receiver ,” Appl. Phys. Lett. , 49 ( 7 ), pp. 374 376 . 10.1063/1.97591 56. Edwards , R. S. , Dutton , B. , Clough , A. R. , and Rosli , M. H. , 2011 , “ Enhancement of Ultrasonic Surface Waves at Wedge Tips and Angled Defects ,” Appl. Phys. Lett. , 99 ( 9 ), p. 9 . 10.1063/1.3629772 57. Ziaja-Sujdak , A. , Cheng , L. , , R. , and Staszewski , W. J. , 2018 , “ Near-Field Wave Enhancement and ‘Quasi-Surface’ Longitudinal Waves in a Segmented Thick-Walled Hollow Cylindrical Waveguide ,” Struct. Heal. Monit. , 17 ( 2 ), pp. 346 362 . 10.1177/1475921717694505 58. Boonsang , S. , 2009 , “ Photoacoustic Generation Mechanisms and Measurement Systems for Biomedical Applications ,” Int. J. Appl. Biomed. Eng. , 2 ( 1 ), pp. 17 23 . 59. Fomitchov , P. A. , and Krishnaswamy , S. , 2003 , “ Response of a Fiber Bragg Grating Ultrasonic Sensor ,” Opt. Eng. , 42 ( 4 ), pp. 956 963 . 10.1117/1.1556372 60. Betz , D. C. , Thursby , G. , Culshaw , B. , and Staszewski , W. J. , 2006 , “ Identification of Structural Damage Using Multifunctional Bragg Grating Sensors: I. Theory and Implementation ,” Smart Mater. Struct. , 15 ( 5 ), pp. 1305 1312 . 10.1088/0964-1726/15/5/020
2023-01-30 15:08:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48056483268737793, "perplexity": 2872.5036460562096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00565.warc.gz"}
https://testbook.com/question-answer/a-second-order-control-system-exhibits-100-oversh--607d67b7cb837d4255f01ef0
# A second order control system exhibits 100% overshoot. Its damping ratio is:

This question was previously asked in UJVNL AE EE 2016 Official Paper

1. Less than 1
2. Equal to 1
3. Greater than 1
4. Equal to zero

## Answer (Detailed Solution Below)

Option 4 : Equal to zero

## Detailed Solution

Concept:

The transfer function of the standard second-order system is:

$$TF = \frac{C(s)}{R(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

Characteristic equation: $$s^2 + 2\zeta\omega_n s + \omega_n^2 = 0$$

where ζ is the damping ratio and ωn is the undamped natural frequency. The maximum peak overshoot is

$$M_p = e^{\frac{-\zeta\pi}{\sqrt{1-\zeta^2}}}$$   ----(1)

Calculation:

Given: Mp = 100% = 1

From Eq. (1), $$\ln 1 = \frac{-\zeta\pi}{\sqrt{1-\zeta^2}}$$, and since ln 1 = 0,

ζ = 0

Note: Mp is the maximum peak overshoot of the closed-loop transfer function, and $$M_p \propto \frac{1}{\zeta}$$
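A small numerical sketch of the Mp–ζ relation used in the solution, Eq. (1); the function names below are illustrative only.

```python
import math

def peak_overshoot(zeta):
    """Mp = exp(-zeta*pi/sqrt(1 - zeta^2)) for an underdamped second-order system, 0 <= zeta < 1."""
    return math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))

def damping_from_overshoot(mp):
    """Invert Eq. (1): zeta = -ln(Mp) / sqrt(pi^2 + ln(Mp)^2), valid for 0 < Mp <= 1."""
    if mp >= 1.0:                 # Mp = 100% => ln(1) = 0 => zeta = 0
        return 0.0
    l = math.log(mp)
    return -l / math.sqrt(math.pi ** 2 + l ** 2)

print(peak_overshoot(0.0))                        # 1.0, i.e., 100% overshoot for zero damping
print(damping_from_overshoot(1.0))                # 0.0, matching Option 4
print(round(damping_from_overshoot(0.163), 2))    # ~0.5 for a 16.3% overshoot
```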
2021-09-23 20:46:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7110728025436401, "perplexity": 7386.461663193699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00325.warc.gz"}
https://nubtrek.com/maths/vector-properties/properties-vector-cross-product/vector-cross-product-of-orthogonal-vectors
### Cross Product of Orthogonal Vectors

When are two vectors called 'orthogonal' vectors? 'ortho' + 'gonia' means 'right' + 'angled'.

• have a 90^@ angle between them
• perpendicular to each other
• the vectors are right-angled
• all the above

The answer is 'All the above'.

Given the definition of the cross product as vec p xx vec q = |vec p||vec q| sin theta hat n, what is vec p xx vec q when the given vectors are orthogonal?

• |p||q| sin 90^@ hat n
• (|p||q| xx 1) hat n
• |p||q| hat n
• all the above

The answer is 'All the above'. The magnitude of the cross product of orthogonal vectors is the product of the magnitudes of the vectors.

Cross Product of Orthogonal Vectors: For any pair of orthogonal vectors vec p, vec q in bbb V, |vec p xx vec q| = |p||q|
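A short numerical check of the statement |vec p xx vec q| = |p||q| for orthogonal vectors, using an arbitrary pair of perpendicular vectors chosen only for illustration:

```python
import numpy as np

p = np.array([3.0, 0.0, 4.0])
q = np.array([0.0, 2.0, 0.0])
assert np.isclose(np.dot(p, q), 0.0)          # orthogonal: dot product is zero

lhs = np.linalg.norm(np.cross(p, q))          # |p x q|
rhs = np.linalg.norm(p) * np.linalg.norm(q)   # |p||q|, since sin(90 deg) = 1
print(lhs, rhs)                               # 10.0 10.0
assert np.isclose(lhs, rhs)
```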
2018-12-12 10:03:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31627461314201355, "perplexity": 10760.723190669922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00509.warc.gz"}
https://solvedlib.com/n/environmental-env-202-environmental-engineering-earth-sciences,5470521
# Environmental ENV 202 Environmental Engineering - Earth Sciences Department, Engineering Systems I: Analytical & Computational Analysis

###### Question:

CLASS HOMEWORK: Evaluate the following.

1. Evaluate the following definite integral: ∫ x dx — 1/24, 1/2, (D) 2
2. What is the standard deviation of 1, 4 and 7? — 2.5, (B) 3.0, 6.0
3. Which is a true statement about the two vectors V1 = i + 2j + k and V2 = 1 − 3j − 7k? (A) Both vectors pass through (0, −1, 6) (B) The vectors are parallel. (C) The vectors are orthogonal. (D) The angle between the vectors is 17.48°
4. What is the area bounded by y = 0, y = 7e', x = 0 and x = 1? (A) … (D) 3.4
5. A function of x is given below: y = x^4 − 15x^2 + 2x + 5. Which (x, y) point is a relative maximum or minimum? (A) (−2, −1) (B) (−2, −2) (C) (2, −2) (D) (−1, −1.75)
6. The slope of a line is 2/3. The slope of a second line is … The lines intersect at the point (3, 1). What is the acute angle between the lines? 50° (B) 60° 80°
2023-04-01 22:33:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3754994869232178, "perplexity": 6458.92655394528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00192.warc.gz"}
https://testbook.com/question-answer/which-of-the-following-expression-is-the-correct-f--5e84421ff60d5d3f318010e4
# Which of the following expressions is the correct formula for the deflection sensitivity 'S' of a CRT, if
D = deflection on the fluorescent screen
L = distance from the center of the deflection plates to the screen
ld = effective length of the deflection plates
d = distance between the deflection plates
Ed = potential between the deflecting plates
Ea = accelerating voltage

This question was previously asked in UGC NET Paper 2 (Electronic Science) December 2019 Official Paper

1. $$S = \frac{L\,l_d}{2d^2 E_a}$$
2. $$S = \frac{2d E_a}{L\,l_d\,n}$$
3. $$S = \frac{L\,l_d}{2d E_d}$$
4. $$S = \frac{L\,l_d}{2d E_a}$$

Option 4 : $$S = \frac{L\,l_d}{2d E_a}$$

## Detailed Solution

The deflection sensitivity (S) of a CRT (cathode ray tube) is defined as the deflection (in meters) on the fluorescent screen (D) per volt of the deflecting voltage (Ed), i.e.

$$S = \frac{D}{E_d}$$       ---(1)

Electrostatic deflection (D) is a method of aligning the path of charged particles by applying an electric field between deflecting plates. Mathematically it is calculated as:

$$D = \frac{L\,l_d\,E_d}{2d\,E_a}$$

Using Equation (1), the deflection sensitivity is derived as:

$$S = \frac{L\,l_d}{2d\,E_a}$$
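A small numerical sketch of the derived formula S = L·ld/(2d·Ea); the plate geometry and voltages below are assumed example values, not part of the question.

```python
# Deflection sensitivity S = D/E_d = (L * l_d) / (2 * d * E_a)
L, l_d, d = 0.20, 0.02, 0.005   # m: plate-centre-to-screen distance, plate length, plate separation (assumed)
E_a, E_d = 2000.0, 50.0         # V: accelerating and deflecting voltages (assumed)

S = (L * l_d) / (2.0 * d * E_a)            # deflection sensitivity, m/V
D = S * E_d                                # resulting deflection on the screen, m
print(f"S = {S*1e3:.2f} mm/V, D = {D*1e3:.1f} mm")   # S = 0.20 mm/V, D = 10.0 mm
```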
2021-10-27 10:51:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6611388921737671, "perplexity": 3795.9688289899973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00687.warc.gz"}
https://www.codecademy.com/courses/discrete-math/lessons/sequences-and-summations/exercises/sum-of-an-arithmetic-progression
This famous arithmetic sequence (progression) problem, summing the integers 1 through 100, was supposedly solved by the mathematician Gauss in his childhood. In essence, one solution involves starting at the center of the 100 numbers and adding the two terms (e.g., 50 + 51), and then spreading out in both directions. The learner soon realizes that each pair is equal to 101 and that there are 50 pairs. In general, forms like this one are challenging to discover.

An infinite arithmetic progression grows without bounds. A finite arithmetic progression can sometimes be represented by an alternate form that simplifies the calculation. For example, the Gauss approach could be written as:

$(number\ of\ pairs)\cdot(sum\ of\ each\ pair) = \frac{n}{2}(n +1)$

The algebraic portion after the “=” sign is called a “closed form.” The discovery of closed forms is nontrivial and outside our scope in this lesson. Simply put, the sum of the first one hundred positive integers looks like this:

$\sum\limits_{i=1}^{n}i=\frac{n(n+1)}{2}=\frac{100(101)}{2}=5050$

We can always sum an arithmetic sequence using the summation notation and working through the arithmetic.

### Instructions

1. Given: $\sum\limits_{i=1}^{7}i$ What is the sum of the seven values? Assign your answer to checkpoint_1 in the code editor.
2. What is the partial sum of this arithmetic sequence? $\sum\limits_{i=1}^{5}(2i+1)$ Assign your answer to checkpoint_2 in the code editor.
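A sketch of the two instruction checkpoints, computed both by direct summation and, for the first, by the closed form; the variable names checkpoint_1 and checkpoint_2 follow the exercise, while the exercise's own code editor is not shown here.

```python
def gauss_sum(n):
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

assert gauss_sum(100) == sum(range(1, 101)) == 5050

# Instruction 1: sum of i for i = 1..7
checkpoint_1 = sum(range(1, 8))                      # 28, equal to gauss_sum(7)

# Instruction 2: sum of (2i + 1) for i = 1..5
checkpoint_2 = sum(2 * i + 1 for i in range(1, 6))   # 3 + 5 + 7 + 9 + 11 = 35

print(checkpoint_1, checkpoint_2)
```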
2022-01-28 17:18:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909915030002594, "perplexity": 506.8519806002634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306301.52/warc/CC-MAIN-20220128152530-20220128182530-00571.warc.gz"}
https://read.dukeupress.edu/environmental-humanities/article/2/1/1/61632/Jane-Smiley-s-A-Thousand-Acres-1991-and-Archival?searchresult=1
## Abstract This article blurs the boundaries of literature, agriculture, public history, grassroots political activism, and public policymaking in order to problematize the current eco-cosmopolitan trajectory of ecocritical theory, a trajectory promulgated by Ursula K. Heise in important essays and books. Foregrounding the voices of grassroots environmentalists as well as the public-relations campaigns of multinational agribusiness trade groups, materials collected in the special collections of Iowa State University, the article resituates Smiley's prizewinning novel and offers a complication of current conceptualizations of eco-cosmopolitanism. The article aims to show the struggles of rural people to embrace a planetary consciousness—a global awareness that can paradoxically foreground as well as participate in the continued ecological devastation of the landscapes these activists hold dear. These local voices underscore the challenges human subjects face in articulating and narrating environmental relationships—even despite their intimate proximity to these landscapes. Just as Thousand Acres's mastery of a complex environmentalist voice is hard won, so too is that of dozens of rural people across the world. The challenges they face demand the close attention of the environmental humanities, not only to deeply engage appropriate texts, but to engage them with a framework that expands the orchestra and zeros in on the critical problems of global agriculture, planetary health, and human rights. Jane Smiley notes that her novel A Thousand Acres was “precipitated” by “a few accidents.” Not an accident like the one that befalls the novel's Harold Clark, a neighbor farmer who is blinded by anhydrous ammonia, or Pete Lewis, who gets drunk, drives his truck into a nearby quarry, and drowns, but an accident that involved “a visit to McDonald's in Delhi, New York, in the summer of 1987.” This particular golden arches was “decorated with pictures of the Midwest,” Smiley remembers, “and the one in the booth we sat in had a man standing in a barn in what seemed to be wheat country.” She began telling her husband her idea for her new work, a rewriting of Shakespeare's King Lear. “He said, ‘You could set it on a farm in Kansas,’ and I said, ‘I don't know anything about Kansas.’ Pooh. Dismissing him!”1 Dismissing him indeed: Smiley's reimagination of King Lear, transported to Iowa, topped bestseller lists, snagged Pulitzer and National Book Critics Circle prizes, and became a Touchstone Pictures film starring Jessica Lange, Michelle Pfeiffer, and Colin Firth. Within two decades, this story of an Iowa family's disintegration in the late 1970s over the division of its patriarch's estate and subsequent revelations of sexual abuse had spawned a rich critical apparatus inquisitive of the novel's treatment of gender, violence and trauma, adaptation, and human and ecological interpenetrations.2 This uncanny moment at the novel's genesis is more provocative than it might seem at first glance—and certainly so as ecocriticism with renewed vigor rethinks an ethic of proximity through the prism of eco-cosmopolitanism. This eco-cosmopolitan impulse is what Ursula K. Heise defines as “environmental world citizenship” that “attempt[s] to envision individuals and groups as part of planetary ‘imagined communities’ of both human and nonhuman kinds.”3 That Smiley imagines setting King Lear in the U.S. 
Midwest as she and her husband sit between bucolic images of Midwestern farm country and the metamorphosis of its Agricultural productivity into Big Macs and Coke plants her in a similarly liminal threshold. It is a threshold that paradoxically entrenches yet undermines U.S. imperialism at home and abroad and confronts omnivores-always-on-the-move a thousand miles away with the ecological impact and carbon footprint of their meat, potatoes, and liquid corn diet and the social injustice of the impoverishment of farmers and food-service workers alike. What's more, it exposes the insufficiency of environmental world citizenship, constrained as it is by the extranational power of global trade organizations, the futility of political and environmentalist expression through individual purchase-power, and the absence of functioning global environmental regulation. Grounded in my reading of the novel alongside archives of Agricultural history, science, and economics, this essay aims to show how A Thousand Acres reveals both the possibilities and limitations of eco-cosmopolitanism—and consequently to underscore the primacy of environmental humanities in enriching environmental discourse. In profound ways, the novel born in this upstate New York McDonalds exemplifies an eco-cosmopolitan narrative, centered on the eco-cosmopolitan consciousness of its central character, Ginny Cook Smith. Ginny elasticizes her sensual comprehension of her immediate natural surroundings with her capacity to incorporate within her known world an unseen complexity of human and nonhuman systems, just as Heise proposes that “[e]co-cosmopolitanism reaches toward ... the ‘more-than-human world’—the realm of nonhuman species, but also that of connectedness with both animate and inanimate networks of influence and exchange.”4 In an oft-quoted passage, Smiley describes Ginny's capacious imagination of the landscape she inhabits, a landscape crosshatched by complex geological and biological interactions across an infinite swath of planetary history: For millennia, water lay over the land. Untold generations of water plants, birds, animals, insects, lived, shed bits of themselves, and died. I used to imagine how it all drifted down, lazily, in the warm, soupy water—leaves, seeds, feathers, scales, flesh, bones, petals, pollen—then mixed with the saturated soil below and became, itself, soil. I used to like to imagine the millions of birds darkening the sunset, settling the sloughs for the night, or a breeding season, the riot of their cries and chirps, the rushing hough-shhh of twice millions of wings, the swish of their twiglike legs or paddling feet in the water, sounds barely audible until amplified by millions. And the sloughs would be teeming with fish: shiners, suckers, pumpkinseeds, sunfish, minnows, nothing special, but millions or billions of them. I liked to imagine them because they were the soil, and the soil was the treasure, thicker, richer, more alive with a past and future abundance of life than any soil anywhere.5 The richness of this passage—from Ginny's celebration of generation, creation, and teeming plenitude to its felicity of intermixture, coupling, and partnership, paradoxically in a novel that traffics otherwise in destruction, decoupling, and alienation—epitomizes the potential richness in an eco-cosmopolitan worldview. 
As Heise writes, such a perspective has the capacity of transcending the ‘ethic of proximity’ so as to investigate by what means individuals and groups in specific cultural contexts have succeeded in envisioning themselves in similarly concrete fashion as part of the global biosphere, or by what means they might be enabled to do so; at the same time, as the work of Vandana Shiva, among others, highlights, such a perspective needs to be attentive to the political frameworks in which communities begin to see themselves as part of a planetary community, and what power struggles such visions might be designed to hide or legitimate.6 As Heise's and Ginny's declarations suggest, they possess the suppleness of perspective to celebrate both the local as well as the global, the individual and the communal, and the human and the more-than-human. Their words become beacons in an increasingly urgent search for global solutions to planetary ruin; they offer, as Heise hopes, a revitalizing sense of a “thorough understanding of the cultural as well as the ecological frameworks”7 that will guide future environmental policymaking in surviving a future dominated by increasingly dire forecasts of planetary health. However, this communion of A Thousand Acres and contemporary ecocritical theory fractures when it is considered alongside the archival remainders of grassroots environmentalist coalitions and agribusiness trade groups operating in Iowa during the 1970s and 1980s.8 Would that it weren't so, I say. I cannot help but think that tracing new histories of an eco-cosmopolitan aesthetic, finding in those literary and extraliterary histories an eco-cosmopolitan resonance within specifically agrarian texts, and unveiling in those U.S. agrarian narratives a conduit between myopic provinciality and transnational consciousness would achieve the pinnacle of Heise's eco-cosmopolitan project. Furthermore, I cannot help but think, the identification and nourishment of “the stories and images of a new kind of eco-cosmopolitan environmentalism that might be able effectively to engage with steadily increasing patterns of global connectivity”9 could inspire a just, sustainable, inclusive, planetarily bioregionalist Green Revolution. As the archival materials considered in this essay demonstrate, however, we can presume neither a functioning environmentalist “ethic of proximity” nor an ecologically sustainable cosmopolitanism. The letters and pamphlets of grassroots organizing against roadbuilding, industrial pollution, and agribusiness reveal the a priori fallibility of the “ethic of proximity” Heise takes for granted: grassroots environmentalism might seek to celebrate local places, that is, but its spokespeople seem unable to conceptualize their resistance except in terms that accede to the impoverishment of those places. What's more, the trade journals and mission statements of industry lobbies designed to oppose such resistance—materials collected alongside those of pro-farmer, pro-environment groups at Iowa State University—reveal the canny means by which an eco-cosmopolitan ethos might be co-opted to legitimate continued capitalist petro-industrial exploitations of the environment. Put simply, Smiley's highly regarded novel and the discursive context of its late 1970s/early 1980s setting explore profound challenges to the dissemination and operationalization of a global eco-cosmopolitan perspective.
The “middle ground” of agriculture that the novel and archives problematize confronts us with the ostensible simplicity of local ecologies and cultures at the same time that these texts document the thorny complexities of global capital and human migrations. What's more, they wrestle with the allure of an autochthonous myth of agrarian identity and the siren's song of a cosmopolitan escape, an escape better managed by more technocrats and fewer farmers. In the end, A Thousand Acres and its archival contexts compel a renewed introspection of U.S. ecocriticism and its quickening eco-cosmopolitan trajectory. They show us that looking outside our usual aesthetic and critical discourses for fresh voices and alternative praxes can help us see the paradoxes we must confront and untangle. More important, they can lead us to inspired perspectives that might help us realize and refine the eco-cosmopolitan vision Heise and others now imagine.

## Negotiations of Agriculture and Environmentalism at the Grassroots

The most famous iteration of Midwestern agricultural activism might be country singer Willie Nelson's Farm Aid, which held its first concert in 1985. It represents an obvious context for A Thousand Acres, borne out in copious materials archived at Iowa State University. Both Farm Aid and A Thousand Acres focus on family farms: the former aims to celebrate family farmers and fundraise to help them stay on the land, while the latter lays bare the violence that can happen at the heart of a farm family. What's more, both couple family farms with environmental sustainability. Letters sent by Farm Aid to Midwestern homes in the months and years surrounding the publication of A Thousand Acres declare that not only do “[f]amily farmers hold the rural economy of our country together”; “[t]hey also protect the quality of our food from the dangers of chemicals and pollutants used in large factory farms. We need family farmers for more then [sic] nostalgia.”10 In another letter, Nelson reiterates the environmental consequences of industrial agriculture: “Factory farms put family farmers out of business. ... Factory farms pollute our rivers and streams and pollute our water supplies. ... Factory farms treat animals inhumanely. ... This is not a good way to grow our food!”11 In letter after letter, handout after handout, Farm Aid extols the harmony of the family farm—in all its clichéd glory, with all its masculinist underpinnings—as a bulwark against continued ecological devastation, just as A Thousand Acres exposes the trauma of the family farm and the environmental degradation it has spawned. Indeed, despite the untenability of the family at its center, A Thousand Acres underscores that families who cannot hold together their own domestic and familial economies cannot create jobs, protect the quality of foods, or reject the false promises of agribusiness. The Heartland Corporation buys out the Cooks thanks to pressure exerted by the agribusiness corporate order's collusion with banks and machinery companies, for example, and the women's cancers originate in a polluted water supply. Ginny, whose environmental awareness makes visible and contemptible the pesticides and herbicides that have ravaged her body and her landscape, can and will speak for the environment. But thanks to her history of sexual abuse, remembered at a time of economic instability, she becomes instead a rural refugee, her knowledge and values unutilized and unappreciated in the anonymity of St.
Paul, Minnesota—precisely the result that Farm Aid's powerful advocacy seeks to prevent. But Smiley's novel and Iowa State's archives offer the opportunity to explore new, more granular perspectives from the grassroots of environmental and social justice activism. Ginny's capacious imagination of agriculture as a meditation on nature resonates with the political reform efforts spearheaded by members of the Farm Land Preservation Association, an Iowa group formed in 1976 to oppose the construction of diagonal roads, roads cut across farms and fields in order to reduce travel times and distances for shippers and tourists. During a time of increased Agricultural consolidation, industrialization, and chemicalization, the association's environmental consciousness, much like Ginny's, challenges the tendency of these forces to render “nature” invisible and archaic. In its lobbying efforts against Interstate 380, the association produced and distributed numerous accounts of the preservation of nature through the preservation of farming. Carl H. Munn, for example, complains that the state Department of Transportation's plans will endanger the wild birds and animals he enjoys watching: On any given summer day, I can see probably every bird pictured in the AUDUBON SOCIETY HANDBOOK. From the red-headed woodpecker to the Goldfinch [sic], from the barn swallow to the hawk. ... [C]an the DOT guarantee me the continued tranquility of those birds? From where I live, I can see the rare sight of the wild as it was once and for what existence it still has, is yet today. I see the deer as they graze, the coyotes as they hunt and even the mother fox and her young as they bathe in the sun. Can the Iowa DOT tell me that they have researched and found that this environment will not be damaged or destroyed?12 Similarly, Laura Mae Hicks celebrates the sanctity of a creek threatened by the highway plan: We moved here in 1941 when I was eleven and I have always loved it. The creek is one of my most treasured parts. It is quiet by the creek. The fields spread out on all sides. One is alone with nature, the grass, the little bugs, the bubbling creek, the vast sky overhead. Far away one can see one's own house & barn, and one can faintly hear the hum of cars on the distant road. A bird flies overhead. How can I describe perfect happiness?13 Munn's vision celebrates wildness, harmony, and order, a vision threatened by the state bureaucracy's incapacity for environmental appreciation and valuation. In a similar way, Hicks links nostalgia and family history with appreciation of nature, finding in the creek a retreat where modernity is a distant “hum.” Both, like Ginny, see the long creation of the ecology they inhabit as threatened with instantaneous, irrevocable change. As their reflections on the ecological richness of their landscapes suggest, Ginny, Munn, and Hicks seek resolution through narrative: by telling their stories, by speaking what they see and hear, they seek nuance in the unfolding of technological modernity. Moreover, they speak a common language of the landscape they share—a language that inscribes delocalizing industrialization (tile lines, transportation departments, automobiles) in the landscape even as it seeks some compromise. 
Even if outright rejection of the highway is impossible, perhaps recognition of their landscapes and the relationships they have created with their landscapes will assuage them, exemplifying Lawrence Buell's powerful theorization of the pastoral as both institutionally sponsored and counterinstitutional, a trope that can strengthen the status quo even as it assails it.14 To be sure, the Association's conflation of agricultural preservation with ecological preservation is troubling, especially in that members cannot imagine an ecology of Iowa that is not a monoculture of corn, pigs, or cattle. Nonetheless, members articulate concern for the wellbeing of people the world over. “Farm land is our most valuable resource,” Ross L. Wiley writes. “To my knowledge, it is the only natural resource we have which we can use year after year, and still have, and continue to use. One acre of prime Iowa land will produce more than any number of acres in the deserts or mountains that cover much of the earth.”15 He contends that the loss of 2,000 acres—the area the interstate would cover—would translate into annual losses of 250,000 bushels of corn, which equals $575,000 in crop production, which reverberates into $1.86 million of pork or $5 million of beef.16 These consequences entail global responsibilities and obligations. By the count of one unnamed writer, a corn grower and hog producer, the highway “would deprive 2100 people of pork.”17 Preserving the soil joins feeding the hungry in rhetorical and ethical importance: “World shortages of food are awaking us to the importance for the preservation of the good soil of the United States. The state of Iowa has about ¼ of the top producing soil in our nation,” writes Glenn J. Burrows, district soil commissioner.18 Clifford R. Schildmeier concurs: “At present we have plenty of food, but according to predictions, the time is coming where we will run short of food[—i]f we [d]on't start to conserve and save the Black soil of IOWA[.] How can we save soil, if we are to waste it in roads, etc.?”19 In these objections, pastoral values—the preservation and celebration of rural culture, the sanctification of rural nature and work—become global ethical considerations, both between humans and between humans and the environment. Not only do the association's members articulate farmland as a “natural resource”; they frame it as a pastoral obligation they owe human beings across the crowded planet. Thus Smiley's novel signifies in the primacy of its appellation: the 1,000 acres it represents entail not just the possession and dispossession of the Cook family but also the bushels of crops and tons of meat they might produce, calculations yoked directly to acreages. Yet this counterinstitutional application of the pastoral demonstrates the tendency of such appropriations to strengthen and fortify the status quo, for the Association's characterization of farmland as a “natural resource” (with reverberations of “national resource”) betrays the inefficacy of employing agriculture as a means of articulating nature. Such a formulation undermines Munn's appreciation of “wild” predator-prey relations because they have no “use value,” while it haunts Hicks's appreciation for a sentimental retreat from modernity: absent the environmental ravages of contemporary agriculture, that is, the creek would not stand out for its symbolization of harmony, wildness, or purity.
Despite the rhetoric of feeding a starving world and saving precious soil, these objections elide the role of the technologies, practices, and property consolidations in producing these very problems to begin with, for these projections of production depend on the historical and material reconfigurations of the landscape into grids amenable to disbursement and modern cultivation. As the group notes, diagonal highways “leave triangular tracts not adaptable to modern row crop farming.”20 Categorizing farmland as a “natural resource” becomes both counterhegemonic and interpellated: the frame of farmland as a “natural resource” guarantees more of the same, preserving not the soil but the status quo of industrialization and chemicalization. The ecocentric but untenable possibilities of the Farm Land Preservation Association Inc. mirror the ecocentric but untenable vision of A Thousand Acres. Just as it channels a discourse of place, so too does it channel a troubling tolerance for the interventions that devastate its place. Preserving farmland because it manifests nature fails when, at base, nature itself is feared and subjugated—a phenomenon exemplified by the pathological terror of wildness displayed by the novel's men. Rose notes: “‘Daddy's not much for untamed nature. You know, he's deathly afraid of wasps and hornets. It's a real phobia with him.’”21 As a representative of conventional agriculture, Larry's spheksophobia signifies the alienation of nature from farming on which industrial-commercial practices of cultivation depend. Ginny seconds Rose's assessment: “However much these acres [Larry's land] looked like a gift of nature, they were not,” she states. “We went to church to pay our respects, not to give thanks,” a view buttressed by their pastor's annual message of the importance of farmers.22 In many ways, the Cooks' religious perspective mirrors their environmentalist perspective: the notion of the earth as hostile to human beings' designs, a hostility evidenced in and surmounted by the strategic reengineering of the land by tile lines, the liberal application and valorization of pesticides, and the nonstop injection of fossil fuel. Where Ginny sees contingency and uncertainty, farmers like her father, her husband, and their neighbors the Clarks find a landscape ostensibly pining for human (synonymously, masculine) intervention and alteration. Iowa State's records documenting the grassroots organizing of Louise McEachern and Citizens Against River Pollution (CARP) further reveal and illuminate A Thousand Acres's conundrum of sustaining an environmentalist vision using an agricultural referent. While FLPA sought to prevent the construction of an interstate, CARP protested, in the late 1980s and early 1990s, an Iowa Beef Packers Inc. (IBP) pork processing facility in Columbus Junction, Iowa, that members charged had polluted the Iowa and Cedar rivers. Like FLPA's, CARP's vision of resistance embraces conventional agriculture and environmentalism, sustainability and technology, nation and community, even as members strain to reimagine these dichotomies under the pressure of globalization. Judging by the volume of her letters and notes contained in the group's archives, McEachern is CARP's most prolific and vocal torchbearer.
A self-described “53 year old lady, housewife, grandmother and Monsanto employee” who “loves [her] environment,”23 McEachern embodies a complex and contradictory subjectivity—environmentalist and Monsanto employee, homemaker and worker—that metonymizes the challenges of environmental, agricultural, and cultural reform that A Thousand Acres thematizes. For CARP members, conventional farming and environmentalism, as well as technology and sustainability, go hand in hand, despite their inherent contradictions. Industrial agriculture holds forth the promise of feeding the planet's increasing population, while technology offers the hope of environmentally friendly, sustainable methods of preserving the ecological health of the planet. McEachern writes: Regarding our environment I believe it simply is not enough to ‘unpollute’ our World, our state and counties. While we must provide enough food and energy for our growing population, we must rectify the mistakes of past years yet continue to develop and introduce new technologies which will provide the essentials for mankind in the future. And we must ensure, as we know we can, that these new technologies will not create new environmental problems to be dealt with by our children and our great-grandchildren. I believe sustainable development in our state must also mean sustaining our natural resources too!24 McEachern presents a complex vision of agricultural production and environmental action. Feeding and powering the world represent moral obligations inextricably but disjunctively linked to the recuperation of ecological health; the means she cites to accomplish this goal—“technology”—are too often the very means that have created the problems she wishes to fix. Nonetheless, McEachern urges profound reforms in the paradigms of conventional agriculture and environmentalism: a pragmatic duty toward sustaining the world's peoples undergirded by renewed activity in environmentally sustainable technological research and development. In concert with the holistic character of her vision of agriculture, environmentalism, and technology, McEachern probes crises in national identity—crises that lead her to a global awareness. For McEachern and other CARP members, environmental problems such as IBP's discharge of ammonia into the local rivers signify as violations of national identity, incursions heightened because they take place in the nation's heartland and threaten the bald eagle. The fragility of this endangered species within environments of toxicity and pollution encapsulates, for McEachern, the intrinsic connections and palpable urgency of environmental protection and patriotism: “I know I was pleased that we could show our two little grandchildren the Bald Eagles,” she writes Iowa state Rep. Mark Shearer. “If the Governor is allowed to dictate to Iowa favoring big business will you have the chance/pleasure of showing your grandchildren our nation's symbol right by Fredonia? I truly doubt it!”25 As McEachern's regard for the Fredonia, Iowa, bald eagles demonstrates, violations of the environment equal violations of the nation. Paradoxically, her appeal to nationalism and patriotism leads her to see unavoidable connections between environmental abuse and transnational corporate globalization. Originally my major concern was IBP's pollution of our two beautiful Class A rivers .... As I watched IBP come to our quiet, peaceful, low crime area, I knew it was not only their pollution that angered me, it was their entire way.
Why should I an Iowa citizen who has an excellent job (I am a Monsanto employee)[,] excellent wages, excellent safety, excellent benefits want less than that for my fellow man, regardless of race, sex or religion?26 McEachern's evolution from the position of a Sierra Club-style environmentalist concerned about the degradation of an ostensibly pristine environment, an environment in large part conceptualized in terms of human recreation (evidenced by her reference to the rivers as Class A), to that of environmental justice, the counterhegemonic movement that rejects the collusion of environmental risk and toxicity with marginal social status, likewise signals her move from local and national consciousness to global awareness. Complicating the work of the novel and these activist groups to forge and sustain an environmentalist ethos through and despite the agriculture that defines their landscape are the transformations of globalization that define their historical, economic, and political moment. As McEachern and CARP, FLPA, and A Thousand Acres demonstrate, the category of the nation—its juridical power, its legislative capacity, and its executive efficacy—fails to shield its citizens and communities from the ravages of globalization, just as McEachern and CARP show that it fails to live up to its creed of inclusivity, equality, and fairness. Rather, for these writers and texts, the United States colludes in sanctioning and expanding a revamped transnational corporate order that trades on its efficacy in translating the Midwest into an underdeveloped colony.27 A Thousand Acres concedes this point in its violent and nihilistic historicization of the family farm crisis as a crisis of dispossession, exodus, and diminished economic power, sanctioned and powered by the creation of a national agricultural policy tailored to increasingly global corporate interests. Worse, the economic changes brought on by globalization not only challenge Midwesterners' attempts to create an environmentalist vision, but work to destroy the ecosystems they are trying to preserve, a consequence that even the proponents of global industrial agriculture cannot deny. To quote the circular illogic of the father of the ill-named Green Revolution, Norman H. Borlaug, “[L]arge rural-to-urban migrations in many countries are creating huge mega-cities with major pollution problems. Thus, it is clear that agricultural development is the key to poverty reduction and environmental protection, both in rural areas and to help stem the tide of urban migrations.”28 In too many ways, the Green Revolution has been neither green nor revolutionary, and A Thousand Acres's engagement with globalization helps deconstruct such a frame of mind.

## Global Pig: Hogs and Globalization in A Thousand Acres and the U.S. Midwest

It is A Thousand Acres's turn to hogs that exposes the problematic workings of globalization and localization in the Midwest. As Smiley's literary precursors Willa Cather, Hamlin Garland, and O.E. Rølvaag attest, the Midwest is a region always already globalized. Yet where these writers depict the entry of “pioneers” from across the world, Smiley describes the egress of the region's ecological and human resources: Larry Cook's corn has to be shipped out, and Ty's hogs go to a distant market (a nod to the responsibility Iowans like the members of FLPA claimed with honor and dedication); Ty and Ginny take jobs out of state; and Rose dies of cancer that results from groundwater pollution.
In Smiley's novel, a porcine arms race—“get big or get out,” to quote former U.S. Secretary of Agriculture Earl Butz (or, in Smiley's canon, the experimental hog fattened in secret in Moo's Moo U [1995])—pressures the Cook family into mortgaging and borrowing themselves into bankruptcy. At the same time, the hog as commodity operates to reframe the Midwest not as an “American” heartland, but as a global heartland, whose regional and national safeguards fall in fealty to trade liberalization and “decoupling.” In A Thousand Acres, the thematization of hog production frames the larger problems of shifting power, capital, and decision making championed and operationalized by industry lobbyists. To that end, Smiley's novel encapsulates the inability of agrarian myths to account for or construct new narratives for or against the environmental and cultural problems of late twentieth-century U.S. agriculture in a global world—thus broaching complex questions of the productivity of an eco-cosmopolitan local/global consciousness within a political-economic system governed by transnational corporate capitalism. Hog farming propels A Thousand Acres even when it contradicts sound economic rationalism and domestic harmony and effects the radical reconfiguration of farm, hearth, and region. Despite Larry's increasing instability and unpredictability—he purchases a new set of kitchen cabinets but leaves them to rot outside in the driveway; he wrecks his truck while drinking and driving—Ty perseveres in the family's original plans to increase the farm's hog operations, taking out a $300,000 loan to lay the foundation for a 4,000-hog facility: The plan was to convert what remained of the old dairy barn to enlarge the farrowing and nursery rooms, add a gestation building, a grower building, and a finisher, to build a big Slurrystore for waste, and put up two small Harvestores for the corn that would serve the hogs for feed. ... [T]he new buildings were what would save us, the marvelous new silos, the new hogs, the new order, epitomized by the Slurrystore, where all the waste from the hogs would be saved until it could be returned to the ground—no runoff, no smell, no waste, a closed loop.29 This conversion narrative operates through the logic of progress and commodification, coveting the eradication of the past in favor of the brand-name siren's song of the present and future. Technology promises liberation from the constraints of farming—of handling wastes, of claiming responsibility for pollution—while brand identification promises capitalist nirvana, of “new hogs” in “the new order.” The “closed loop,” from “farrowing and nursery” to the disposal of wastes, mirrors the drive of agribusiness toward vertical integration, in whose service corporate interpenetration and consolidation likewise encode a seductive sort of efficiency and closure. The family's prostration—its need to be “saved”—grooms it to clamor for the bombastic claims of progress and commodification, despite an underlying unease toward, and a superficial Midwestern conservatism regarding, expropriation and indebtedness. In many ways, the architectures of containment in the planned hog facilities, supplemented by consumerist faith in progress, corporate branding, and closed loops, serve as an analogy for the family's ensnarement in economic and political orders beyond their control.
On one hand, the novel's lengthy descriptions of the design and construction of Ty's new hog operations point to the fundamental human impositions of order and systematization that express the farmers' repudiations of wildness. On the other hand, their literal and figurative enclosure within the tragic narrative of subjection and dispossession serves more poignantly to illustrate the characters' similarly commodifiable disindividuation and abjection. As hogs are far more inquisitive and destructive than dairy cattle, the plan was to install concrete partitions to about five feet, then wood frame walls above that. Eventually every hog in the building would reside in an aluminum alloy pen with hot water heat in the floors, automatic feeders and nipple waterers for the shoats. There would be, as the brochure said, ‘several comfort zones to accommodate varying sizes of hogs.’30 The novel's detailed attention to the means and mechanisms of containment and surveillance belies the characters' yearning for simplification. This desire for bottom-line reductionism activates the characters' blind faith in and reliance on commercial speech—“as the brochure said”—for authority and legitimacy. Likewise, the “comfort zones” promised by the corporate broadside function to unfit its consumers for resistance to corporate incursions and reconfiguration, and they mirror the structures of the bank, courtroom, and church in enforcing social and cultural conformity. These architectures, in sum, are designed for global trade, for interchangeability and exchangeability, networkability, synchronicity, and standardization. This tightening spiral of capitalist incursion and individual subjection plays out against farmers' archetypal sensibility of resignation and prototypical Midwestern conservatism. Ty's hog expansion falls victim to declining prices and rising interest rates, the beginnings of the 1980s farm crisis. Banker Marv Carson supervises the inflation of land values and the increase in debt, convincing Ty to increase his hog operations. “‘Marv Carson says hogs are going to make the difference between turning a good profit and just getting by in the eighties,’” Ty reports, and his design to double the hog operation to 1,000 hogs instead quadruples, again on Carson's advice: “Four thousand was a number Marv Carson liked, for one thing .... Pretty soon, four thousand hogs became our plan, and Marv Carson gave us a $300,000 line of credit.”31 In court, Carson pontificates: “‘The idea of being debt-free is a very old-fashioned one. A family can be debt-free, that's one thing. A business is different. You've got to grasp that a farm is a business first and foremost.’”32 In keeping with Carson's pronouncement that they are a “business,” not a “family,” they triumph in court because they prove themselves to be business-oriented and amenable to the constantly shifting character of agricultural economics: “Got to have capital improvements in a business. Economy of scale.
All that.’”33 What Carson calls “investment” paradoxically becomes divestment: the entrance of capital extracts the human, natural, and social resources of the Cook farm in order to create surplus, a phenomenon Walter Rodney and other scholars of globalization have explored in terms of European colonization.34 Indebtedness thus signifies loss of agency and facilitates the reconfiguration of the farm into human and animal “comfort zones” that enable the global commodification and circulation of its producers and products—just as “indebtedness” in developing countries compels “structural adjustments” imposed by overdeveloped countries that further diminish the former's autonomy and power. In much the same way, A Thousand Acres narrates the cession of Ty's hog operations to the Heartland Corporation—the ironically named predator that feeds off the inability of small producers to make it in the get-big-or-get-out world of agriculture. In minimizing U.S. public policy, political leadership, and shared commons, A Thousand Acres underscores that democratic national governance offers little resistance to the predations of global capital and industry. Save an allusion to President Jimmy Carter (whose body becomes a symbol of joking emasculation35) and its depiction of the family's battle in court, the novel excludes references to public spaces and structures: readers do not meet local law enforcement, codes inspectors, teachers, or social workers—significant elisions given the characters' sexual abuse and environmentally destructive agricultural expansions. Instead, power resides in the machinations of industry fronts such as the Agricultural Policy Working Group, National Pork Producers Council, and U.S. Meat Export Federation. Iowa State's archives of these groups make clear how grassroots and literary imaginations of environmentalism—of local decisions for development and environmental use, factoring in community and ecological considerations—are circumscribed by a triumph of neoliberalism. While local actors might seek a planetary consciousness, their attempts become co-opted by transnational affinity groups that attenuate local agencies. Whereas William Cronon contends that Chicago constructs Iowa and the rest of the U.S. heartland in the nineteenth and early twentieth centuries,36 the “world”—prismed by the U.S. Meat Export Federation (MEF) in particular, given A Thousand Acres's emphasis on hogs—constructs the U.S. Midwest in the late twentieth century. Founded in 1976, the Federation aims “to promote U.S. red meats overseas” through marketing and international lobbying,37 thus functioning as a quasigovernmental, transnational interface between multinational corporations and national governments. In particular, it commodifies U.S. Midwestern pork for circulation in a global economy. By the late 1980s, a decade after the setting of A Thousand Acres and several years before its publication, the Federation celebrated its success in expanding demand for U.S. pork, thanks in large part to Mexico and Japan, the destinations of 83 percent of all exports38; in 1988 alone, industry lobbyists hailed a 94 percent increase in the value of pork exports: ‘The figures show that if we get aggressive, the industry can improve pork exports by working with the MEF,’ Russ Sanders, National Pork Producers, said.
‘The aggressive and effective promotional programs MEF is putting together overseas and the increased amount of TEA [Targeted Export Assistance] allocations made a great impression in markets and helped us reach that critical mass overseas.’39 From industry's perspective, the increase in exports reflects the success of the synergy of industry, government, and consumers, for the Federation's cheerleaders envision a sort of new world order for U.S. farmers, in which American pork feeds the world. To my mind, this marketeerism and boosterism serve as well to drum up an audience for a novel like A Thousand Acres. Not only does Made-in-the-U.S.A. pork sell; so too do the cultural narratives that envelop it—even cultural narratives that might shine a negative light on the brand or the storyline, as Smiley's certainly does.40 That the default position in which to introduce U.S. pork to international consumers—a “critical mass,” an explicitly undifferentiated and implicitly undifferentiable amalgamation—is one of “aggression” conveys the collision of power, conflict, and resistance, not only in the sale and distribution of flesh but of narratives like Smiley's too. If proponents of globalization cite its potential to break down economic, cultural, and social barriers as its fundamental positive, its critics point to its reification of the structures and modes of imperial power as its fundamental negative, in which powerful nations and their corporate affiliates, in pursuit of profit, economically subjugate less powerful ones. For the MEF, its efforts to increase pork exports rely on creating an economy of desire for pork as a signifier of America, transubstantiating U.S. pork into U.S. culture; the consumption of food represents the consumption of place, region, and nation—a sort of perverse eco-cosmopolitanism.41 To drum up interest in U.S. pork in Singapore, for example, MEF used “‘Gone With The Wind’ [sic] and Western style radio jingles [to tempt] consumers to try juicy and tender U.S. pork at Swensen's,” a United States-based restaurant, and showcased Virginia baked ham, Chicago cut pork loin, and Carolina barbecued ribs.42 Meanwhile, Ginny in fiction and Hicks, McEachern, Munn, Nelson, and Wiley in Iowa contend with the state's rise to the top of the rankings in the production of hogs thanks to the national and international marketing efforts of trade associations like MEF—a dubious achievement that brings to local communities rampant groundwater pollution, noxious odors, outbreaks of human and animal disease, dangerous work, community disintegration, and low wages. To quote her brother-in-law, Pete, who tells Ginny about the construction of a concentrated animal feeding operation upriver from their property, “Shit rolls downhill.’” “‘I'd imagine that the bacteria level's pretty high,’” he continues. “‘Mmm. Slurp slurp.’”43 As these lines portend, resignation cannot inspire action—only self-annihilation, crystallized in Pete's subsequent drowning. Despite their recognition of the devastation of their human and ecological community, the novel's characters reject activism and intervention. In dodging the Vietnam War draft, Jess Clark's character offers the promise of resistance, a promise that falls flat by novel's end. On the surface, Jess symbolizes the lobby's linkage of the military to agriculture and oil. While this “vegetarian stranger” (to borrow Steven G. 
Kellman's appellation44) fails as a credible spokesman for the causes of organic agriculture he champions, his refusal to serve in the U.S. military (in particular in a war prosecuted in part through the indiscriminate application of a toxic, highly carcinogenic defoliant, Agent Orange) nonetheless repudiates the paradoxical impoverishment and enrichment of the Midwest via U.S. military power. Framed as well by the characters' consistent concern over oil—during the energy crisis of the novel's late 1970s setting, the farmers watch gas prices as carefully as they watch the weather—A Thousand Acres yokes agricultural productivity to global martial conquest. After all, “food is oil” in a world where eroded and less fertile soils demand greater inputs of petroleum-based fertilizers, worked by heavier, more compacting, more gas-guzzling machinery.45 More profoundly, Jess's expulsion, whatever readers think of him, signifies the loss of counterhegemonic perspectives—a loss compounded by the disproportionate rates at which rural Americans enlist in the nation's armed forces. In this world of heedless marketing, where farmers' decisions are made for them by powerfully connected global lobbyists, and where these decisions entail major alterations in landscapes and ecosystems, the end of A Thousand Acres cannot be anything but tragedy. Neither the Cook women nor the grassroots groups explored in this essay can match the global military-industrial-agricultural complex's power to effect wholesale environmental and societal alteration, even as they try to salvage a tenable environmentalist vision. The keystone of this power system, the fusion of the interconnections of military, corporate, and national power across national borders, is the Agricultural Policy Working Group, comprising Cargill, Central Soya Company, Continental Grain Company, International Minerals and Chemical Corporation, Monsanto, Nabisco, and Pillsbury. In contrast to the failed agrarian vision that dooms the characters and community of A Thousand Acres, this neoliberal organization envisions “a new agricultural era, one in which farmers and agricultural industries operate more freely, with more competitive opportunities and fewer government interventions or constraints.” The APWG advocates the ushering in of this “new agricultural era” through “trade liberalization” and “decoupling”: With freer agricultural markets, there is no doubt that the U.S. and rural America will prosper. ... U.S. steps to decouple farmer assistance programs from the market would not only allow American agriculture to use its comparative advantage to capture increased market share, but would also encourage other nations toward reform—and improve the climate for commitments to liberalize agricultural trade.46 Couched in the rhetoric of freedom and agency, the vision of the APWG paradoxically delineates the transformation of farm owners into agricultural free agents. This vision does not agitate for international reform but instead for an opening of the American system—one in which the nation's heartland is reduced to poverty, migration, and dispossession, woes chronicled and excoriated by A Thousand Acres.

## Conclusions: New Perspectives for a Renewed Ecocritical Praxis

As the texts in this essay demonstrate, novels like A Thousand Acres, grassroots resistance movements, and industry lobbies problematize conceptualizations of eco-cosmopolitanism and shift the terms of ecocritical theory.
For starters, they muddy the waters of bioregionalist approaches to place, complicated as all agricultural landscapes are by the application of technologies in genetic engineering, irrigation, and cultivation that struggle to respond to changing climatic conditions across disjunctively developing nation-states.47 At the same time, they complicate eco-cosmopolitan approaches, for they show the overt and covert permeations of global capital in defining and endorsing certain locales, environmental relationships, and cultural values—definitions and endorsements that have the power of shifting individual or communal understandings of environmental consciousness and knowledge. Last, the voices recorded in this essay compel us to commit ecocriticism to intervening in agroecological crisis, through the collection and dissemination of textual materials produced by, for, and about farm communities; the subsequent fusion of academic and non-academic producers of knowledge; and the ongoing theorization and retheorization of the binaries at the heart of any ecocritical intervention in agroecology—local and global, culture and science, theory and practice. Thus, to my mind, what's less important is the mode of intervention—novels like Smiley's A Thousand Acres; nonfiction like Michael Pollan's The Omnivore's Dilemma: A Natural History of Four Meals (2006) or Eric Schlosser's Fast Food Nation: The Dark Side of the All-American Meal (2001); or documentaries like Food Inc. (2009). Instead, what is important is redoubled effort toward critical intertextuality, full inclusion, wide dissemination, and critical self-reflection. Indeed, just as grassroots groups such as FLPA, CARP, and others can offer ecocritics new perspectives for analysis, so too can grassroots groups offer new visions of ecocritical praxis. Take, for example, the Practical Farmers of Iowa (PFI), a nonprofit group whose mission is “to research, develop and promote profitable, ecologically sound, and community enhancing approaches to agriculture.” Formed in 1985 against the “twin crises” of Iowa agriculture—“the negative ecological consequences of conventional farming” and “the collapse of commodity prices and the demise of thousands of farms”—the organization aimed for the promotion of “a new paradigm”: “sustainable agriculture.”48 Today, PFI partners with Iowa State University's Aldo Leopold Center for Sustainable Agriculture, and together they help farmers research practical, innovative solutions to contemporary problems.49 The organization posits a compelling conceptualization of individual, communal, and environmental negotiations of local and global subjectivity. In its poetic “Vision for Iowa,” the group calls for “Food that is celebrated / ... / Farms that are prized / ... / Communities that are alive.”50 In its progression from food to farm to community, this vision statement-in-verse reincorporates individuals into their communities—and, equally important, their communities into their local landscapes as well as their global commons. Put simply, the organization envisions human reinhabitation and communal enrichment as the keys to facing the region's conjoined socioeconomic and ecological threats—an agricultural eco-cosmopolitanism, as it were. The conclusion of A Thousand Acres lacks the Practical Farmers' optimism, sadly.
The novel's characters have little choice but to sell the farm and drift away, incapable of participating in a world of vertical integration and transnational corporate conglomeration, premised on the ideals of “trade liberalization” and “decoupling.” Rather than launch a final assault on the forces that have decimated the country's agrarian mythos, the novel ends by showing that the Cook descendants (like counterhegemonic groups such as CARP and FLPA) prove no match for the confluence of the U.S. military and global trade organizations such as the MEF and the APWG. Instead of a final screed lamenting a broken trust, the novel dissolves into a puddle of resignation and submission, befuddled by environmental contamination, familial disintegration, social and communal alienation, and national-transnational economic collapse. In profound ways, the novel and the archives considered in this essay capture these contradictory discourses. Most crucially, these discourses engage and reconceptualize ecocriticism and offer new energies to an ongoing eco-cosmopolitan trajectory. Just as these grassroots environmentalists and capitalist elites have sought to consolidate and disseminate divergent though urgent national campaigns on behalf of small-scale farmers, so too does A Thousand Acres function to crystallize and deploy a consciousness-shifting national reimagination of U.S. agriculture, patriarchy, and the Midwest. Yet Iowa State's archives highlight that these carefully crafted visions depend on the granular accumulation of images, symbols, and ideologies expressed in the texts of grassroots environmental activists. As Heise suggests, the success of eco-cosmopolitanism will depend on “finer-grained distinctions,” the complications of recognizing yet questioning national cultures and the impacts of such constructions on diverse environmentalisms.51 These texts bring to the forefront the challenges human subjects face in articulating and narrating environmental relationships—even despite their intimate proximity to these landscapes. What's more, they expose the struggles of rural people to embrace a planetary consciousness—a global awareness that, as this essay shows, can involve the continued ecological devastation of the landscapes these activists hold dear. Indeed, Ginny's hard-won mastery of a complex environmentalist voice mirrors that of dozens of rural Iowans, whose voices and activism are captured in Iowa State's archives, a convergence that encapsulates great promise as well as great liability. These challenges demand the close attention of ecocritics, not only to deeply engage appropriate texts, but to engage them with a framework that expands the orchestra and zeros in on the critical problems of global agriculture, planetary health, and human rights. In sum, if we are to make good on Heise's call for “effective aesthetic templates by means of which to convey such a dual vision of the earth as a whole and of the different earths that are shaped by varying cultural contexts” (210), we must turn not only to novels, poems, and plays and the theories we already understand, but to local people in their roles as workers, thinkers, organizers, and creators as well, for both the conundrums they confront and the critical insights they bring to the deeply eco-cosmopolitan project we seek.

## Afterword

A reviewer of this essay raised a provocative set of questions: [T]he thrust and the beauty of the paper is that ecocritics must take into account the voices of the folk.
And yet it is written almost without exception from sentence to sentence in a language that the folk would not be able to penetrate without a graduate degree (I realize many folk do have university degrees), even though they would understand as well or better than most academics everything that is at stake and outlined so well in the paper. The crux of my question is this: why should academic ecocritics take the voices of the folk into account in our practice, on the one hand, and yet effectively withhold our voices from them on the other? Is that a dialogic or a monologic act? Is it an act of appropriation of voices? The reviewer's questions fascinate me for several reasons. First, they compel me to question my own training, training that has formed my scholarly voice—yet training that might not be up to the task of making my scholarly writing accessible and meaningful to a broader audience. Second, they compel me to reiterate my own perspective of literary studies as being inclusive of all texts, as being more rightly the study of narrative, produced by individuals inside and outside the academy, in infinite forms. (My preparation in graduate school inculcated this broad conception of literariness—indeed, my graduate school made possible my time in Iowa State University's archives—a conception I have found many of my current colleagues do not share, exemplified when I recently proposed including Silent Spring [1962] in a senior seminar in literature: “But it's not literature!” several of my colleagues scoffed.) Last, these questions underscore the urgency of maintaining the clear relevance of the humanities, best accomplished, in my opinion, by more, better, and richer collaboration and commingling among all of society's institutions, from higher education to local grassroots groups to scholarly presses to governmental agencies. In this conviction, I second Martha Nussbaum in Not for Profit: Why Democracy Needs the Humanities (2010), where she writes that without the humanities, “nations all over the world will soon be producing generations of useful machines, rather than complete citizens who can think for themselves, criticize tradition, and understand the significance of another person's sufferings and achievements.”52 We need more dialogical acts, that is, because we must recognize the completeness of all human beings—and because all human beings, in their completeness, have something to teach us as we work to teach each ensuing generation. In sum, I will continue to strive to develop a scholarly prose that can signify inside and outside the circle of literary critics. In the meantime, I hope we in the environmental humanities will continue to identify ways we can share as broadly as possible what we do, whether by disseminating our work online, hosting community reading and writing groups, or using service-learning in our classes, to name just a few possibilities.

## Acknowledgments

I wish to thank, first, Michael Kreyling, Cecelia Tichi, Vereen Bell, Sheila Smith McKoy, Tamika Carey, Teagan Decker, and Jane Haladay for their guidance and suggestions in the composition of this essay; second, Vanderbilt University for financing my research in Iowa State University's special collections; and third, the anonymous readers of my essay and Thom van Dooren for their patient, careful, and incisive reading and editing of this essay.

## Bibliography
Agricultural Policy Working Group Records, 1987-1988. Special Collections Department, Parks Library, Iowa State University, Ames, IA.
Barbas-Rhoden, Laura. “Toward an Inclusive Eco-Cosmopolitanism: Bilingual Children's Literature in the United States.” ISLE: Interdisciplinary Studies in Literature and Environment 18, no. 2 (2011): 359-376.
Buell, Lawrence. The Environmental Imagination: Thoreau, Nature Writing, and the Formation of American Culture. Cambridge, MA: Belknap Press of Harvard University Press, 1995.
Carr, Glynis. “Persephone's Daughters: Jane Smiley's A Thousand Acres and Classical Myth.” Bucknell Review 44, no. 1 (2000): 120-136.
Citizens Against River Pollution Records, 1988-1992. Special Collections Department, Parks Library, Iowa State University, Ames, IA.
Conlogue, William. Working the Garden: American Writers and the Industrialization of Agriculture. Chapel Hill: University of North Carolina Press, 2001.
Cronon, William. Nature's Metropolis: Chicago and the Great West. New York: W.W. Norton, 1991.
Farm Aid Collected Materials, 1987-1995. Special Collections Department, Parks Library, Iowa State University, Ames, IA.
Farm Land Preservation Association Inc. Records, 1976-1979. Special Collections Department, Parks Library, Iowa State University, Ames, IA.
Heise, Ursula K. “The Hitchhiker's Guide to Ecocriticism.” PMLA 121, no. 2 (2006): 503-516.
Heise, Ursula K. Sense of Place, Sense of Planet: The Environmental Imagination of the Global. New York: Oxford University Press, 2008.
Herr, Cheryl Temple. Critical Regionalism and Cultural Studies: From Ireland to the American Midwest. Gainesville: University Press of Florida, 1996.
Kellman, Steven G. “Food Fights in Iowa: The Vegetarian Stranger in Recent Midwest Fiction.” Virginia Quarterly Review 71, no. 3 (1995): 435-447.
Levin, Amy. “Familiar Terrain: Domestic Ideology and Farm Policy in Three Women's Novels about the 1980s.” NWSA Journal 11, no. 1 (1999): 21ff.
Manning, Richard. “The Oil We Eat: Following the Food Chain Back to Iraq.” Harper's Magazine (February 2004): 37-45.
Mathieson, Barbara. “The Polluted Quarry: Nature and Body in A Thousand Acres.” In Transforming Shakespeare: Contemporary Women's Re-Visions in Literature and Performance, edited by Marianne Novy, 127-144. New York: St. Martin's, 1999.
Norman H. Borlaug Papers, 1941-1997. Special Collections Department, Iowa State University, Ames, IA.
Nussbaum, Martha. Not for Profit: Why Democracy Needs the Humanities. Princeton, NJ: Princeton University Press, 2010.
O'Dair, Sharon. “Horror or Realism?: Filming ‘Toxic Discourse’ in Jane Smiley's A Thousand Acres.” Textual Practice 19, no. 2 (2005): 263-282.
Practical Farmers of Iowa Records, 1985-1992. Special Collections Department, Parks Library, Iowa State University, Ames, IA.
Rahman, Shazia. “Karachi, Turtles, and the Materiality of Place: Pakistani Eco-cosmopolitanism in Uzma Aslam Khan's Trespassing.” ISLE: Interdisciplinary Studies in Literature and Environment 18, no. 2 (2011): 261-282.
Robbins, Bruce. “Commodity Histories.” PMLA 120, no. 2 (2005): 454-463.
Rodney, Walter. How Europe Underdeveloped Africa. Washington, DC: Howard University Press, 1972.
Slicer, Deborah. “Toward an Ecofeminist Standpoint Theory: Bodies as Grounds.” In Ecofeminist Literary Criticism: Theory, Interpretation, Pedagogy, edited by Greta Gaard and Patrick D. Murphy, 48-73. Urbana: University of Illinois Press, 1998.
Slovic, Scott. “Editor's Note.” ISLE: Interdisciplinary Studies in Literature and Environment 18, no. 2 (2011): 257-260.
Smiley, Jane. “Shakespeare in Iceland.” In Shakespeare and the Twentieth Century: The Selected Proceedings of the International Shakespeare Association World Congress, Los Angeles, 1996, edited by Jonathan Bate, Jill L. Levenson, and Dieter Mehl, 41-59. Newark: University of Delaware Press, 1998.
Smiley, Jane. A Thousand Acres. New York: Anchor Books, 1991.
Tichi, Cecelia. “Canonizing Economic Crisis: Jack London's The Road.” American Literary History 23, no. 1 (2011): 19-31.
U.S. Meat Export Federation, Collected Materials, 1985-1992. Special Collections Department, Parks Library, Iowa State University, Ames, IA.
1 Jane Smiley, “Shakespeare in Iceland,” in Shakespeare and the Twentieth Century: The Selected Proceedings of the International Shakespeare Association World Congress, Los Angeles, 1996, ed. Jonathan Bate, Jill L. Levenson, and Dieter Mehl (Newark, Del.: University of Delaware Press, 1998), 51.
2 See esp. Slicer, “Toward an Ecofeminist Standpoint Theory: Bodies as Grounds,” in Ecofeminist Literary Criticism: Theory, Interpretation, Pedagogy, ed. Greta Gaard and Patrick D. Murphy (Urbana: University of Illinois Press, 1998). See also Glynis Carr, “Persephone's Daughters: Jane Smiley's A Thousand Acres and Classical Myth,” Bucknell Review 44, no. 1 (2000); William Conlogue, Working the Garden: American Writers and the Industrialization of Agriculture (Chapel Hill: University of North Carolina Press, 2001); Amy Levin, “Familiar Terrain: Domestic Ideology and Farm Policy in Three Women's Novels about the 1980s,” NWSA Journal 11, no. 1 (1999); Barbara Mathieson, “The Polluted Quarry: Nature and Body in A Thousand Acres,” in Transforming Shakespeare: Contemporary Women's Re-Visions in Literature and Performance, ed. Marianne Novy (New York: St. Martin's, 1999); and Sharon O'Dair, “Horror or Realism?: Filming ‘Toxic Discourse’ in Jane Smiley's A Thousand Acres,” Textual Practice 19, no. 2 (2005). Cecelia Tichi, in “Canonizing Economic Crisis: Jack London's The Road” (American Literary History 23, no. 1 [2011]), sets a provocative agenda for future interpretations of the novel: as an exposé of industrial agriculture's systemic predatoriness.
3 Ursula Heise, Sense of Place, Sense of Planet: The Environmental Imagination of the Global (New York: Oxford University Press, 2008), 10, 61. In “The Hitchhiker's Guide to Ecocriticism” (PMLA 121, no. 2 [2006]), Heise calls on ecocritics to engage theories of globalization, to complicate its simplistic rejection of economic globalization while celebrating intercultural coalitions (513-514). Scott Slovic, editor of ISLE: Interdisciplinary Studies in Literature and Environment, responds to Heise's call by dedicating the journal's spring 2011 issue to “manifest[ing] the global energy in the fields of ecocriticism and environmental literature” (257). In this special ISLE issue, Shazia Rahman offers an interpretation of Uzma Aslam Khan's Trespassing (2003) that seeks to reposition Heisian eco-cosmopolitanism as “neither an extension of nationalism nor an opposition to a nationalism that can be co-opted by US imperialism” (262), and Laura Barbas-Rhoden showcases Latino children's literatures as exemplary of a “threshold point in which place, identities, and traditions are in flux as a result of the process of deterritorialization associated with crossing borders and merging cultures” (373).
4 Heise, Sense of Place, 60-61. 5 Jane Smiley, A Thousand Acres (New York: Anchor Books, 1991), 131-132. 6 Heise, Sense of Place, 62. 7 Ibid., 61. 8 These records are housed in the Special Collections Department of the Iowa State University Library, which maintains a comprehensive archive of materials illuminating Agricultural history, science, and economics in Iowa and the Midwest. I am grateful to Special Collections head Tanya Zanish-Belcher for her leadership in stewarding this important collection and making it available to researchers. 9 Heise, Sense of Place, 210. 10 Willie Nelson, John Mellencamp, and Neil Young to Farm Aid friend, April 3, 1993, Farm Aid Collected Materials, 1987-1995, Special Collections Department, Parks Library, Iowa State University, Ames, IA. 11 Willie Nelson E. Rogers, Dec. 12 1995, Farm Aid Collected Materials, 1987-1995, (emphasis original). 12 Carl H. Munn, letter, n.d., Farm Land Preservation Association Inc. Records, 1976-1979, Special Collections Department, Parks Library, Iowa State University, Ames, IA, 1. 13 Laura Mae Hicks, letter, March 27, 1978, Farm Land Preservation Association Inc. Records, 1976-1979, 1. 14 Lawrence Buell, The Environmental Imagination: Thoreau, Nature Writing, and the Formation of American Culture (Cambridge, MA: Belknap Press of Harvard University Press, 1995), 50. 15 Ross L. Wiley, letter, n.d., Farm Land Preservation Association Inc. Records, 1976-1979, 2. 16 Ibid. 17 Letter, n.d., Farm Land Preservation Association Inc. Records, 1976-1979, 1. 18 Glenn J. Burrows to Bruce J. Terris, May 13, 1978, Farm Land Preservation Association Inc. Records, 1976-1979, 1. 19 Clifford R. Schildmeier, letter, n.d., Farm Land Preservation Association Inc. Records, 1976-1979, 2. 20 Farm Land Preservation Association Inc., How to Save the Taxpayers$96,000,000 and Preserve 450 Acres of Pristine Farm Land, (Linn County et al., IA: Farm Land Preservation Association Inc., n.d.), Farm Land Preservation Association Inc. Records, 1976-1979, 2. The Association disbanded in 1984, following an 8th U.S. Circuit Court of Appeals decision in 1979 to permit the Iowa Department of Transportation to construct the highway; I-380 was completed in 1985. See Farm Land Preservation Association Inc. et al. v. Neil Goldschmidt et al., 611 F. 2d 233 (8th Cir. 1979). 21 Smiley, A Thousand Acres, 123. 22 Ibid., 15, 33. 23 Louise McEachern to PrimeTime Live, Nov. 24, 1989, Citizens Against River Pollution Records, 1988-1992, Special Collections Department, Parks Library, Iowa State University, Ames, IA, 1 (emphasis original). 24 McEachern, letter, Feb. 20, 1990, Citizens Against River Pollution Records 1988-1992, 1. 25 McEachern to Iowa state Rep. Mark Shearer, Jan. 13, 1991, Citizens Against River Pollution Records, 1988-1992, 1. 26 McEachern to Jan Mickelson, April 6, 1990, Citizens Against River Pollution Records, 1988-1992, 1. 27 See Cheryl Temple Herr, Critical Regionalism and Cultural Studies: From Ireland to the American Midwest (Gainesville: University Press of Florida, 1996), for a trenchant comparative transnationalist/transregionalist analysis of the Midwest. 28 Norman H. Borlaug, “Sustainable Agriculture: For How Many, at What Standard of Living, and Over What Period of Time?” (lecture, Texas A&M University, College Station, TX, Oct. 25, 1990), Norman H. Borlaug Papers, 1941-1997, Special Collections Department, Parks Library, Iowa State University, Ames, IA, 8. 29 Smiley, A Thousand Acres, 168. 30 Ibid., 254. 31 Ibid., 48, 167. 
32 Ibid., 325 (emphasis original). 33 Ibid. 34 See Walter Rodney, How Europe Underdeveloped Africa (Washington, D.C.: Howard University Press, 1972). 35 Smiley, A Thousand Acres, 71. 36 See William Cronon, Nature's Metropolis: Chicago and the Great West (New York: W.W. Norton, 1991). 37 U.S. Meat Export Federation, Meat Nutri-Facts: U.S. Pork (U.S. Meat Export Federation, n.d.), U.S. Meat Export Federation, Collected Materials, 1985-1992, Special Collections Department, Parks Library, Iowa State University, Ames, IA), 2. 38 “Pork,” Action, April 1989, U.S. Meat Export Federation, Collected Materials, 1985-1992, 5. 39 “Pork Exports,” Action, March 1989, U.S. Meat Export Federation, Collected Materials, 1985-1992, 4. 40 See Bruce Robbins, “Commodity Histories” (PMLA 120, no. 2 [2005]), who underscores and explains the contemporary seduction of commodity narratives. 41 Such appropriations of globalization reiterate Heise's caution: “This argument for an increased emphasis on a sense of planet ... should be understood not as a claim that environmentalism should welcome globalization in every form ... or as a refusal to acknowledge that appeals to indigenous traditions, local knowledge, or national law are in some cases appropriate and effective strategies” (Sense of Place 59). 42 “U.S. Pork Check-off/MEF Promotion Extended at Swensen's in Singapore,” Action, January 1988, U.S. Meat Export Federation, Collected Materials, 1985-1992, 1. 43 Smiley, A Thousand Acres, 251, 249. 44 See Steven G. Kellman, “Food Fights in Iowa: The Vegetarian Stranger in Recent Midwest Fiction” (Virginia Quarternly Review 71, no. 3 [1995]). 45 Richard Manning, “The Oil We Eat: Following the Food Chain Back to Iraq,” Harper's Magazine (February 2004), 42. 46 Agricultural Policy Working Group, Decoupling: A New Direction in Global Farm Policy (Washington, D.C.: Agricultural Policy Working Group, 1988), Agricultural Policy Working Group Records, 1987-1988, Special Collections Department, Parks Library, Iowa State University, Ames, IA, 19-20. 47 See Justin Gillis, “A Warming Planet Struggles to Feed Itself” (New York Times, 4 June 2011). 48 Practical Farmers of Iowa, Annual Report (Ames, Iowa: Practical Farmers of Iowa, 2002), Practical Farmers of Iowa Records, 1985-1992, Special Collections Department, Parks Library, Iowa State University, Ames, IA, 2. 49 For example, the group has investigated niche markets for pork in Sweden and Denmark (Annual Report 11), and they have sponsored “All-Iowa Meals” which tell the stories of the foods and farmers who grew them (Annual Report 12). 50 Ibid., 1. 51 Heise, Sense of Place, 60. 52 Martha Nussbaum, Not for Profit: Why Democracy Needs the Humanities (Princeton, N.J.: Princeton University Press, 2010), 2. This is an open access article distributed under the terms of a Creative Commons License (CC BY-NC-ND 3.0). This license permits use and distribution of the article for non-commercial purposes, provided the original work is cited and is not altered or transformed.
2019-10-23 11:10:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17435738444328308, "perplexity": 13434.227765233718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833089.90/warc/CC-MAIN-20191023094558-20191023122058-00352.warc.gz"}
https://cs.stackexchange.com/questions/32245/lamport-timestamps-when-to-update-counters
# Lamport Timestamps: When to Update Counters

In the timepiece (excuse the pun) that is Time, Clocks and the Ordering of Events, Lamport describes the logical clock algorithm as follows:

1. Each process $P_i$ increments $C_i$ between any two successive events.
2. If event $a$ is the sending of a message $m$ by process $P_i$, then the message $m$ contains a timestamp $T_m = C_i(a)$.
3. Upon receiving a message $m$, process $P_i$ sets $C_i$ greater than or equal to its present value and greater than $T_m$.

However, the algorithm as it is described on Wikipedia (and other websites) is a little different:

1. A process increments its counter before each event in that process.
2. When a process sends a message, it includes its counter value with the message.
3. On receiving a message, the receiver process sets its counter to be greater than the maximum of its own value and the received value before it considers the message received.

This leaves me with the following questions:

1. Should we increment the counter before sending a message, as the sending of a message is itself an event? This incremented timestamp is the value that is sent with the message.
2. When a message is received by process $P_i$, Lamport states that $P_i$'s logical clock should be set to $\max(T_m + 1, C_i)$. However, the Wikipedia article says that this should be $\max(T_m, C_i) + 1$. Is Wikipedia wrong?

Considering that any local action (e.g. increasing a counter) done by a process is an event, the Wikipedia sentence "A process increments its counter before each event in that process." does not make any sense to me.

Let me try to answer your questions:

1. Should we increment the counter before sending a message, as the sending of a message is itself an event? This incremented timestamp is the value that is sent with the message.

Both actions (i.e. increasing the counter and sending the message) happen atomically in the same event. The same is true when a message is received: the receive event already includes the counter update.

2. When a message is received by process $P_i$, Lamport states that $P_i$'s logical clock should be set to $\max(T_m+1, C_i)$. However, the Wikipedia article says that this should be $\max(T_m, C_i)+1$. Is Wikipedia wrong?

Note that, according to Lamport's paper, the logical clocks must satisfy the following property: if $a$ happens before $b$ then $C(a) < C(b)$. In particular, this means that clock values (of events) at the same process must be strictly increasing. Therefore, the correct update rule is $\max(T_m,C_i)+1$, as otherwise two subsequent events at the same process might have the same value.
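To make the accepted rule concrete, here is a minimal C++ sketch of a per-process Lamport clock. The struct and method names are my own for illustration (they are not from Lamport's paper or any particular library): the counter is incremented for every local event, including sends, and a receive applies $\max(T_m, C_i) + 1$, so clock values at a single process stay strictly increasing.

```cpp
#include <algorithm>
#include <cstdint>

// Minimal Lamport clock for a single process.
struct LamportClock {
    uint64_t counter = 0;

    // Local event: increment first, then use the new value
    // as the event's timestamp.
    uint64_t tick() {
        return ++counter;
    }

    // Sending a message is itself an event, so tick() and
    // attach the returned value to the outgoing message.
    uint64_t onSend() {
        return tick();
    }

    // Receiving a message carrying timestamp tm: take the maximum
    // of the local counter and tm, then add one, so the receive
    // event is strictly later than both the send event and the
    // previous local event.
    uint64_t onReceive(uint64_t tm) {
        counter = std::max(counter, tm) + 1;
        return counter;
    }
};
```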
2019-06-25 07:39:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7463129162788391, "perplexity": 719.3864675735674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00156.warc.gz"}
http://mathhelpforum.com/discrete-math/53848-how-many-bit-strings-there-length-6-less.html
# Thread: how many bit strings are there of length 6 or less 1. ## how many bit strings are there of length 6 or less thank you in advance 2. For each n, there are $2^n$ bitstrings of length n. So add them up!
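As a concrete check of that hint (the question leaves open whether the empty string of length 0 counts): including it, $\sum_{n=0}^{6} 2^n = 2^7 - 1 = 127$; counting only lengths 1 through 6, $\sum_{n=1}^{6} 2^n = 126$.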
2013-12-08 21:49:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5315974354743958, "perplexity": 495.1096648414092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163818502/warc/CC-MAIN-20131204133018-00088-ip-10-33-133-15.ec2.internal.warc.gz"}
https://codereview.stackexchange.com/questions/31451/craps-game-rules-and-code/31452
# Craps game rules and code

I'm developing a small craps game in C++, and my C++ skills are a bit rusty. I really need someone to review my code to ensure that I have correctly implemented the game according to the rules.

Game rules:

1. The player or shooter rolls a pair of standard dice
   1. If the sum is 7 or 11 the game is won
   2. If the sum is 2, 3 or 12 the game is lost
   3. If the sum is any other value, this value is called the shooter’s point and he continues rolling until he rolls a 7 and loses or he rolls the point again in which case he wins
2. If a game is won the shooter plays another game and continues playing until he loses a game, at which time the next player around the Craps table becomes the shooter

My code:

    #include <iostream>
    #include <ctime>
    using namespace std;

    bool checkWinning(int roll);

    int main(int argc, const char * argv[])
    {
        //create player aka shooter
        //create pair of dice
        unsigned int dice1=0;
        unsigned int dice2 = 0;
        //create roll
        unsigned int roll = 0;
        //create game loop
        while(checkWinning(roll) == true)
        {
            dice1 = rand() % 6 + 1;
            dice2 = rand() % 6 + 1;
            roll = dice1 + dice2;
            cout<< dice1<<" +"<< dice2<<" = "<< roll << endl;
            //cout<< checkWinning(2) <<endl;
        }
        return 0;
    }

    bool checkWinning(int roll)
    {
        bool winner = true;
        if( roll == 2 || roll == 3 || roll == 12)
            return winner= false;
        else if(roll == 7 || roll == 11 )
            return winner;
        else
            return winner;
    };

• The shooter loses his turn if a 7 is rolled after there is a point. – dbasnett Sep 18 '13 at 14:13

• The parameters in main() are only necessary if you're executing from the command line.

• You're not calling std::srand() nor including <cstdlib>. However, if you're using C++11, both std::srand and std::rand are not recommended due to certain computational complications (the C++11 pseudo-random number generators can be found under <random>). But, for a simple program, it may not matter. In general, here's how to call std::srand():

        // casting may just be necessary if warnings are generated
        // that will alert you if there's a possible loss of data
        // prefer nullptr to NULL if using C++11
        std::srand(static_cast<unsigned int>(std::time(NULL)));

  Only include this once, preferably at the top of main(). This is preferred because

  1. It'll help you keep track of it, especially if it'll need to be removed at some point.
  2. If called repeatedly, you'll receive the "same random number" each time.

• It's best to keep variables as close in scope as possible. Here, dice1 and dice2 can be initialized in the while-loop:

        unsigned int dice1 = rand() % 6 + 1;
        unsigned int dice2 = rand() % 6 + 1;

  roll, however, will need to stay where it is so that the loop will work.

• The bool-checking can be shortened:

        // these are similar
        while (checkWinning(roll) == true)
        while (checkWinning(roll))

        // these are also similar
        while (checkWinning(roll) == false)
        while (!checkWinning(roll))

• checkWinning() takes int roll, but roll is already unsigned int. They should match.

• checkWinning()'s closing curly brace shouldn't have a ;. It's not a type.

• bool winner seems redundant; just return true or false. Also, the conditions seem a little unclear. If the sum constitutes a win or a re-roll, how do you specifically distinguish the two? They both return true. I'd at least rename the function for clarification. There's also a Boolean enum, but that may be overkill here (or even unnecessary as there are only two ending outcomes).

• There should be a final outcome message, indicating a win or a loss.
Also, you're not giving the player the option to play another game if victorious (and until loss).

• Good points, but two nit picks: The parameters to main have nothing to do with whether you're compiling from the command line, but rather how you're running (creating) it. And for "if using C++11, use nullptr instead..." You should also mention that if you're using C++11, run the hell away from srand()/rand(). rand() has terrible range, is a pain in the ass to clamp to a range without bias, and a 32 bit seed limit is rather meh (half of std::time() gets truncated for example). Doesn't matter for a basic card game program, but CR brings out my inner pedant :). – Corbin Sep 18 '13 at 5:42
• @Corbin: Good points. I'll put those in with my current edits. – Jamal Sep 18 '13 at 5:43
• @Corbin: Also, I had no idea about std::srand() in this case. I did happen to come across std::uniform_int_distribution not too long ago. Would this be a valid solution for here? – Jamal Sep 18 '13 at 6:33
• Yeah. You would use a std::uniform_int_distribution<int>(1, 6) to achieve the same functionality as what he's done. A full (conveniently dice-oriented) example is here: en.cppreference.com/w/cpp/numeric/random/… – Corbin Sep 19 '13 at 0:47
• @Corbin: Awesome! It appears to work with my old compiler, too. Looks like I can toss out rand() from my own stuff. – Jamal Sep 19 '13 at 1:15

Using % with rand is wrong. Assuming your RAND_MAX is the same as mine (2147483647), the probabilities for each number from dice1 = rand() % 6 + 1; are:

    1: 357913942/2147483647   (notice a slightly higher probability for a 1)
    2: 357913941/2147483647
    3: 357913941/2147483647
    4: 357913941/2147483647
    5: 357913941/2147483647
    6: 357913941/2147483647

The solution: use C++11 random functionality, or correct for the skew in C++03's rand(). Unfortunately I can't find a correct answer on SO for using rand().

    int dieRoll() // return a number between 1 and 6
    {
        static int maxRange = RAND_MAX / 6 * 6; // note static so calculated once.
        int result;
        do
        {
            result = rand();
        } while(result > maxRange); // Anything outside the range will skew the result
        return result % 6 + 1;      // So throw away the answer and try again.
    }

Note: int result = rand() * 1.0 / range; // does not help with distribution

In addition to what @Jamal said…

The singular for "dice" is "die", so name your variables accordingly. It's C++, so Die deserves to be a class, with a Die.roll() method. The constructor could call std::srand().

You should use a do-while loop. Then you could avoid having to artificially initialize all of your values to illegal 0 values.

Your checkWinning() function could just be a switch statement. It doesn't need a winner variable, and can just return the result immediately.

I see where you implemented rules 1.1 and 1.2, but I don't see any of your other game rules expressed anywhere in your code.
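As a rough illustration of that last suggestion — a sketch only, with an assumed three-way outcome type that is not part of the original post — the come-out-roll check (rules 1.1 and 1.2, plus the "point" case) might look like this:

```cpp
enum class Outcome { Win, Lose, Point };  // Point = shooter's point, keep rolling

Outcome evaluateComeOutRoll(unsigned int roll)
{
    switch (roll)
    {
    case 7:
    case 11:
        return Outcome::Win;    // rule 1.1
    case 2:
    case 3:
    case 12:
        return Outcome::Lose;   // rule 1.2
    default:
        return Outcome::Point;  // rule 1.3: this roll becomes the point
    }
}
```

Note this only covers the come-out roll; once a point is established, the win/lose conditions change (7 loses, rolling the point again wins), which would need a separate check.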
2020-08-07 12:38:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26034945249557495, "perplexity": 3587.113686534465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737178.6/warc/CC-MAIN-20200807113613-20200807143613-00519.warc.gz"}
https://www.acmicpc.net/problem/3122
Time limit: 1 second · Memory limit: 128 MB

Problem

A new computer game in the Star Expertise series is about to be released. Gamers have lined up in front of one store waiting for it to open. While waiting, gamers exchange recent experiences from games, occasionally drifting into real-world subjects. One unavoidable topic is their computer setups. A special source of pride and arrogance is the amount of memory on the graphics card inside a computer. Such discussions often become unpleasant, so the store decided to disallow access to gamers they consider too arrogant. To ensure objectivity, they devised a simple mathematical model. When a new gamer tries to enter the line, his graphics card is compared with the cards of players already in line, using a simple criterion: the more memory his graphics card has, the better it is. Each of the quotients (the memory of the new gamer divided by the memory of one already in line) is rounded down, and then the total arrogance of the new gamer is estimated to be the sum of all these quotients. For example, suppose there are already three players in line, with 3, 1 and 2 megabytes on their graphics cards. A new gamer with 3 megabytes will have arrogance 1+3+1=5. The store management will not allow a gamer to enter the line if his arrogance is greater than the number of people already in line. In the above example, the new gamer with 3 megabytes of memory and arrogance 5 would be denied access, but a gamer with 2 megabytes would be allowed, since his arrogance would be 0+2+1=3 (which is less than or equal to 3, the number of people already in line). Write a program that, given the calculated arrogances of gamers in line, finds one possible sequence of memory amounts on their graphics cards.

Input

The first line contains an integer N (1 ≤ N ≤ 100 000), the number of players in line. The second line contains N non-negative integers, the arrogances of players in line. The arrogances will be given in the order in which the gamers arrived. Additionally, the arrogance of gamer k (starting from 1) will be less than k.

Output

Output N integers on a single line, for each gamer the amount of memory on his graphics card, in the same order in which the gamers' arrogances were given. The amount of memory must be an integer less than 10^9. The solution may not be unique, but will always exist.

Sample Input 1

4
0 0 1 3

Sample Output 1

6 2 3 4

Sample Input 2

6
0 1 0 2 0 3

Sample Output 2

7 7 3 6 2 5
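To make the arrogance formula concrete, here is a small C++ sketch of just that computation (the function name and types are illustrative, not part of the problem statement); it is not a solution to the construction task itself, only the sum of floored quotients described above.

```cpp
#include <vector>

// Arrogance of a newcomer: sum over everyone already in line of
// floor(newMemory / existingMemory). Integer division already
// floors for positive values.
long long arrogance(long long newMemory, const std::vector<long long>& line)
{
    long long total = 0;
    for (long long m : line)
        total += newMemory / m;
    return total;
}
// e.g. arrogance(3, {3, 1, 2}) == 1 + 3 + 1 == 5, as in the statement.
```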
2023-01-28 17:46:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18590331077575684, "perplexity": 1918.5554243526321}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00573.warc.gz"}
https://www.thevalueinitiative.org/network-meta-analysis-of-parametric-survival-curves/
To inform health-care decision-making, treatments are often compared by synthesizing results from a number of randomized controlled trials. The meta-analysis may not only be focused on a particular pairwise comparison but can also include multiple treatment comparisons by means of network meta-analysis. For time-to-event outcomes such as survival, pooling is typically based on the hazard ratio (HR). The proportional hazards assumption that underlies current approaches to evidence synthesis is not only often implausible but can also have a huge impact on decisions based on differences in expected outcomes, such as cost-effectiveness analysis. The application of a constant HR implies the assumption that the treatment only has an effect on one characteristic of the survival distribution, while commonly used survival distributions, like the Weibull distribution, have both a shape and a scale parameter. Instead of using constant HRs, this paper proposes meta-analysis of treatment effects based on the shape and scale parameters of parametric survival curves. Source: Research Synthesis Methods
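To see why a single constant HR cannot capture an effect on both parameters (using one common Weibull parameterization, which may differ from the paper's): with hazard h(t) = λγt^(γ−1), the hazard ratio between two treatments is HR(t) = (λ₁γ₁ / λ₀γ₀) · t^(γ₁ − γ₀), which is constant over time only if the shape parameters γ₁ and γ₀ are equal. A treatment effect on the shape parameter therefore cannot be summarized by one HR.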
2022-06-28 21:16:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558865189552307, "perplexity": 1015.9199801638758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103617931.31/warc/CC-MAIN-20220628203615-20220628233615-00529.warc.gz"}
https://competitive-exam.in/questions/discuss/to-calculate-the-elasticity-of-demand-which-of
# To calculate the elasticity of demand, which of the following formulas is used?

• Percentage change in demand ÷ Original demand
• Proportionate change in demand ÷ Proportionate change in price
• Change in demand ÷ Change in price
• None of the above
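For reference, the standard textbook definition is: price elasticity of demand = proportionate (or percentage) change in quantity demanded ÷ proportionate (or percentage) change in price, i.e. E_d = (ΔQ/Q) ÷ (ΔP/P).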
2021-03-06 13:44:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8061245679855347, "perplexity": 4737.925125061549}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00588.warc.gz"}
https://socratic.org/questions/5770e36c11ef6b0185ce79ae
# How do you use Hess's law to find the enthalpy of reaction for these reactions?

a) NaOH(g) + CO₂(g) → Na₂CO₃(s) + H₂O(g)

b) C₂H₂(g) + H₂(g) → C₂H₄(g)

c) NO₂(g) ⇌ N₂O₄(g)

Jul 28, 2016

I'm actually going to go out of order for this, since it seems that (c) is easier than (b), which is easier than (a). The general idea is, enthalpy is a state function, so we only need to think about the initial and final states of the reaction. Therefore, we can treat the standard enthalpies of formation of each reactant and product stoichiometrically as initial and final states:

$\Delta H_\text{rxn}^\circ = \sum_P \nu_P \Delta H_{f,P}^\circ - \sum_R \nu_R \Delta H_{f,R}^\circ$

that is, the sum of the products' enthalpies of formation minus the sum of the reactants' enthalpies of formation, where $\nu$ is the stoichiometric coefficient in the balanced chemical reaction and $R$ / $P$ denote reactants / products.

ENTHALPIES OF REACTION: SINGLE REACTANT/PRODUCT

c) For the reaction 2 NO₂(g) → N₂O₄(g), which must be balanced first, it's important to realize that N₂O₄ can be either a gas or a liquid at standard conditions, so you have to make sure you look at the right number. My textbook lists:

ΔH°f(NO₂(g)) = 33.2 kJ/mol
ΔH°f(N₂O₄(g)) = 9.16 kJ/mol

So you just have:

ΔH°rxn = [ν(N₂O₄(g))·ΔH°f(N₂O₄(g))] − [ν(NO₂(g))·ΔH°f(NO₂(g))]
= [(1)(9.16 kJ/mol)] − [(2)(33.2 kJ/mol)]
= −57.2 kJ/mol

ENTHALPIES OF REACTION: ELEMENTAL STATES

b) For elements in their elemental form, ΔH°f = 0, as for H₂(g) (and for O₂(g), F₂(g), Br₂(l), I₂(s), N₂(g), and Cl₂(g)). After all, they naturally formed that way. Besides that, my textbook lists:

ΔH°f(C₂H₂(g)) = 227.4 kJ/mol
ΔH°f(C₂H₄(g)) = 52.4 kJ/mol

for the already-balanced reaction written as C₂H₂(g) + H₂(g) → C₂H₄(g).

Same deal as before. Just note that you can ignore H₂(g) because its standard enthalpy of formation is 0.

ΔH°rxn = [ν(C₂H₄(g))·ΔH°f(C₂H₄(g))] − [ν(C₂H₂(g))·ΔH°f(C₂H₂(g)) + ν(H₂(g))·ΔH°f(H₂(g))]
= [(1)(52.4 kJ/mol)] − [(1)(227.4 kJ/mol) + (1)(0 kJ/mol)]
= −175 kJ/mol

ENTHALPIES OF REACTION: MULTIPLE REACTANTS/PRODUCTS

a) This one would be the one where you could possibly mess up, since there is more than one reactant/product, and so, you could mess up your signs. For

2 NaOH(s) + CO₂(g) → Na₂CO₃(s) + H₂O(g),

my textbook lists:

ΔH°f(NaOH(s)) = −425.8 kJ/mol
ΔH°f(CO₂(g)) = −393.5 kJ/mol
ΔH°f(Na₂CO₃(s)) = −1130.7 kJ/mol
ΔH°f(H₂O(g)) = −241.8 kJ/mol

So, don't use the value for CO₂(aq). Also, you should make sure you have parentheses:

ΔH°rxn = [ν(Na₂CO₃(s))·ΔH°f(Na₂CO₃(s)) + ν(H₂O(g))·ΔH°f(H₂O(g))] − [ν(NaOH(s))·ΔH°f(NaOH(s)) + ν(CO₂(g))·ΔH°f(CO₂(g))]
= [(1)(−1130.7 kJ/mol) + (1)(−241.8 kJ/mol)] − [(2)(−425.8 kJ/mol) + (1)(−393.5 kJ/mol)]
= −127.4 kJ/mol
2019-11-20 00:05:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 42, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709105253219604, "perplexity": 1623.740585749082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00188.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/11/lesson/11.2.5/problem/11-154
### Problem 11-154

11-154. Suppose $g(x) = 3^x$. Approximate the instantaneous rate of change at $x = 2$ to the nearest $0.001$.

$\lim\limits_{h\to 0} \frac{g(2+h)-g(2)}{(2+h)-2} =\frac{3^{2+h}-3^2}{h}$

Use a calculator to approximate the IROC.
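As a concrete check (the choice of $h$ here is just an example): taking $h = 0.0001$ gives $\frac{3^{2.0001} - 3^2}{0.0001} \approx 9.888$, consistent with the exact value $g'(2) = 9\ln 3 \approx 9.888$.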
2020-05-27 01:00:58
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186761140823364, "perplexity": 2701.013011476388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00067.warc.gz"}
https://brilliant.org/problems/is-the-momentum-conserved/
# Is the Momentum conserved?

Classical Mechanics – Level 2

A particle of mass 1 kg is projected at an angle of 30 degrees with the horizontal with velocity v = 40 m/s. What will the change in linear momentum of the particle be after t = 1 s? (g = 10 m/s^2)
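One worked check, assuming the standard idealization that only gravity acts during flight: by the impulse–momentum theorem, Δp = F·t = m·g·t = 1 kg × 10 m/s² × 1 s = 10 kg·m/s, directed vertically downward. The launch speed and angle do not affect this, and the particle is still airborne at t = 1 s since its total time of flight is 2·v·sin30°/g = 4 s.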
2016-10-28 17:54:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8124624490737915, "perplexity": 354.1456715922148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725451.13/warc/CC-MAIN-20161020183845-00387-ip-10-171-6-4.ec2.internal.warc.gz"}
https://codegolf.stackexchange.com/questions/195592/average-two-letters
# Introduction Every letter in the English alphabet can be represented as an ASCII code. For example, a is 97, and S is 83. As we all know, the formula for averaging two numbers $$\x\$$ and $$\y\$$ is $$\\frac{x+y}{2}\$$. I'm pretty sure you can see where this is going. Your challenge is to average two letters. # Challenge Your program must take two letters as input, and output the average of the ASCII values in it. If the average is a decimal, you should truncate it. • Input will always be two ASCII letters. You can assume they will always be valid, but the case may vary. Basically, both letters will be in the range 97-122 or 65-90. The second letter will always have a greater ASCII value than the first. If your language has no method of input, you may take input from command line arguments or from a variable. • You must output the ASCII character signified by the average of the two numbers. As stated above, it should always be truncated to 0 decimal places. If your language has no method of output, you may store it in a variable. Exit codes and return values are considered valid output methods. # Example I/O • Input: A, C Output: B • Input: a, z Output: m • Input: d, j Output: g • Input: B, e Output: S • Input: Z, a Output: ] # Rules This is , so shortest answer in bytes wins! • Please specify whether, for example, B e is a valid input. – Greg Martin Nov 10 '19 at 19:53 • Why is the output of the last example U? The value of B is 66 and the value of e is 101, which averages to 83.5, truncated to 83, which corresponds to S – Matthew Jensen Nov 10 '19 at 23:05 • If that example is correct, it will invalidate all of the existing answers. – user85052 Nov 11 '19 at 10:05 • Sorry. I read an ASCII table wrong and got 69 for B, not 66. – sugarfi Nov 11 '19 at 14:00 • Could I enter for a non-ASCII compliant system? – Shaun Bebbers Nov 25 '19 at 17:28 # C# (Visual C# Interactive Compiler), 20 bytes a=>b=>(char)(a+b>>1) Try it online! # x86-16 machine code, IBM PC DOS, 1312 10 bytes Binary (xxd): 00000000: a182 0002 c4d0 e8cd 29c3 .......... Listing: A1 0082 MOV AX, [0082H] ; load two chars into AH and AL from command line 02 C4 ADD AL, AH ; AL = AL + AH D0 E8 SHR AL, 1 ; AL = AL / 2 CD 29 INT 29H ; write to console Standalone PC DOS executable. Input is via command line, output to console. Example: • Only 5 bytes if written as a function :-) – Cody Gray Nov 12 '19 at 18:57 • @CodyGray only 4 bytes as a snippet or a MACRO. Not sure where the "line" is though... :) – 640KB Nov 13 '19 at 17:22 • Functions are always permitted, as are full programs, per house code golfing rules. Not sure about macros, though; interesting suggestion. Although avoiding that 1-byte RET is probably not going to be enough to make a difference most of the time... – Cody Gray Nov 14 '19 at 0:53 # Jelly, 4 bytes OSHỌ Try it online! # Explanation OSHỌ Main Link: takes (a, b) O (ord(a), ord(b)) S sum; ord(a) + ord(b) H halve; (ord(a) + ord(b)) / 2 Ọ chr # J, 17 bytes (+/<.@%#)&.(3&u:) Try it online! • (+/<.@%#) truncated average... • &. "Under", which applies a transform, then the verb it modifies -- truncated avg in this case -- then the inverse transform.... • 3&u: convert to ascii byte integer. That is, it converts each letter to its ascii number, gets the truncated average of those, and applies the inverse of "convert to ascii number", which takes an ascii number and returns a letter. 
# Poetic, 163 bytes software inside a computer a robot+a man+a keypad+a plan=a PC still,P.C.this,P.C.that?i await a day i crush a PC i do Linux,i suppose Try it online! Poetic is an esolang I made in 2018 for a class project. It's basically brainfuck with word-lengths instead of symbols. (I actually use PC myself. 😉) • Love the idea of this language. – Jonah Nov 12 '19 at 21:55 # Ruby, 22 bytes ->a,b{""<<(a+b).sum/2} Try it online! • Huh. TIL the functionality of str << int. – Value Ink Nov 11 '19 at 22:54 # R, 56 37 bytes intToUtf8(mean(utf8ToInt(scan(,"")))) Try it online! Description • intToUtf8() converts the average into its ASCII character. • mean() takes the average which is automatically truncated. • utf8ToInt() converts the inputs into two ASCII numbers. • scan() allows inputs. • Welcome to CGCC! A few tips: 1. You can use an anonymous function, so don't need the f=; 2. You don't need print; 3. In this case, it is actually shorter to take input with scan that to define a function; 4. floor is not needed: if you feed a non-integer numeric to intToUtf8, R truncates it automatically before converting to character. Also, TIO must have been experiencing issues when you tried it; the base package is included. All in all, your solution can be shortened to 37 bytes. – Robin Ryder Nov 10 '19 at 17:43 • @RobinRyder Thanks for the reduction! – TheSimpliFire Nov 10 '19 at 19:15 # Keg-ir-oc, 5 2 bytes (SBCS) Works in all 3 test cases. +½ Try it online! # Explanation -ir will *not* try to evaluate the input ½ Halve the value -oc Output all as a character, if possible Implicit print. The output is print nice by default. $$$$ • Loading issue. I just tried the first test case and it didn't error.(I got to clone the most recent Keg interpreter.) – user85052 Nov 9 '19 at 14:36 • Using -ir and -oc will allow this two byter: +½ – Lyxal Nov 9 '19 at 21:12 • What a great solution. I think we cannot go shorter. – stephanmg Nov 11 '19 at 9:11 # K (oK), 8 bytes Solution: c$.5*+/ Try it online! Explanation: Sum, multiply by 0.5 and convert to ASCII. c$.5*+/ / the solution .5* / multiply by 0.5 c$/ convert to ASCII # Bash, 56 bytes printf \\x$(printf %x $[printf "(%d+%d)/2" \'$1 \'$2]) Try it online! • 1) The outer pair of quotes is needed only because the \x, so better escape just that single character; 2) The deprecated $[..] is shorter for arithmetic evaluation; 3) The old .. is shorter for subcommand, except when needs escaping; 4) The inner pairs of quotes are needed only because of ', so better escape just those characters. Try it online! – manatwork Nov 11 '19 at 11:29 • @manatwork: Do you want to post this as an answer? Or should I edit my post? – stephanmg Nov 11 '19 at 11:38 • Feel free to edit your post. Is mostly your work. I didn't had the patience this time to juggle with the printfs. – manatwork Nov 11 '19 at 11:47 • You have an error in the version posted here: the closing " should be on the left side of the nearby space to avoid touching the next argument's first character, the \. (The TIO code is correct though.) – manatwork Nov 11 '19 at 15:14 • @manatwork: Thanks for pointing me to this. – stephanmg Nov 11 '19 at 15:21 # Shakespeare Programming Language, 144 bytes Try it online! /.Ajax,.Puck,.Act I:.Scene I:.[Enter Ajax and Puck] Ajax:Open mind. Puck:Open mind. You is the quotient betweenthe sum ofyou I a big cat. Speak thy. Simple enough, just finds the average. ASCII characters and numbers are identical in SPL, so this language was ideal for the task. 
# dzaima/APL, 11 bytes (+/÷≢)⍢⎕UCS Try it online! dzaima/APLs ⎕UCS - convert to/from char currently ignores the fractional part of the given number, so no floor is necessary. # Befunge-98 (PyFunge), 7 bytes ~~+2/,@ Try it online! # Python 2, 31 bytes lambda*A:chr(sum(map(ord,A))/2) Try it online! # jq, 23 characters [explode|add/2]|implode Sample run: bash-5.0$jq -Rr '[explode|add/2]|implode' <<< 'AC' B Try it online! # Excel, 28 bytes =CHAR((CODE(A1)+CODE(B1))/2) # Lua, 42 41 bytes a=...print(a.char(a:byte()+a:byte(2)>>1)) Try it online! Removed 1 byte using ouflak's method of taking input as a single command line argument. Takes input as a single command line argument of two characters. Uses the convenient operator precedence of >>. Note that this is actually a full standalone Lua 5.3 program, because command line arguments are accessible as a top-level vararg. • You beat me: print(string.char((io.read():byte()+io.read():byte())/2)). – stephanmg Nov 11 '19 at 9:30 • Thanks to your tip on my answer, I think I just figured out how to save 1 byte on yours. – ouflak Nov 11 '19 at 14:46 # Julia 1.0, 26 bytes a\b=Char(sum(Int[a,b])÷2) TIO was timing out for me, so only tested at REPL. Try it online! • "You must output the ASCII character signified by the average of the two numbers". Shouldn't your output be a single ASCII character? – ouflak Nov 12 '19 at 16:58 • Thanks for catching that, fixed. – gggg Nov 12 '19 at 17:42 # Python 2, 33 32 bytes -1B from Embodiment of Ignorance using bit ops. Exactly as specified. For a Python 3 answer change the / into //. lambda a,b:chr(ord(a)+ord(b)>>1) Try it online! • This could be just /2 in Python 2. – Arnauld Nov 9 '19 at 14:18 • I mean, this way. – Arnauld Nov 9 '19 at 14:24 • I see, I thought the complete function body can be turned into /2. – user85052 Nov 9 '19 at 14:25 • You can save 2 bytes with lambda*s:chr(sum(map(ord,s))/2) – Uri Granta Nov 9 '19 at 15:42 • Nice golfing, but now it is an almost exact duplicate of this answer. (The names of the parameters are changed.) – user85052 Nov 9 '19 at 23:38 # Lua, 63 60 bytes s=io.read()print(s.char(math.floor((s:byte()+s:byte(2))/2))) Takes the two letters with no delimeters, i.e. AB, j$, |1, etc.... Try it online! Saved 3 bytes thanks to PhillipRoman • There are shorter Lua solutions. They look also interesting, @outflak. – stephanmg Nov 11 '19 at 9:45 • @ouflak I'd like to clarify that my solution is a full standalone Lua 5.3 program, because command line arguments are always accessible as a top-level vararg – PhilipRoman Nov 11 '19 at 13:26 • @PhilipRoman, So what TIO is doing is virtually running a command line Lua 'shell' and passing in the arguments appropriately? Interesting.... – ouflak Nov 11 '19 at 14:09 • @PhilipRoman, I assumed naturally that that's what you were doing. I was pondering how TIO did it. Somehow they have to emulate "lua myscript.lua a z". – ouflak Nov 11 '19 at 14:23 • @ouflak Ah, sorry, I misunderstood you. Yeah, the fact that TIO shows the time taken just like the linux "time" command, seems to indicate that there is indeed a real shell involved. – PhilipRoman Nov 11 '19 at 14:26 # MarioLANG, 63 bytes , ) , >[!(>[! "=#="=# - ( > ) ( + - !+< ) [ #=" !-<) #=". Try it online! Super golfable, I'm sure - not really able to think in MarioLANG yet. Calculates $$\\lfloor\frac{x+y}{2}\rfloor\$$. # Zsh, 33 31 characters a=({$1..$2} 0) echo ${a[$#a/2]} This one does no character code conversion. 
Sample run: manatwork ~ % set -- A C manatwork ~ % a=({$1..$2} 0);echo ${a[$#a/2]} B Try it online! • Very clever use of brace expansion! I managed 27 bytes in a more boring way :) – roblogic Dec 8 '20 at 14:47 # Forth (gforth), 15 bytes : f + 2/ emit ; Try it online! ### Code Explanation : f \ start a new word definition + \ add top two stack arguments 2/ \ divide top stack value by 2 emit \ output char of resulting ascii value ; \ end word definition • The input requires char, e.g. char a char c f would output b. – agc Nov 12 '19 at 21:46 • More usable version: : g char char + 2/ cr emit ;, run like g e B, (outputs S), etc. Note that it doesn't seem to matter which order the arguments are in, so g B e also outputs S. – agc Nov 12 '19 at 21:51 • @agc In gforth ' will also work, so 'a 'b f would also output b and looks much more similar to other languages – reffu Nov 13 '19 at 13:41 # K4, 9 8 bytes Solution: 10h$_avg Examples: q)k)10h$_avg"AC" "B" q)k)10h$_avg"az" "m" q)k)10h$_avg"dj" "g" Explanation: Unfortunately the space is needed. Turns out the space isn't necessary! 10h$_avg / the solution avg / calculate mean _ / floor 10h$ / cast to char Bonus: • 10h$-256+avg for a 12 byte Q version (more/less hacky than 10h$(_)avg for 10) # Red, 18 bytes func[a][average a] Try it online! Takes the input as a list of two letters. If this is not acceptable: # Red, 20 bytes func[a b][a + b / 2] Try it online! • So Red's default behavior is to do the conversion to numbers and then back to letter for the output? – Jonah Nov 9 '19 at 19:28 • @Jonah The type of the result is implied by the first argument. #"A" + 1 is #"B"; 1 + #"A" is 66 – Galen Ivanov Nov 9 '19 at 19:33 • @Jonah From the documentation: "The full range of mathematical functions can be used with char! values. A Math Error is raised if the result of the arithmetic falls outside of the range 00 - 10FFFF (hexadecimal)." – Galen Ivanov Nov 9 '19 at 19:39 # MATL, 3 bytes Ymc Try it online! ### Explanation Ym % Implicit input: string of two letters. Implicitly convert to ASCII, and take mean c % Implicitly round down, and convert to char % Implicit display • It's fun to stay at the... – Luis Mendo Nov 9 '19 at 16:02 # Runic Enchantments, 7 bytes ii+2,k@ Try it online! Input is space sepatrated. Use invalid inputs at your own peril. # PowerShell, 32 bytes $args|%{$s+=+$_} [char]($s-shr1) Try it online! # Pari/GP, 57 bytes fun(x,y)=Strchr(floor((Vecsmall(x)[1]+Vecsmall(y)[1])/2)) Try it online! Description • Vecsmall(x)[1] gives the ASCII number of x. • Vecsmall(y)[1] gives the ASCII number of y. • /2 gives the average. • floor() truncates the average. • Strchr() converts the average to its ASCII character. # PHP (7.4), 34 bytes fn($a,$b)=>chr(ord($a)+ord($b)>>1) Try it online!
2021-03-01 16:02:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2767482101917267, "perplexity": 5955.167999171699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00272.warc.gz"}
https://codegolf.meta.stackexchange.com/questions/2140/sandbox-for-proposed-challenges/9650
# What is the Sandbox? This "Sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to the main page. This is useful because writing a clear and fully specified challenge on the first try can be difficult. There is a much better chance of your challenge being well received if you post it in the Sandbox first. See the Sandbox FAQ for more information on how to use the Sandbox. ## Get the Sandbox Viewer to view the sandbox more easily To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill] # Compute the mincut of a graph code-golf Given a graph, compute a division of the graph such that the edges stranded between the cut. Red line: a cut Green line: a mincut ## Input The first line will contain the number of nodes. The rest of the lines will contain pairs of positive integer IDs separated by spaces showing connectedness between the nodes with those IDs. Here's an example; for a graph where 1 is connected to 2 and 2 is connected to 3: 3 1 2 2 3 • You may assume that the nodes are numbered consecutively from one to the number of nodes. • However, you may not assume that the list of pairs of nodes is in any specific order. ## Output Simply output a comma-separated list of the IDs of the nodes of one of subgraphs created by the cut. • You cannot implement brute-force search. Other than that, feel free to use Karger's Algorithm* or another algorithm. Remember that Karger's algorithm is likely the easiest to implement. • Notice: you must run Karger's algorithm at least this many times to ensure a low chance of failure and a low chance of failure ## *Karger's algorithm For your convenience, I've included a simple description of Karger's algorithm. 1. find two adjacent nodes and merge them into one node (so that all nodes that where connected to the original two nodes are connected to the new node), concatenating the labels 2. repeat step one until there are only two nodes 3. the result is any label of one of the nodes 4. repeat steps 1-3 at least this many times to ensure a low probability of failure, and choose the result that occurs the most often • 1. Wouldn't it be better to take the graph as an adjacency matrix or list? 2. Your description of the minimum cut is somewhat confusing. 3. Karger's algorithm is probabilistic, which isn't allowed by our defaults (I don't think). Allowing probabilistic algorithms opens up a whole can of worms (for instance I could write a program that just returns a random cut) -- you should probably make it so that the algorithm must return the minimum cut two-thirds of the time or something similar if you want to allow them. – a spaghetto Jul 5 '16 at 0:07 • @quartata 1. it's an adjacency list 2. yeah I need help with that 3. I made sure you had to repeat it insert some math equation here amount of times – noɥʇʎԀʎzɐɹƆ Jul 5 '16 at 0:13 • Sorry I misunderstood the input. – a spaghetto Jul 5 '16 at 0:14 • Generally adjacency lists are done like [node1, connected_node1, connected_node2, ...] and not in pairs like you have it; this is more flexible and you don't have to specify the number of nodes (it is just the length of the input list) – a spaghetto Jul 5 '16 at 0:35 • "You cannot implement brute-force search" is too vague. What about a basically brute force search that shortcuts some obviously wrong possibilities? I think what you want is a running time bound. – xnor Jul 5 '16 at 1:55 • 1. 
The I/O description seems to assume that all answers will be programs taking input on stdin and writing output to stdout, but our defaults are more flexible. In particular, by default we allow answers to be functions which take arrays and return arrays. Comma-separating is also IMO unnecessarily constrained, especially as the input isn't comma-separated. – Peter Taylor Jul 5 '16 at 7:51 • 2. "Feel free to use Karger's algorithm or another algorithm". There's an implicit licence here to use another non-deterministic algorithm, but although you give an explicit iteration count for Karger's algorithm you don't for e.g. randomised Kruskal's algorithm, which it's based on. 3. Besides which, in general I don't think that questions should tell people which algorithm to use. Specify the task and constraints (e.g. "Randomised algorithms are allowed, but must find the correct answer with probability at least foo. All answers must be polynomial-time"). – Peter Taylor Jul 5 '16 at 7:54 • 4. But if you're going to include an algorithm description, be careful to get it right. Karger's algorithm is randomised, but in the description given there's no mention of where the random selection occurs or of what uniformity constraints are required to get the desired behaviour. – Peter Taylor Jul 5 '16 at 7:55 • Food for thought: outputting the value of the min cut instead might lend to more approaches. Also, any rules on min cut/max flow/possibly other optimisation builtins? – Sp3000 Jul 5 '16 at 10:34 • I'm going to add a story to this soon. – noɥʇʎԀʎzɐɹƆ Jul 5 '16 at 13:55 # Generate a random Vietnamese syllable tags: The Vietnamese syllable space is interesting, because it is huge. TODO: Describe the space and why it is interesting. Here's how such syllables are made: The onset matches the regex ^([bcdđghklmnprstvx]|qu|[cgkpt]h|ng|tr)$The vowel is one of the following massive list: a à a' ã á a. â â â' â~ â´ â. a. ă ă' ă~ ă´ ă. e è e' e~ é e. ê ê ê' ê~ ê´ ê. i ì i' ĩ í i. ia iê o ò o' õ ó o. oa oă oe ô ô ô' ô~ ô´ ô. o' ò' o" õ' ó' o'. u ù u' ũ ú u. ua uâ uê ui uô uo' uy u' ù' u" ũ' ú' u'. u'a u'o' y y y' y~ ý y. ya yê The coda matches the regex ^([iouycptmn]|ch|ng|nh)$ (thanks Peter Taylor!) The onset, vowel and coda are concatenated to make the result syllable. ## Objective The objective is to generate random Vietnamese syllables. Your program has to take no input and as output include only the syllable, with an optional trailing new line. ## Clarifications • Each syllable must be generated with a non zero probability. I think it's unclear. Contributions are so much welcome. • 1. I'm not sure what you mean by can be with. 2. You don't mention randomness anywhere excwpt the clarification. 3. Object should probably be Objective. – Dennis Jul 3 '16 at 18:44 • 1. c can also be with h means that h can follow c as the 2nd stage letter in the syllable. 2. Where should I also mention it? 3. Ah k :) – user48538 Jul 3 '16 at 18:46 • If I'm reading this correcting then it can be vastly simplified by saying that the onset matches the regex ^([bcdđghklmnprstvx]|qu|[cgkpt]h|ng|tr)$, the vowel is one of a massive set of options (I don't see any benefit to splitting that into "stage 2" and "stage 3"), and the coda matches the regex ^([iouycptmn]|ch|ng|nh)$ – Peter Taylor Jul 4 '16 at 16:36 # Let's play some Briscola Briscola is an Italian game, played with a deck of 40 cards, divided in 4 suits - coins (denari - D), swords (spade - S), cups (coppe - C) and clubs (bastoni - B). 
The values on the cards range numerically from one through seven, plus 3 special cards - knave (11), knight (12) and king (13). ## Gameplay: After the deck is shuffled, each player is dealt three cards. The next card is placed face up on the playing surface, and the remaining deck is placed face down. This card is the Briscola, and represents the trump suit for the game. The first player starts by playing one card face up on the playing surface. Each player subsequently plays a card in turn, until all players have played one card. The winner of that hand is determined as follows: If any briscola (trump) has been played, the player who played the highest valued trump wins, else the player who played the highest card of the lead suit (suit of the first card played) wins. ## Ranking Briscola has a special type of ranking (from highest to lowest): 1 (ace), 3 (three), 13 (king), 12 (knight), 11 (knave), 7, 6, 5, 4, 2. ## Rules: Standard loopholes apply. ### Input: As input, you must accept 5 values (cards), in a reasonable format, for example: briscola (trump card), 1. card, 2. card, 3. card, 4. card ### Output: You must output the winning card. ### Example input and output: 4S 7D 12B 13B 2S -> 2S 5D 1D 5D 12S 3C -> 1D 3B 2C 4S 5S 7D -> 2C 12D 3S 11B 1B 7S -> 3S • As mentioned in chat, I think this is probably a duplicate of this challenge. Just adding so other people don't have to go looking. – FryAmTheEggman Jul 6 '16 at 21:15 # nth number that multiplies k equals its reverse Tags: , It's quite simple: given n and k, output the nth number such that the number multiplied by k equals the number with its digits reversed. Both input and output are positive numbers. The challenge originally comes from Mego, posted on my broken challenge. At first I used 4 instead of k, but based on my tests only the values 1 and 4 give any output, so I decided to put 4 instead of k; finally I put k back. But the challenge would be ruined by hardcoding, since the numbers can be built by inserting "9"*(n-1) into the middle of 2178, so no such loopholes will be permitted. I just posted here for further discussions, suggestions and improvements. • Those numbers are positive right? – Fatalize Jul 6 '16 at 7:52 • Please add some examples of expected outputs. – Fatalize Jul 6 '16 at 7:53 • Also you might want to prevent people from hardcoding 2178 in any fashion in their code so that they have to compute the numbers, because it seems they all are of the form 21X...X78 where X...X is a series of nines (except for the first one, which is 0). – Fatalize Jul 6 '16 at 7:56 • According to the community advises, I'm not allowed to prevent people use methods those work perfectly. – Ehsaan Jul 6 '16 at 8:23 • Let's wait to see what others think. I personally don't think it's very interesting if people are allowed to hardcode the "format" of those numbers. – Fatalize Jul 6 '16 at 8:26 • Me neither, I think the challenge isn't interesting at all. – Ehsaan Jul 6 '16 at 8:43 • I think there's no good way to prevent hardcoding. Maybe making "4" were an input parameter as well would make solutions actually search for an answer? – xnor Jul 6 '16 at 9:01 • @xnor You mean make 4 as k input? – Ehsaan Jul 6 '16 at 9:33 • @Ehsaan Yes, exactly. – xnor Jul 7 '16 at 9:09 • 9 works too: 1089 * 9 = 9801. – Neil Jul 10 '16 at 17:36 Write a program that can determine the median value of a read-only (static, const, immutable) sequence of unsorted numbers (array, list, stream) but minimises storage, without completely sacrificing speed.
The basic bracket is that if we copied all the values into a sorted list and then picked the middle one (or average of the middle pair), it would require storage of the whole sequence, so the storage would be 'n', and the performance would be O(n log n). The score is the total cost of finding the median of 1 bn values, divided by 1 bn, at a cost of 8 per value stored, 1 per comparison or numerical operation and 1 per read, for the worst case. Thus if our insertion sort costs exactly n*log2(n), the for 1 bn values the total score is 1 for the read, 29.8 for the sort + 8 for the storage, for a total of 37.8. If instead we skimmed the whole range to get the average (costing 1 for the read and 1 for the summation), we could then only store some portion of the range to sort; but then we would need a second pass to be sure that there were an equal number of values above and below this median (at the cost of another 2). Lowest score wins, low-level languages (C/C++/D) only so that we can count the actual operations. • 1. It's not clear to me what counts as a "value stored" or a "read", and I think there are probably gray areas with "comparison or numerical operation" too. (E.g. in C is if (foo) a comparison?) 2. "The score is the total cost ... for the worst case." For any non-trivial algorithm, the full calculation of this score risks being longer than the code. There's a reason that complexity theorists deal with Landau notation rather than exact operation counts. – Peter Taylor Jul 7 '16 at 13:39 # Reinventing the Modularization Wheel In a language of your choice, implement a function or language construct that imports another file of the same language and executes it, making exported values from that file available to the calling file. If one already exists, you may not use it in your implementation. For example, in Node, you would have to implement require() without using require(), even indirectly. In C, you would implement a function or construct equivalent to #include without using #include in the implementation. In Python, you would implement import. In client-side JavaScript, I suppose the closest equivalent would be <script src="..."></script>. So JavaScript implementations would be restricted to AJAX calls only, since <script> tags would not be allowed in the implementation. This is not to say that you aren't allowed to use the built-in import at all, but only use them in the implementation. The intention here is to reinvent the wheel. ## Requirements • Do not include the built-in modularization in any way in your import implementation. • Standard libraries only. • Byte-count includes the implementation itself, and any special changes that need to exist on the file being imported, if any. • The function or construct accepts a relative file path. As long as this is satisfied, you may extend the functionality of your modularization to have global imports, or even remote imports (like using a URL as input). • The imported file must have a construct for denoting values that must be exported. Only these values should be directly accessible from the calling file. • Using the built-in export function or construct of your language is acceptable, and if it is a built-in, it does not need to be included in your byte-count. • If your language does not have modularization, then implementing a mechanic for exporting should be included in your byte-count. • Document the usage of your function or construct. ### This is code-golf and the shortest answer in bytes wins! 
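To make the requirement more concrete, here is a rough, ungolfed Python sketch of the kind of mechanism the challenge asks for. The `__exports__` list convention is an assumption made for this illustration only; it is not part of the specification, and the sketch is not meant as a reference answer.

```python
from types import SimpleNamespace

def my_import(path):
    # Read the target file and execute it in its own, fresh namespace,
    # without touching the language's built-in import machinery.
    namespace = {}
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), namespace)
    # Expose only the names the imported file explicitly marked for export.
    exported = namespace.get("__exports__", [])
    return SimpleNamespace(**{name: namespace[name] for name in exported})

# Usage sketch: if helper.py defines greet() and sets __exports__ = ["greet"],
# then my_import("helper.py").greet() works, while unexported names stay hidden.
```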
• Perhaps just restrict this to languages which support modularization to avoid loopholes – Downgoat Jul 7 '16 at 18:45 • @Downgoat if people wanted to use a built-in for reading a plaintext file, and then use an eval()-like built-in to execute it in a way that exposes only denoted values (however you define that), I think it would be acceptable. What sort of loopholes do you foresee? – Patrick Roberts Jul 7 '16 at 18:49 # Print a Pilcrow Scarecrow Print the following ascii scarecrow using the pilcrow character ¶ ¶¶¶ ¶¶¶¶¶ ¶¶¶ ¶¶¶ ¶ ¶¶¶¶¶¶¶¶¶¶ ¶¶¶ ¶¶¶ ¶ ¶ ¶ ¶¶ ¶ ¶¶ ¶ ¶ ¶¶¶¶¶¶¶¶¶¶¶ • Padding must be with (space) and built with ¶ • Print to stdout • This is # It's time to unify! ## Introduction Wouldn't it be awesome if they whole world would be united and there would be no conflicts and disputes? Now while you can't unify nations, you certainly can unify expressions to resolve their unknown relation and conflicts. Your mission is simple: Unify the world (of expressions)! And of course, because you're lazy you want to do this with the least effort (read: code-length) possible. ## Specification ### Input Your input will be a unification problem. You can format it however you want and need, as long as you don't encode additional information to what is given in the standard / example format. Encoding the number of arguments per function into the input is allowed but not mandatory, you can also just derive this from the input. Example format: Your first input will be list of function symbols, which is represented as a list of pairs of strings and non-negative integers. Your second input will be a list of equalities (you may represent each as a string), which represent the unification problem. They will be represented as a list of strings as well. Anything which is not a parenthesis or an equality sign can be considered a variable. If the number of arguments is 0, parenthesis are omitted. Example input: [("f",1),("g",2),("h",3),("a",0)], [x=f((g(a,y)),y=h(g(f(a),z),f(z),a)] ### Output The output is either some falsy value or something representing a list of equalities. It is allowed to use the empty list to indicate a falsy value. ### What to do? You need to unify the inputs you got. In the end there must only be variables on the left side of the equality-signs if the you didn't encounter an error. If you did you need to report it (-> false or empty list). To do the unification, you can - but don't have to - use Martelli and Montanari's algorithm, which goes as follows: E is always the (complete) set of equalities except the current one x,y,z are variables, f,g,h are functions, t1,t2, ...,tn,s1,...,sn are arbitrary terms (compositions of functions and variables) {x=x} E => E, e.g. if you encounter two equivalent variables, discard {f(t1,...,tn)=f(s1,...,sn)} E => {t1=s1,t2=s2,...,tn=sn} E, e.g. if you encounter the same function on both sides, unify the arguments along with your rest {f(t1,...,tn}=g(s1,...,sn)} E => Error, if the symbols are different, you can't succeed {x=f(t1,...,tn)} E => {x=f(t1,...,tn)} E[x -> f(t1,...,tn)], e.g. if you see a variable equals a term, replace the variable with this term in all other expressions {x=f(t1,...,tn)} E => Error, e.g. if any of the t1,..,tn contain x at some point {f(t1,...,tn)=x} E => {x=f(t1,...,tn)} E, e.g. if you see a variable "naked" on the right side, swap the sides Two step-by-step examples are provided below additionally to the test cases. 
### Corner Cases You can get an empty list of function symbols, this means you have exclusively variables in the second input. The input list of equalities will never be empty, your code does not need to handle this case. ### Who wins? This is code-golf so the shortest answer in bytes wins! Standard rules apply of course. ## Test-cases All these test cases use the functions [("a",0),("b",0),("f",1),("g",1),("h",2)] [x=b] -> [x=b] [a=x] -> [x=a] [a=b] -> [] [y=f(x)] -> [y=f(x)] [x=f(x)] -> [] [f(x)=f(y)] -> [x=y] [f(x)=g(y)] -> [] [h(x,y)=h(a,b)] -> [x=a,y=b] [x=f(z),y=f(a),x=y] -> [x=f(a),y=f(a),z=a] [h(x,f(y))=z,z=h(f(y),v)] -> [x=f(y),v=f(y),z=h(f(y),f(y))] ### Step-By-Step Example Example 1: Test Case 9 [x=f(z),y=f(a),x=y] => (replace x in third equation with first x) [x=f(z),y=f(a),f(z)=y] => (replace y in third equation) [x=f(z),y=f(a),f(z)=f(a)] => (remove f's in third equation) [x=f(z),y=f(a),z=a] => (replace the z in the first expression) [x=f(a),y=f(a),z=a] Example 2: [f(g(a,x),g(y,b)=f(x,g(v,w)),f(x,g(v,w))=f(g(x,a),g(v,b))] => (remove f in second equation) [f(g(a,x),g(y,b)=f(x,g(v,w)),x=g(x,a),g(v,w)=g(v,b))] => (function symbol missmatch in equation 2) [] # create a golfed down regexp that matches all substrings inspired by Determine the "Luck" of a string where I found a way to golf almost 30 bytes at once (with a falling trick for that challenge, but I still like the idea). The word "lucky" contains 15 different substrings: • lucky • luck, ucky • luc, uck, cky • lu, uc, ck, ky • l, u, c, k, y Challenge Create a program or function that, for a given string s, creates the shortest possible regexp using basic PCRE syntax that matches and returns all substrings of s and nothing else. • code needs not to be case sensible • basic syntax means: alternatives, quantifiers, grouping and custom character classes (e.g. [abc]) • other features (assertions, backreferences, recursion etc.) may be used, but are not required to qualify • the result may include delimiters and modifiers The result for lucky would be l?ucky?|l?uc?|c?ky?|l|c|y. • is the description sufficient? • the challenge not too easy, not too hard? • any other hints you might have? • I will add test cases that expose possible bugs (like silly and digdug) • not sure yet if I will go for shortest code yet # Write a Gopher Interpreter This code golf challenge will task you with writing an interpreter for an esolang I created a while back called Gopher, Details on the language can be found Here # Pass Conditions This challenge requires you to create an Interpreter (Or you could go a step ahead and create a Compile/Transpiler) however for the code to pass as correct it must meet the following criteria • Take in a single input being the Gopher Code • Output the result of the Interpreted code • Invalid code does not need to be handled, however you may do so if you wish • As this is code-golf the smallest byte size wins # Example Input and Output Input: &++<'×<&÷+<^-<<×-<#!+<$@-<&@<×-<@++<@<.!<= Output: Hello World • Thanks for using the Sandbox! Anyway, you should add the relevant information on Gopher to the body of this post, as if your github account/repo dies or is changed people still need to be able to answer this question. – FryAmTheEggman Jul 11 '16 at 17:15 Having had a look, it seems there isn't a challenge for "Given any date, output the day of the week". Is that a challenge worth having? Something like "Given an input date, in the form dd/mm/yyyy, output the day of the week" Shortest code wins What do we think? 
perhaps this already exists and I didn't find it. • Duplicate – AdmBorkBork Jul 11 '16 at 14:04 • Glad I checked! – Matt Jul 11 '16 at 14:05 # Golf your way from (inc|dec)rements to the basic math operations Write five different functions or programs that do addition, subtraction, multiplication, division and modulo with integers by only using increments, decrements, loops/recursion and comparisons. • Assume division & modulo will never receive 0 or negative integers as the divisor/modulus. • Modulo's result has the sign of its dividend. • Division truncates its quotient, e.g divide(11, 4) returns 2 and divide(-5, 3) returns -1. • Programs must print the result to STDOUT. Functions must return the result. • Your five functions/programs may invoke each other. • All functions/programs must support 32-bit signed integers, i.e everything between -231 and 231-1 (inclusive). Overflow is allowed, i.e it's OK if add(2147483647, 1) returns -2147483648. • Explicitly adding/subtracting 1 to/from numbers is allowed, in case you use a programming language that doesn't have built-ins for incrementing and decrementing. • Shortest program wins as long as it doesn't exploit standard loopholes! I seriously have no idea how to make test cases for this. • Why not one function that returns all of those? You should also specify what you mean by divisions and modulo as they differ slightly from language to language. (E.g. what is -2 mod 5? and what is -1/2?) And only doing increments/decrements, loops/recursion and compraisions is also quite vague. – flawr Jul 10 '16 at 16:37 • I don't think test cases would really be necessary since it's just basic arithmetic. You can easily tell if your output is correct or not. Also, I'm assuming that division will truncate the quotient since there isn't really any way to do decimals in this fashion, but that should probably be specified. – Business Cat Jul 11 '16 at 14:18 • I already did this because I was bored... – univalence Nov 19 '16 at 12:52 ## ASCII to Unicode equation beautifier You may well be used to typing equations in ASCII, but with the advent of Unicode we can spruce them up a bit. We can fix • Powers (numeric superscripts only) • Numeric subscripts • Mathematical signs (-, *, / ^ → -, ×, ÷, ↑) Examples: x^3 - 1 = (x - 1)(x^2 + x + 1) → x³ − 1 = (x − 1)(x² + x + 1) g_0 = 3^^3^^3 -= 3^(3^3) → g₀ = 3↑↑3↑↑3 = 3↑(3³) 800*600 → 800×600 1/x → 1÷x You may assume that all digits directly after a ^ or _ are meant to be super/subscripts (and the ^ or _ to be removed) and that all the mathematical signs are to be replaced wherever they appear. This is , so the shortest solution wins. • This seems to be two questions crammed into one. The first one is the superscript and subscript transformation, which is mildly interesting; and the second one is the straight substitution of various characters for others, which is completely boring apart from the ambiguity it introduces in the interpretation of ^. I suggest ditching the substitution of minus, times, and divide symbols and giving explicit lists (with copyable characters and Unicode code points in decimal and hex) of the superscript, subscript, and up-arrow characters. – Peter Taylor Jul 14 '16 at 6:39 # Autotune a chord Auto-Tune is a pitch correction program which alters the pitch without changing the length. It can be used to fix off-pitch chords in music, which is good because I have an out of tune piano. 
The goal of this challenge is given some input waveform which contains a single chord played on my piano, tune each note to the nearest equally tempered note found on a standard piano (see Input for more details). # Input The input is something which looks like a time-domain audio sample input containing a single chord being played. All data is sampled at 192kHz, with 16-bit PCM (little endian integer), mono channel. The input may come from any source desired (file io, stdio, function parameter, etc.). # Output The output of your code should be something which looks like a time-domain audio sample containing the tuned chord. It does not need to have the same sample rate or datapoint format as the input, but must be the same length in real time as the original sample (or as close as possible). The output may be to any source desired (file io, stdio, function parameter, etc.). # Examples See this github repo for various inputs and outputs. The provided examples have inputs/outputs in an uncompressed wav file. Feel free to re-encode/gut the data for your inputs. # Scoring This is code golf; shortest code wins. Standard loopholes apply. You may use any libraries/builtins so long as they were not designed specifically for performing pitch correction. Main concern: This challenge seems potentially too difficult, so one alternative I've been considering is changing the piano samples into sine waves at the fundamental frequencies (avoids issues with amplitude decay/harmonics). An even simpler challenge might be to give inputs in the frequency domain (list of fundamental frequencies), though I'm not sure that would make for an interesting challenge as it seems almost too easy at that point. • It seems very difficult to determine what outputs are considered correct. – feersum Jul 14 '16 at 7:44 • yeah, that thought had crossed my mind as well. I've considered measures based on the delta of the FFT of user output/expected output, but I'm not sure this is necessarily a good measure of "in tune". – helloworld922 Jul 14 '16 at 7:47 • I suspect that the biggest technical challenge would be phase. The harmonics of each string in isolation should be in phase, because they all derive from a single hammer strike, but the keys of the chord are probably not all struck at exactly the same time, and there will be resonant driving interactions between them which will complicate the signal. I suggest that you explicitly state that people can ignore this issue. – Peter Taylor Jul 14 '16 at 13:39 This will be a challenge. Additional tags are , and . # How fast is your Stack Exchange community? tl;dr Your task is to find how fast a Stack Exchange community reacts. "How fast" is here the average of the time elapsed until the first answer or the closing of the question. Input • the Stack Exchange site's name, e.g. stackoverflow, codegolf, codereview etc. • optionally the Stack Exchange API URL: https://api.stackexchange.com/2.2/ Requirements • Calculate the average time it takes until the first answer or closing of the question. • Take the 1000 latest questions into account, e.q. ten API requests with 100 items each. Output • Output the average time in minutes and seconds, like 01:23 or 1:23. • Run your program at least against stackoverflow, codegolf and code review and show the results. • Feel free to add results for your other favorite communities as well. Boilerplate • You can write a program or a function. If it is an anonymous function, please include an example of how to invoke it. 
• This is so shortest answer in bytes wins. • Standard loopholes are disallowed. • Leading/trailing whitespaces/newlines are fine. • How do you count unanswered and unclosed questions? Also, I don't know about the API, but there might be problems with deleted answers. I think you should probably write a reference implementation before posting this. – FryAmTheEggman Jul 14 '16 at 13:10 • @FryAmTheEggman Thanks a lot – all good points. Didn't think that there might be questions that are unclosed and unanswered. Will check the API whether deleted even will be send. Good point with the reference implementation – maybe in JavaScript that it can be run as a stack snippet. What do you think in general about the challenge idea? Boring? Interesting? Too complicated? – insertusernamehere Jul 14 '16 at 13:15 • It's about doing basically one task, so I don't think it is complicated. I think the results are probably more interesting than the challenge (there are only so many ways to average something and to parse html), but it makes sense and isn't trivial, so I wouldn't say it's boring. Seems fine overall. Also note internet, date and, I suppose, math. – FryAmTheEggman Jul 14 '16 at 13:24 • @FryAmTheEggman Thanks again for your feedback and the tag suggestions. I also think that the results are the interesting part. I wanted to try a popularity contest in the first place because of that. But I couldn't come up with the necessary criteria. :) – insertusernamehere Jul 14 '16 at 13:30 # Convert a BMP image to grayscale Image manipulation is a great way to exercise and increase your skills. In my opinion it's also very interesting. # What you must do? The objective of the exercise is simple: convert a colour BMP image into a greyscale image. You can use any language; the most appreciated answers will be those that don't use a library. Source image: http://www.mediafire.com/convkey/c491/p7aya9cxafvfc91zg.jpg Converted image: http://www.mediafire.com/convkey/3903/rcigd79pkwd12qczg.jpg?size_id=3 • What is the winning criterion? It is code-golf, popularity-contest, other criterion? – TuxCrafting Jul 15 '16 at 10:30 • Also, the You can use any language is unnecessary, it's implied here. And you can use ![](<image url>) to show the images. – TuxCrafting Jul 15 '16 at 10:33 • This is very underspecified at present. 1. What weights should be used in the conversion from RGB to greyscale? 2. What bit depths should be supported? 3. Is it required to support all of BMP's features (e.g. ICC colour profiles, CMYK, JPEG, PNG)? If not, what is the minimum feature set which must be supported? – Peter Taylor Jul 15 '16 at 10:53 • @Blind To mention someone, you can use @<username>, and please add the tags to your post ([tag:<tag name>]) – TuxCrafting Jul 15 '16 at 12:43 • @TùxCräftîñg , It's a code-golf. @ Peter Taylor , It's equal, you can use that weight you want. 2.see 1st. 3.just support BMP. – Blind Jul 15 '16 at 16:27 • That doesn't actually answer questions 2 or 3. – Peter Taylor Jul 15 '16 at 19:32 • Just to let you know, you can only @mention one person per comment, and it won't work with a space between the @ and the name. – trichoplax Jul 20 '16 at 16:20 # Do I have an emoji? Given an input string in your language, return truthy/falsey if the input contains a valid Unicode emoji character. ## What is an Emoji? The word emoji comes from the Japanese: 絵 (e ≅ picture) 文 (mo ≅ writing) 字 (ji ≅ character). Emojis are pictorial symbols used to represent feelings, actions, or objects.
For this challenge, use the Full Emoji Data list provided by Unicode as a reference to determine which characters are valid Emojis. Sample test cases: "" -> 0 "💩" -> 1 "hello💩" -> 1 "hello" -> 0 "!±≡𩸽" -> 0 Discussion: This seems trivial, but I noticed we didn't have an emoji detection challenge. There might be a concern about the encoding of the input string, but reading the linked meta posts about Strings I feel that this challenge can use whatever String format the language used in the answer supports. The acceptable output for booleans is also up for discussion. Do we have a meta post on what output formats are acceptable for booleans? • One question: What exactly is an emoji? I think it should be specified in the challenge. – user48538 Jul 18 '16 at 16:38 • See meta.ppcg.lol/q/2190, just say truthy/falsey. – LegionMammal978 Jul 18 '16 at 16:39 • @zyabin101 can I use Unicode's emoji list as a list of valid emoji characters for this challenge? – JAL Jul 18 '16 at 16:44 • Up to you. [filler text] – user48538 Jul 18 '16 at 16:51 • I've attempted to clarify what an emoji is, at least for this challenge. Hopefully this will make this question more clear and a better fit for the site. – JAL Jul 19 '16 at 2:39 • A source which gives actual ranges would be more convenient for people writing answers, although unicode.org/Public/emoji/3.0//emoji-data.txt isn't entirely consistent with the other lists. – Peter Taylor Jul 19 '16 at 9:52 ## Trim trailing spaces in less than O(n²) time Since s/\s+$// runs in O(n²) time, Stack Overflow needs to replace it with something faster. Please write a code snippet for them. Your score will be the number of bytes in your submission, multiplied by the time taken to process a string of 1000 non-spaces with 1,000,000 leading and trailing spaces, divided by 1000 times the time taken to process a string of 1-non space with 1,000 leading and trailing spaces. (In other words, if your code runs in O(n) time then this should cancel out.) • The fancy scoring seems like it might be confusing/hard to implement. Why not just restrict the complexity to be less than O(n²) like you suggest in the title? – FryAmTheEggman Jul 21 '16 at 13:18 ## Test Cases The test cases given below are the output for a program/function using the shortest "wrapping" version of the constants at http://esolangs.org/wiki/Brainfuck_constants. Input Output Brainfuck program's output 72.>105.>33. -[>+<-------]>-.>+[->-[<]>--]>.>>-[-[-<]>>+<]>. Hi! 255>10++>65>255<+[-<+]->[-+[->+]-<.+[-<+]->] ->++++++++++++>>+[+[<]>>+<+]>>-<+[-<+]->[-+[->+]-<.+[-<+]->] AAAAAAAAAAAA >0>48-->255<[>>86.[-]+[-<+]-<-] >>-[>+<-----]>----->-<[>>-[>+<---]>+.[-]+[-<+]-<-] VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV >,[>,]<[<]>[.>] >,[>,]<[<]>[.>] (cat - outputs the input) ## Scoring Your score is your byte count plus the average length of your program/function's output for each number. For example, if a 30 byte program's output had an average length of 13.5, its score would be 30 + 13.5 = 43.5. ## Sandbox Questions Is it tagged correctly? Should this be instead of ? • To prevent hardcoding, add the length of the output to their score – Nathan Merrill Jul 31 '16 at 13:52 • I don't understand "the code you generate must also be as small as possible." The spec requires using the "shortest Brainfuck representation according to esolangs.org/wiki/Brainfuck_constants", so the code generated should be identical for every valid answer, surely? – Peter Taylor Aug 1 '16 at 14:21 • @PeterTaylor Oops! 
Forgot to remove all references to that URL. I'll fix that now. It's meant to be optional to use it, a previous version of the spec required it. – Copper Aug 1 '16 at 14:37 # Convert hexagonal coordinates to index Your job is to, given the size of the hexagon and a pair of axial coordinates, return the index as if all the rows were laid out side by side. Here's an example mapping for size 3: (q,r), 3 (0,0) (1,0) (2,0) (-1,1) (0,1) (1,1) (2,1) (-2,2) (-1,2) (0,2) (1,2) (2,2) (-2,3) (-1,3) (0,3) (1,3) (-2,4) (-1,4) (0,4) maps to 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 Here's the formula I found (could be improved): i=index s=size i = q + sum( ( 2 * s - 0.5 - abs( x - s + 0.5 ) ) for x in 1..r ) Test cases (0-based indexing): (q, r, s) -> i (0, 0, 1) -> 0 (0, 0, 50) -> 0 (0, 3, 3) -> 14 (-3, 5, 4) -> 28 (5, 2, 12) -> 18 ` Meta notes: • Should I include links to axial coordinates and centered hexagonal numbers? • Or, instead, should I explain axial coordinates better? • Should I include the formula I came up with? • More test cases, or are those fine? • More exposition? • I'm also planning to do a challenge the other way around, is that ok? • This technically isn't related to Hexagony (though, you can keep the reference if you'd like). I personally wouldn't include the formula, but that's my opinion. The reverse challenge seems like a good one as well. – Nathan Merrill Aug 3 '16 at 15:58 # Rules • Your program must take no input and print this text. • You can have trailing newlines, and spaces after lines. • You must not use a builtin or load the text for an external resource. # Score This is , shortest answer in bytes wins. Did you guess what was the text? • 10/10 very creative and interesting – Leaky Nun Aug 4 '16 at 11:17 • What is special about the text and means that the answers won't use the exact same techniques as previous kolmogorov-complexity questions? – Peter Taylor Aug 4 '16 at 13:35
2020-11-28 03:14:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4027879536151886, "perplexity": 1948.7525574999613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194982.45/warc/CC-MAIN-20201128011115-20201128041115-00065.warc.gz"}
https://mathoverflow.net/questions/49651/what-kind-of-colimits-are-preserved-by-a-certain-yoneda-embedding/49667
# What kind of colimits are preserved by a certain Yoneda embedding? (This question is related to this one) Let $k$ be a field and consider the category $Sch/k$ of schemes over $k$, say also separable and of finite type. The Yoneda embedding $$Y:Sch/k \to Pre(Sch/k)$$ does not respect colimits but if you factorize $Y$ through $$Y:Sch/k \to Shv(Sch/k)$$ with respect to the Zariski Grothendieck topology (given by the Zariski open immersions), it does respect some colimits (if you want you may replace $Sch/k$ by the category of commutative algebras over $k$ of finite type). In particular a pushout of the form $$U~\xleftarrow{f}~ U\cap V ~\xrightarrow{g}~ V$$ in $Sch/k$ where $f$ and $g$ are open immersions is also a pushout in $Shv(Sch/k)$. The pushout of two closed immersions $$B\leftarrow A \rightarrow C$$ in $Sch/k$ exists in general but let's consider the situation where $A,B,C$ are affine because in this case the existence is 'immediate' (whatever that is) by the antiequivalence to $k$-algebras. For example the coordinate cross $Spec k[X,Y]/(XY)$ is the pushout of $\mathbb{A^1}\leftarrow Spec k\to \mathbb{A^1}$. My question is What happens to these 'closed' pushouts under the Yoneda embedding into Shv(Sch/k)? Are they preserved or is there an affine counterexample? Edit: What happens if one takes sheaves with respect to the etale topology? - I think the title is a little too general, since the Yoneda embedding is defined much more generally than for sheaves. –  arsmath Dec 16 '10 at 16:19 Ok, I've edited it but it sounds even worse now. Maybe someone likes to change it. –  roger123 Dec 16 '10 at 17:01 I think Johnathan's argument still works without change if you specify $k$ to be separably closed. –  S. Carnahan Mar 25 '11 at 8:25 No, the embedding of schemes in the big Zariski site does not preserve colimits. It is possible to see this in the example you suggest by computing the "tangent space" at the origin. Let $X$ be the colimit in the category of schemes; the tangent space of $X$ at the origin is a $2$-dimensional vector space. Let $Y$ be the colimit in the category of sheaves; I claim that the "tangent space" of $Y$ at the origin is the union of the coordinate axes inside of the tangent space of $X$. The tangent space of $Y$ is the space of sections of $Y$ over $Z := \mathrm{Spec}\: k[\epsilon] / \epsilon^2$. The Zariski topology is trivial on $Z$, so sections of $Y$ over $Z$ are the same as sections of the presheaf colimit over $Z$. The "tangent space" of the presheaf colimit is the union of the tangent spaces of the two copies of $\mathbf{A}^1$.
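For concreteness, the Zariski tangent space computation behind the claim that $X$ has a two-dimensional tangent space at the origin can be spelled out as follows (this verification is added here for illustration and is not part of the original answer). With $R = k[x,y]/(xy)$ and $\mathfrak{m} = (x,y) \subset R$, $$\mathfrak{m}^2 = (x^2, xy, y^2) = (x^2, y^2), \qquad T_0 X \cong (\mathfrak{m}/\mathfrak{m}^2)^{\ast} = \langle \bar{x}, \bar{y} \rangle^{\ast} \cong k^2 .$$ By contrast, a section of the presheaf colimit over $Z = \mathrm{Spec}\: k[\epsilon]/\epsilon^2$ factors through one of the two copies of $\mathbf{A}^1$, so it corresponds to a tangent vector of the form $(a,0)$ or $(0,b)$, i.e. it lies on one of the coordinate axes inside $T_0 X$.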
2015-03-27 13:48:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9201897382736206, "perplexity": 110.05737378896927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296456.82/warc/CC-MAIN-20150323172136-00101-ip-10-168-14-71.ec2.internal.warc.gz"}
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Statistical_Thermodynamics_(Jeschke)/05%3A_Quantum_Ensembles/5.02%3A_Quantum_and_Classical_Statistics
# 5.2: Quantum and Classical Statistics ## Types of Permutation Symmetry Classical particles are either distinguishable or non-distinguishable, a difference that influences the relation between the system partition function and the molecular partition function (Section [s_from_z]). Quantum particles are special. They are always indistinguishable, but there exist two types that behave differently when two particles are permuted. For bosons, the wavefunction is unchanged on such permutation, whereas for fermions the wavefunction changes sign. This sign change does not make the particles distinguishable, as absolute phase of the wavefunction does not correspond to an observable. However, it has important consequences for the population of microstates. Two (or more) bosons can occupy the same energy level. In the limit $$T \rightarrow 0$$ they will all occupy the ground state and form a Bose-Einstein condensate. Bosons are particles with integer spin, with the composite boson $$^{4}\mathrm{He}$$ (two protons, two neutrons, two electrons) probably being the most famous example. In contrast, two fermions (particles with half-integer spin) cannot occupy the same state, a fact that is known as Pauli exclusion principle. Protons, neutrons, and electrons are fermions (spin 1/2), whereas photons are bosons (spin 1). This difference in permutation symmetry influences the distribution of particles over energy levels. The simplest example is the distribution of two particles to two energy levels $$\epsilon_\mathrm{l}$$ (for ’left’) and $$\epsilon_\mathrm{r}$$ (for ’right’) . For distinguishable classical particles four possible configurations exist: 1. $$\epsilon_\mathrm{l}$$ is doubly occupied 2. $$\epsilon_\mathrm{l}$$ is occupied by particle A and $$\epsilon_\mathrm{r}$$ is occupied by particle B 3. $$\epsilon_\mathrm{l}$$ is occupied by particle B and $$\epsilon_\mathrm{r}$$ is occupied by particle A 4. $$\epsilon_\mathrm{r}$$ is doubly occupied. For bosons and for indistinguishable classical particles as well, the second and third configuration above cannot be distinguished. Only three configurations exist: 1. $$\epsilon_\mathrm{l}$$ is doubly occupied 2. $$\epsilon_\mathrm{l}$$ is occupied by one particle and $$\epsilon_\mathrm{r}$$ is occupied by one particle 3. $$\epsilon_\mathrm{r}$$ is doubly occupied. For fermions, the first and third configuration of the boson case are excluded by the Pauli principle. Only one configuration is left: 1. $$\epsilon_\mathrm{l}$$ is occupied by one particle and $$\epsilon_\mathrm{r}$$ is occupied by one particle. Since the number of configurations enters into all probability considerations, we shall find different probability distributions for systems composed of bosons, fermions, or distinguishable classical particles. The situation is most transparent for an ideal gas, i.e. $$N$$ non-interacting point particles that have only translational degrees of freedom . For such a system the spectrum of energy levels is continuous. ## Bose-Einstein Statistics We want to derive the probability distribution for the occupation of energy levels by bosons. To that end, we first pose the question how many configurations exist for distributing $$N_i$$ particles to $$A_i$$ energy levels in the interval between $$\epsilon_i$$ and $$\epsilon_i + \mathrm{d}\epsilon$$. Each level can be occupied by an arbitrary number of particles. We picture the problem as a common set of particles $$P_k \ (k = 1 \ldots N_i)$$ and levels $$L_k \ (k = 1 \ldots A_i)$$ that has $$N_i+A_i$$ elements. 
Now we consider all permutations in this set and use the convention that particles that stand left from a level are assigned to this level. For instance, the permutation $$\{P_1,P_2,L_1,P_3,L_2,L_3\}$$ for three particles and three levels denotes a state where level $$L_1$$ is occupied by particles $$P_1$$ and $$P_2$$, level $$L_2$$ is occupied by particle $$P_3$$ and level $$L_3$$ is empty. With this convention the last energy level is necessarily the last element of the set (any particle standing right from it would not have an associated level), hence only $$(N_i+A_i-1)!$$ such permutations exist. Each permutation also encodes a sequence of particles, but the particles are indistinguishable. Thus we have to divide by $$N_i!$$ in order to not double count configurations that we cannot distinguish. It also does not matter in which sequence we order the levels with their associated subsets of particles. Without losing generality, we can thus consider only the sequence with increasing level energy, so that the level standing right (not included in the number of permutations $$(N_i+A_i-1)!$$) is the level with the highest energy. For the remaining $$A_i-1$$ lower levels we have counted $$(A_i-1)!$$ permutations, but should have counted only the properly ordered one. Hence, we also have to divide by $$(A_i-1)!$$. Therefore, the number of configurations and thus the number of microstates in the interval between $$\epsilon_i$$ and $$\epsilon_i + \mathrm{d}\epsilon$$ is $C_i = \frac{\left( N_i + A_i - 1 \right)!}{N_i!\left(A_i-1\right)!} \ .$ The configurations in energy intervals with different indices $$i$$ are independent of each other. Hence, the statistical weight of a macrostate is $\Omega = \prod_i \frac{\left( N_i + A_i - 1 \right)!}{N_i!\left(A_i-1\right)!}$ As the number of energy levels is, in practice, infinite, we can choose the $$A_i$$ sufficiently large for neglecting the 1 in $$A_i - 1$$. In an exceedingly good approximation we can thus write $\Omega = \prod_i \frac{\left( N_i + A_i\right)!}{N_i! A_i!} \ .$ The next part of the derivation is the same as for the Boltzmann distribution in Section [subsection:Boltzmann], i.e., it relies on maximization of $$\ln \Omega$$ using the Stirling formula and considering the constraints of conserved total particle number $$N = \sum_i N_i$$ and conserved total energy of the system. The initial result is of the form $\frac{N_i}{A_i} = \frac{1}{B e^{-\beta \epsilon_i} - 1} \ ,$ where $$B$$ is related to the Lagrange multiplier $$\alpha$$ by $$B = e^{-\alpha}$$ and thus to the chemical potential by $$B = e^{-\mu/(k_\mathrm{B} T)}$$. After a rather tedious derivation using the definitions of Boltzmann entropy and $$(\partial u/ \partial s)_V = T$$ we can identify $$\beta$$ with $$-1/k_\mathrm{B} T$$. We refrain from reproducing this derivation here, as the argument is circular: It uses the identification of $$k$$ with $$k_\mathrm{B}$$ in the definition of Boltzmann entropy that we had made earlier on somewhat shaky grounds. We accept the identification of $$|\beta|$$ with $$1/k_\mathrm{B} T$$ as general for this type of derivation, so that we finally have $\frac{N_i}{A_i} = \frac{1}{B e^{\epsilon_i/k_\mathrm{B} T} - 1} \ . \label{eq:Bose_Einstein_stat}$ Up to this point we have supposed nothing else than a continuous, or at least sufficiently dense, energy spectrum and identical bosons. To identify $$B$$ we must have information on this energy spectrum and thus specify a concrete physical problem.
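Before specializing to a concrete system, a small numerical check of the configuration counting derived above may be helpful (a sketch only; the particular values of $$N_i$$ and $$A_i$$ are arbitrary illustrations, not taken from the text):

```python
from math import comb

def bose_weight_exact(N, A):
    # (N + A - 1)! / (N! (A - 1)!) -- the exact number of configurations
    # for N indistinguishable bosons distributed over A levels
    return comb(N + A - 1, N)

def bose_weight_approx(N, A):
    # (N + A)! / (N! A!) -- the approximation obtained by neglecting the 1
    return comb(N + A, N)

print(bose_weight_exact(2, 2))   # 3, matching the two-particle, two-level example
print(bose_weight_approx(100, 10**4) / bose_weight_exact(100, 10**4))
# -> 1.01, i.e. the factor changes very little once A_i >> N_i
```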
When using the density of states for an ideal gas consisting of quantum particles with mass $$m$$ in a box with volume $$V$$ (see Section [section:gas_translation] for derivation), $D(\epsilon) = 4 \sqrt{2} \pi \frac{V}{h^3} m^{3/2} \epsilon^{1/2} \ , \label{eq:density_of_states_ideal_quantum_gas}$ we find, for the special case $$B e^{\epsilon_i/k_\mathrm{B} T} \gg 1$$, $B = \frac{\left( 2 \pi m k_\mathrm{B} T \right)^{3/2}}{h^3} \cdot \frac{V}{N} \ . \label{eq:B_quantum_gas}$ ## Fermi-Dirac Statistics The number $$N_i$$ of fermions in an energy interval with $$A_i$$ levels cannot exceed $$A_i$$. The number of allowed configurations is now given by the number of possibilities to select $$N_i$$ out of $$A_i$$ levels that are populated, whereas the remaining levels remain empty. As each level can exist in only one of two conditions, populated or empty, this is a binomial distribution problem as we have solved in Section [binomial_distribution]. In Equation \ref{eq:N_over_n}) we need to substitute $$N$$ by $$A_i$$ and $$n$$ by $$N_i$$. Hence, the number of allowed configurations in the energy interval between $$\epsilon_i$$ and $$\epsilon_i + \Delta \epsilon_i$$ is given by $C_i = \frac{A_i!}{N_i! \left(A_i - N_i \right)!}$ and, considering mutual independence of the configurations in the individual energy intervals, the statistical weight of a macrostate for fermions is $\Omega = \prod_i \frac{A_i!}{N_i! \left(A_i - N_i \right)!} \ .$ Again, the next step of the derivation is analogous to derivation of the Boltzmann distribution in Section [subsection:Boltzmann] . We find $\frac{N_i}{A_i} = \frac{1}{B e^{\epsilon_i/k_\mathrm{B}T} + 1} \ . \label{eq:Fermi_Dirac_stat}$ For the special case $$B e^{\epsilon_i/k_\mathrm{B} T} \gg 1$$, $$B$$ is again given by Equation \ref{eq:B_quantum_gas}. Comparison of Equation \ref{eq:Fermi_Dirac_stat} with Equation \ref{eq:Bose_Einstein_stat} reveals as the only difference the sign of the additional number 1 in the denominator on the right-hand side of the equations. In the regime $$B e^{\epsilon_i/k_\mathrm{B} T} \gg 1$$, for which we have specified $$B$$, this difference is negligible. It is therefore of interest when this regime applies. As $$\epsilon_i \ge 0$$ in the ideal gas problem, we have $$e^{\epsilon_i/k_\mathrm{B} T} \ge 1$$, so that $$B \gg 1$$ is sufficient for the regime to apply. Wedler and Freund have computed values of $$B$$ according to Equation \ref{eq:B_quantum_gas} for the lightest ideal gas, H$$_2$$, and have found $$B \gg 1$$ for $$p = 1$$ bar down to $$T = 20$$ K and at ambient temperature for pressures up to $$p = 100$$ bar. For heavier molecules, $$B$$ is larger under otherwise identical conditions. Whether a gas atom or molecule is a composite boson or fermion thus does not matter, except at very low temperatures and very high pressures. However, if conduction electrons in a metal, for instance in sodium, are considered as a gas, their much lower mass and higher number density $$N/V$$ leads to $$B \ll 1$$ at ambient temperature and even at temperatures as high as 1000 K. Therefore, a gas model for conduction electrons (spin 1/2) must be set up with Fermi-Dirac statistics. ## Maxwell-Boltzmann Statistics In principle, atoms and molecules are quantum objects and not classical particles. This would suggest that the kinetic theory of gases developed by Maxwell before the advent of quantum mechanics is deficient. 
However, we have already seen that for particles as heavy as atoms and molecules and number densities as low as in gases at atmospheric pressure or a bit higher, the difference between Bose-Einstein and Fermi-Dirac statistics vanishes, unless temperature is very low. This suggests that, perhaps, classical Maxwell-Boltzmann statistics is indeed adequate for describing gases under common experimental conditions. We assume distinguishable particles. Each of the $$N_i$$ particles can be freely assigned to one of the $$A_i$$ energy levels. All these configurations can be distinguished from each other, as we can picture each of the particles to have an individual tag. Therefore, $C_i = (A_i)^{N_i}$ configurations can be distinguished in the energy interval between $$\epsilon_i$$ and $$\epsilon_i + \Delta \epsilon_i$$. Because the particles are distinguishable (’tagged’), the configurations in the individual intervals are generally not independent from each other, i.e. the total number of microstates does not factorize into the individual numbers of microstates in the intervals. We obtain more configurations than that because we have the additional choice of distributing the $$N$$ ’tagged’ particles to $$r$$ intervals. We have already solved this problem in Section [subsection:Boltzmann], the solution is Equation \ref{eq:N_onto_r}). By considering the additional number of choices, which enters multiplicatively, we find for the statistical weight of a macrostate \begin{align} \Omega & = \frac{N!}{N_0! N_1! \ldots N_{r-1}!}\cdot A_0^{N_0} \cdot A_1^{N_1} \cdot \ldots A_{r-1}^{N_{r-1}} \\ & = N! \prod_i \frac{A_i^{N_i}}{N_i!} \ .\end{align} It appears that we have assumed a countable number $$r$$ of intervals, but as in the derivations for the Bose-Einstein and Fermi-Dirac statistics, nothing prevents us from making the intervals arbitrarily narrow and their number arbitrarily large. Again, the next step in the derivation is analogous to derivation of the Boltzmann distribution in Section [subsection:Boltzmann] . All the different statistics differ only in the expressions for $$\Omega$$, constrained maximization of $$\ln \Omega$$ uses the same Lagrange ansatz. We end up with $\frac{N_i}{A_i} = \frac{1}{B e^{\epsilon_i/ k_\mathrm{B} T}} . \label{eq:Maxwell_Boltzmann_stat}$ Comparison of Equation \ref{eq:Maxwell_Boltzmann_stat} with Equation \ref{eq:Bose_Einstein_stat} and \ref{eq:Fermi_Dirac_stat} reveals that, again, only the 1 in the denominator on the right-hand side makes the difference, now it is missing. In the regime, where Bose-Einstein and Fermi-Dirac statistics coincide to a good approximation, both of them also coincide with Maxwell-Boltzmann statistics. There exist two caveats. First, we already know that the assumption of distinguishable particles leads to an artificial mixing entropy for two subsystems consisting of the same ideal gas or, in other words, to entropy not being extensive. This problem does not, however, influence the probability distribution, it only influences scaling of entropy with system size. We can solve it by an ad hoc correction when computing the system partition function from the molecular partition function. Second, to be consistent we should not use the previous expression for $$B$$, because it was derived under explicit consideration of quantization of momentum.17 However, for Maxwell-Boltzmann statistics $$B$$ can be eliminated easily. 
With $$\sum_i N_i = N$$ we have from Equation \ref{eq:Maxwell_Boltzmann_stat} $N = \frac{1}{B} \sum_i A_i e^{-\epsilon_i/k_\mathrm{B} T} \ ,$ which gives $\frac{1}{B} = \frac{N}{\sum_i A_i e^{-\epsilon_i/k_\mathrm{B} T}} \ .$ With this, we can express the distribution function as $P_i = \frac{N_i}{N} = \frac{A_i e^{-\epsilon_i/k_\mathrm{B} T}}{\sum_i A_i e^{-\epsilon_i/k_\mathrm{B} T}} \ . \label{eq:Maxwell_Boltzmann}$ Comparison of Equation \ref{eq:Maxwell_Boltzmann} with the Boltzmann distribution given by Equation \ref{eq:Boltzmann_distribution} reveals the factors $$A_i$$ as the only difference. Thus, the probability distribution for Maxwell-Boltzmann statistics deviates from the most common form by the degree of degeneracy $$A_i$$ of the individual levels. This degeneracy entered the derivation because we assumed that within the intervals between $$\epsilon_i$$ and $$\epsilon_i + \Delta \epsilon_i$$ several levels exist. If $$\Delta \epsilon_i$$ is finite, we speak of near degeneracy. For quantum systems, degeneracy of energy levels is a quite common phenomenon even in small systems where the energy spectrum is discrete. In order to describe such systems, the influence of degeneracy on the probability distribution must be taken into account. ##### Concept $$\PageIndex{1}$$: Degeneracy In quantum systems with discrete energy levels there may exist $$g_i$$ quantum states with the same energy $$\epsilon_i$$ that do not coincide in all their quantum numbers. This phenomenon is called degeneracy and $$g_i$$ the degree of degeneracy. A set of $$g_i$$ degenerate levels can be populated by up to $$g_i$$ fermions. In the regime, where Boltzmann statistics is applicable to the quantum system, the probability distribution considering such degeneracy is given by \begin{align} & P_i = \frac{N_i}{N} = \frac{g_i e^{-\epsilon_i/k_\mathrm{B}T}}{\sum_i g_i e^{-\epsilon_i/k_\mathrm{B}T}} \label{eq:Boltzmann_with_degeneracy}\end{align} and the molecular partition function by \begin{align} & Z = \sum_i g_i e^{-\epsilon_i/k_\mathrm{B}T} \ .\end{align} The condition that degenerate levels do not coincide in all quantum numbers makes sure that the Pauli exclusion principle does not prevent their simultaneous population with fermions. At this point we can summarize the expected number of particles with chemical potential $$\mu$$ at level $$i$$ with energy $$\epsilon_i$$ and arbitrary degeneracy $$g_i$$ for Bose-Einstein, Fermi-Dirac, and Boltzmann statistics: \begin{align} N_i & = \frac{g_i}{e^{(\epsilon_i - \mu)/(k_\mathrm{B} T)} - 1} \ & \mathrm{Bose-Einstein} \ \mathrm{statistics} \\ N_i & = \frac{g_i}{e^{(\epsilon_i - \mu)/(k_\mathrm{B} T)} + 1} \ & \mathrm{Fermi-Dirac} \ \mathrm{statistics} \\ N_i & = \frac{g_i}{e^{(\epsilon_i - \mu)/(k_\mathrm{B} T)}} \ & \mathrm{Boltzmann} \ \mathrm{statistics} \ .\end{align} Note that the chemical potential $$\mu$$ in these equations is determined by the condition $$N = \sum_i N_i$$. The constant $$B$$ in the derivations above is given by $$B = e^{-\mu/(k_\mathrm{B} T)}$$. If $$N$$ is not constant, we have $$\mu = 0$$ and thus $$B=1$$. 5.2: Quantum and Classical Statistics is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Gunnar Jeschke via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2022-05-23 07:47:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.907008945941925, "perplexity": 346.07772857250484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662556725.76/warc/CC-MAIN-20220523071517-20220523101517-00036.warc.gz"}
https://publikationen.bibliothek.kit.edu/1000054003
# Observation of top quark pairs produced in association with a vector boson in pp collisions at √s = 8 TeV CMS Collaboration; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Eroe, J.; Flechl, M.; Friedl, M.; Fruehwirth, R.; Ghete, V. M.; Hartl, C.; Hoermann, N.; Hrubec, J.; Jeitler, M.; Knuenz, V.; ... more Abstract (English): Measurements of the cross sections for top quark pairs produced in association with a W or Z boson are presented, using 8 TeV pp collision data corresponding to an integrated luminosity of 19.5 fb⁻¹, collected by the CMS experiment at the LHC. Final states are selected in which the associated W boson decays to a charged lepton and a neutrino or the Z boson decays to two charged leptons. Signal events are identified by matching reconstructed objects in the detector to specific final state particles from tt̄W or tt̄Z decays. The tt̄W cross section is measured to be 382 +117/−102 fb with a significance of 4.8 standard deviations from the background-only hypothesis. The tt̄Z cross section is measured to be 242 +65/−55 fb with a significance of 6.4 standard deviations from the background-only hypothesis. These measurements are used to set bounds on five anomalous dimension-six operators that would affect the tt̄W and tt̄Z cross sections. Associated institution(s) at KIT: Institut für Experimentelle Kernphysik (IEKP) Publication type: Journal article Year: 2016 Language: English Identifier: DOI: 10.1007/JHEP01(2016)096 ISSN: 1029-8479, 1126-6708 URN: urn:nbn:de:swb:90-540036 KITopen ID: 1000054003 Published in: Journal of High Energy Physics, Volume 2016, Issue 1, Page 96 Publication note: Funded by SCOAP3 KIT – The Research University in the Helmholtz Association KITopen Landing Page
2017-03-29 21:07:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300895929336548, "perplexity": 11238.713518211061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00070-ip-10-233-31-227.ec2.internal.warc.gz"}
http://www.r-bloggers.com/2012/05/page/5/
# Monthly Archives: May 2012 ## ABC+EL=no D(ata) May 27, 2012 By It took us a loooong while but we finally ended up completing a paper on ABC using empirical likelihood (EL) that was started by me listening to Brunero Liseo’s tutorial in O’Bayes-2011 in Shanghai… Brunero mentioned empirical likelihood as a semi-parametric technique w/o much Bayesian connections and this got me thinking ## The aesthetics of error bars May 27, 2012 By This blog and my other main blog (the companion blog for my book) are now syndicated via R-bloggers (posts tagged R only) and statsblogs.com. The latter is a relatively new blog aggregator but looks to have some interesting content. R-bloggers it quite... ## Ben Schmid took ship’s log data (previously visualized in… May 27, 2012 By Ben Schmid took ship’s log data (previously visualized in static form on the the Spatial Analysis blog), and used ggplot and ffmpeg to animate the paths of individual voyages from 1750-1850. The images above come from the animation that combines all ... ## Project Euler — problem 3 May 27, 2012 By The third problem: The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143 ? My solvement is straightforward: firstly to identify all the prime numbers between 2 and sqrt(n); secondly … Continue reading → ## Tweets Analysis about Himpuan Jutaan Belia PutraJaya (Malaysia Youth Day 2012) May 27, 2012 By I’m using the Twitter Listening Robot to know what people is talking Najib Razak, Malaysia Prime Minister. Apparently,  23-27 are the Malaysia Youth Day 2012.  There were many funny retweets (more than 50 times) by the public: RT @Faizrawrr: Be... ## Updating to R 2.15, warnings in R and an updated function list for Serious Stats May 27, 2012 By Whilst writing the book  the latest version of R changed several times. Although I started on an earlier version, the bulk of the book was written with 2.11 and it was finished under R 2.12. The final version of the R scripts were therefore run and checked using R 2.12 and, in the main, the most recent ## PLoS computational biology meets wikipedia May 26, 2012 By Robin Ryder pointed out to me this new experiment run by PLoS since March 2012, namely the introduction of a new article type, “called “Topic Pages” and written in the style of a Wikipedia article“. Not only this terrific idea gives more credence to Wikipedia biology pages, at least in their early stage, but also ## Cross-valitation variability example, part I May 26, 2012 By Recently I had a discussion with a student about variability of results obtained from cross-validation procedure. While the subject is well known there are not many examples on the web showing it, so I have written its simple presentation.Results from ... ## Automating repetitive plot elements May 26, 2012 By The syntax of ggplot2 emphasizes constructing plots by adding components, or layers, using +. Possibly one of the most useful, but least remarked upon, consequences of this syntax is that it allows for an incredible degree of flexibility in saving and... ## MathJax Syntax Change May 25, 2012 By We’ve just a made a change to the syntax for embedding MathJax equations in R Markdown documents. The change was made to eliminate some parsing ambiguities and to support future extensibility to additional formats. The revised syntax adds a “latex” qualifier to the $or$\$ equation begin delimiter. It looks like this: This change
2016-02-14 21:07:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2834661304950714, "perplexity": 6419.965249385465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454702032759.79/warc/CC-MAIN-20160205195352-00216-ip-10-236-182-209.ec2.internal.warc.gz"}
https://crypto.stackexchange.com/questions/70380/how-to-determine-if-n-cdot-ga-mod-p-and-m-cdot-ga-mod-p-genera
# How to determine if $\{n \cdot g^a \mod P\}$ and $\{m \cdot g^a \mod P\}$ generate the same sets? (set size < $P-1$)

Given some examples $k_{n_i}, k_{m_i}$ out of each value set:

$$k_{n_i} \in \{n \cdot g^a \mod P,\ \forall a \in \mathbb{N}\}$$
$$k_{m_i} \in \{m \cdot g^a \mod P,\ \forall a \in \mathbb{N}\}$$

Each set has size $S$, which is prime and known. The value $P$ is also a prime, with $P = 2 \cdot S \cdot f + 1$. The factor $f$ is a prime (or a product of primes) and is known as well. The generator $g$ is known too. For a given $k$ the factors $n, m$ and the related exponent $a$ are unknown.

As shown here, for each $k$ multiple value pairs $(n,a)$ can be computed very quickly (pick an $a$ and compute $n = kg^{-a} \mod P$). That means those sets can be equal with $n \not= m$.

Now, is there a way to check whether they generate the same sets (without computing all combinations)?

• For what set is $g$ a generator? – SEJPM May 7 '19 at 18:12
• The same $g$ is used in both sets; only the factor is different. $g^S = 1 \mod P$ and $P = 2Sf + 1$, so $g$ is not a primitive root of $P$: it can only generate a subgroup of size $S$. With two different factors $m, n$ it generates two sets that are either identical or share no elements at all. With all possible factors $n'$ a total of $2 \cdot f$ sets can be generated, which share no elements and together cover all numbers from $1$ to $P-1$. – J. Doe May 7 '19 at 19:18

$$G_n = G_m \quad \text{iff} \quad n^S \equiv m^S \pmod P$$

Proof: Suppose $n^S \not\equiv m^S \pmod P$. Every $e \in G_n$ satisfies $e^S = n^S$ (as $e^S = n^S \cdot (g^a)^S = n^S$), and similarly every $f \in G_m$ satisfies $f^S = m^S$. Hence $\forall e \in G_n, f \in G_m: e \ne f$, and therefore $G_n \ne G_m$ (and actually the two sets are disjoint).

Other direction (needed because we're asserting equivalence): If $n^S \equiv m^S \pmod P$, then $(nm^{-1})^S = 1$, that is, $nm^{-1}$ is in the subgroup generated by $g$, so $g^c = nm^{-1}$ for some integer $c$. Then, for any member $e \in G_n$, we have $e = n \cdot g^a$ (for some $a$); and $n \cdot g^a = n \cdot g^{-c} \cdot g^{a+c} = n \cdot n^{-1}m \cdot g^{a+c} = m \cdot g^{a+c}$, hence $e \in G_m$. Similarly, we can show that all elements $f \in G_m$ are also in $G_n$, and hence $G_n = G_m$.

Extra credit for the reader: find the step where I implicitly assumed that $P$ was prime...

• Thanks again, you are my hero for answering that many questions. One hint: those $k_{m_i}, k_{n_i}$ should only be some random elements out of each set and not the sets themselves (edited the top post, named the sets $G_n, G_m$). But that doesn't change anything. This finally destroyed my use-case problem-solving idea (link). For that case with 3 generators it should be $n^{QRS} \equiv m^{QRS} \mod P$ – J. Doe May 7 '19 at 22:51
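The accepted criterion above reduces the whole question to one modular exponentiation per sample: since every element $e$ of $G_n$ satisfies $e^S \equiv n^S \pmod P$, a single known element from each set decides equality. A minimal Python sketch, using small toy parameters that are my own assumption (not taken from the question):

```python
# Sketch of the test from the answer: the sets containing k_n and k_m are equal
# iff k_n^S == k_m^S (mod P), because every e in {n * g^a mod P} has e^S == n^S.
def same_set(k_n, k_m, S, P):
    return pow(k_n, S, P) == pow(k_m, S, P)

# Toy parameters (assumed for illustration only): S = 11, f = 3,
# P = 2*S*f + 1 = 67 (prime); g = 64 has order 11 modulo 67.
P, S, g = 67, 11, 64
assert pow(g, S, P) == 1 and g != 1

n = 5
k_n = n * pow(g, 2, P) % P                    # one sample out of G_n
k_m = n * pow(g, 4, P) * pow(g, 7, P) % P     # a sample of G_m with m = n*g^4 (same coset)
print(same_set(k_n, k_m, S, P))               # True  -> the two sets are identical
print(same_set(k_n, 2, S, P))                 # False -> here 2 lies in a different coset
```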
2021-03-05 23:42:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 39, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9889682531356812, "perplexity": 326.7567079856209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373761.80/warc/CC-MAIN-20210305214044-20210306004044-00367.warc.gz"}
https://www.projecteuclid.org/euclid.gt/1513799608
## Geometry & Topology

### Symplectomorphism groups and isotropic skeletons

Joseph Coffey

#### Abstract

The symplectomorphism group of a 2–dimensional surface is homotopy equivalent to the orbit of a filling system of curves. We give a generalization of this statement to dimension 4. The filling system of curves is replaced by a decomposition of the symplectic 4–manifold $(M,\omega)$ into a disjoint union of an isotropic 2–complex $L$ and a disc bundle over a symplectic surface $\Sigma$ which is Poincaré dual to a multiple of the form $\omega$. We show that one can then recover the homotopy type of the symplectomorphism group of $M$ from the orbit of the pair $(L,\Sigma)$. This allows us to compute the homotopy type of certain spaces of Lagrangian submanifolds, for example the space of Lagrangian $\mathbb{RP}^2 \subset \mathbb{CP}^2$ isotopic to the standard one.

#### Article information

Source: Geom. Topol., Volume 9, Number 2 (2005), 935–970.

Dates: Revised: 24 September 2004. Accepted: 18 January 2005. First available in Project Euclid: 20 December 2017.

https://projecteuclid.org/euclid.gt/1513799608

Digital Object Identifier: doi:10.2140/gt.2005.9.935

Mathematical Reviews number (MathSciNet): MR2140995

Zentralblatt MATH identifier: 1083.57034

#### Citation

Coffey, Joseph. Symplectomorphism groups and isotropic skeletons. Geom. Topol. 9 (2005), no. 2, 935–970. doi:10.2140/gt.2005.9.935. https://projecteuclid.org/euclid.gt/1513799608

#### References

• Miguel Abreu, Topology of symplectomorphism groups of $S^2 \times S^2$, Invent. Math. 131 (1998) 1–23
• Miguel Abreu, Dusa McDuff, Topology of symplectomorphism groups of rational ruled surfaces, J. Amer. Math. Soc. 13 (2000) 971–1009
• P Biran, Lagrangian barriers and symplectic embeddings, Geom. Funct. Anal. 11 (2001) 407–464
• Joseph Coffey, A Symplectic Alexander Trick and Spaces of Symplectic Sections, Ph.D. thesis, State University of New York, Stony Brook, Stony Brook, New York (2003)
• S K Donaldson, Symplectic submanifolds and almost-complex geometry, J. Differential Geom. 44 (1996) 666–705
• Y Eliashberg, L Polterovich, Local Lagrangian $2$–knots are trivial, Ann. of Math. (2) 144 (1996) 61–76
• M Gromov, Pseudoholomorphic curves in symplectic manifolds, Invent. Math. 82 (1985) 307–347
• Richard Hind, Lagrangian spheres in $S^2 \times S^2$
• François Lalonde, Dusa McDuff, The classification of ruled symplectic $4$–manifolds, Math. Res. Lett. 3 (1996) 769–778
• Eugene Lerman, Symplectic cuts, Math. Res. Lett. 2 (1995) 247–258
• J Peter May, Simplicial objects in algebraic topology, Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL (1992), reprint of the 1967 original
• Dusa McDuff, The structure of rational and ruled symplectic $4$–manifolds, J. Amer. Math. Soc. 3 (1990) 679–712
• Dusa McDuff, Symplectomorphism groups and almost complex structures, from: “Essays on geometry and related topics, Vol. 1, 2”, Monogr. Enseign. Math. 38, Enseignement Math., Geneva (2001) 527–556
• Edwin H Spanier, Algebraic topology, Springer–Verlag, New York (1981)
• W P Thurston, Some simple examples of symplectic manifolds, Proc. Amer. Math. Soc. 55 (1976) 467–468
2020-02-24 05:05:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6489690542221069, "perplexity": 1537.2098658766115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145897.19/warc/CC-MAIN-20200224040929-20200224070929-00056.warc.gz"}
https://oxfordre.com/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-93
Emergencies and the Rule of Law

Clement Fatovic, Florida International University

Summary

Despite scholarly disagreements over the meanings of both the rule of law and emergency, there is broad agreement that emergencies often invite and justify departures from the formal requirements and substantive values identified with the rule of law as a normative ideal. It is often argued that strict adherence to existing laws, which are typically enacted during periods of normalcy in order to prevent arbitrary forms of rule associated with tyranny, could inhibit the government’s ability to respond quickly and effectively to the often unexpected and extraordinary challenges posed by an emergency such as war or natural disaster. Consequently, the temporary use of extraordinary measures outside the law has been widely accepted both in theory and in practice as long as such measures aim to restore the normal legal and political order. However, understandings of the tension between emergency and the rule of law have undergone a significant shift during the 20th century as emergency powers increasingly get codified into law. The use of extralegal measures that violate the formal and procedural requirements of the rule of law is still considered a dangerous possibility. However, as governments have come to rely increasingly on expansions of power that technically comport with standards of legality to deal with a growing list of situations characterized as emergencies, there is concern that extraordinary exercises of power intended to be temporary are becoming part of the permanent legal and political order.

Subjects

• Governance/Political Change
• History and Politics

Introduction

The rule of law has become a nearly universal standard of political legitimacy. The basic principle that the power of the state ought to be exercised in accordance with relatively stable and general rules has become so widely accepted that even authoritarian regimes often pay lip service to this ideal (Tamanaha, 2004, pp. 2–3). However, the requirement that the government’s powers be defined and constrained by law tends to yield in times of emergency to calls for government to respond in ways that exceed its regular legal powers. Whether those powers are exercised through laws that would not apply in ordinary circumstances or without formal legal authorization altogether, emergencies test commitments to the rule of law at all levels of government and society. The question that has dominated both empirical and normative scholarship is whether the rule of law must give way, in whole or in part, to the exigencies of an emergency. What makes emergencies so dangerous to the rule of law is that they appear to invite and justify departures not just from the formal requirements of legality basic to any conception of the rule of law but from the substantive ideals and values expressed in those formal requirements, as well. Suspensions and violations of domestic and international law, including ordinary statutes, constitutional provisions, and treaty obligations, have long been perceived as the most direct and obvious threats to the rule of law.
However, as governments have come to rely increasingly on various legal tools to handle emergencies, scholars have devoted more attention to the ways that lawful but extraordinary exercises of power also threaten the rule of law. The Rule of Law Ideal Despite widespread acceptance of the normative desirability of the rule of law, there is no consensus on its precise meaning, constitutive elements, and practical requirements. Over the course of the 20th century, the rule of law has come to be associated—and occasionally identified—with a diverse range of values, including individual rights, social justice, democratic self-determination, free markets, judicial independence, and good governance. One indication of how much disagreement exists over the basic meaning and entailments of the rule of law is that organizations such as Freedom House, Global Integrity, and others that monitor its implementation around the globe employ such different measures that there is little to no correlation between some of their estimates of how well countries live up to this ideal (Skaaning, 2010). Disagreement about how to define and measure the rule of law should not obscure the fact that there is broad agreement that its essential function is to guard against tyranny. From its earliest articulations by the ancients to its most recent conceptualizations by scholars, public officials, and non-governmental organizations, the rule of law has served as a normative ideal opposed to arbitrary forms of rule. It seeks to substitute the potentially volatile and capricious rule of individuals with the supposed predictability and regularity of impersonal rules, or what John Adams proclaimed “an empire of laws, and not of men” (Adams, 2000, p. 288). This depiction of law as a constraint on power has not gone unchallenged. Critical legal theorists, postcolonial scholars, and legal realists have contested the assumption that the rule of law is necessarily antithetical to discretionary power or the so-called rule of men. In addition to various leftist critiques of law as an instrument of power that reflects and promotes the interests of the privileged, scholars have noted that the ideology of the rule of law has been used as an instrument of imperial power in colonial settings (Unger, 1976, pp. 176–181). Nasser Hussain points out that the rule of law has been used to legitimize despotic forms of rule in Jamaica, India, and other former British colonies. Indeed, the establishment of the rule of law in British colonies often went hand in hand with the exercise of extraordinary emergency powers understood to be in tension with that ideal (Hussain, 2003). Despite such criticisms, scholarly and political discourse on the rule of law is dominated by the notion that it is an indispensable check against abuses of power. It stipulates that “government shall be ruled by the law and subject to it” (Raz, 1979, p. 212). The idea that government must operate according to legal rules is opposed to rule by fiat, diktat, ukase, and other ad hoc modes of discretionary action that expose individuals to arbitrary exercises of power. Although it is impossible to eliminate discretion altogether, the aim is to ensure that any discretion that is exercised is not “legally unfettered” (Bingham, 2011, p. 54). Friedrich Hayek draws a particularly sharp distinction between law and command to underscore the distinctive virtues of the rule of law. 
In contrast to “commands,” which are typically directed to particular individuals and oriented toward the achievement of specific goals in concrete circumstances, laws establish general and abstract rules that anonymous individuals can apply in an unforeseeable range of circumstances. “Stripped of all technicalities,” Hayek contends, “this means that government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to foresee with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge” (Hayek, 1944, p. 72). Formalistic and Substantive Conceptions of the Rule of Law Beyond the rather abstract stipulation that government act according to or through established law, theories of the rule of law can generally be divided into thin conceptions, which hold that laws must exhibit certain formal qualities, and thick conceptions, which maintain that law must also embody or uphold certain substantive values (Craig, 1997). Thin or formalistic conceptions of the rule of law require political power to be exercised in accordance with positive laws that exhibit certain rule-like qualities. According to H. L. A. Hart, this reflects the basic understanding of law as a system of rules (Hart, 1961). It is not sufficient, according to this conception, for the coercive powers of the state to be exercised through laws duly enacted by proper lawmaking institutions and procedures. Those laws must also conform to certain formal requirements or internal criteria (Raz, 1979, pp. 210–229). Three formal qualities are singled out as minimal criteria in virtually every theory of the rule of law, thin and thick alike. First, law must be promulgated. That is, legal enactments must be made public and knowable to their addressees. Laws made in secret or difficult to access by those concerned can make it impossible for them to refrain from prohibited activities or fulfill their legal duties. Second, law must be prospective (at least as far as criminal law is concerned). A retroactive or ex post facto law is considered “truly a monstrosity” because it exposes individuals to punishments for actions that were lawful at the time they were committed (Fuller, 1969, p. 53). Third, law must be general. The requirement of generality prohibits legal enactments that make distinctions between or discriminate on the basis of arbitrary personal characteristics such as race, class, sex, and religion. No individual or group is to be singled out for special or exceptional treatment—positive or negative. From at least the time of the ancient Greeks, this requirement has been understood as a demand for equality under the law that is applicable to rulers and ruled alike.1 Some legal and political theorists have identified additional formal qualities that law must possess if arbitrariness is to be avoided. 
One of the most comprehensive and influential accounts of the formal features necessary for the rule of law is offered by Lon Fuller, who identified five additional requirements that constitute what he calls the “inner morality of law”: clarity (the law must be intelligible), non-contradiction (a system of rules must not contain contradictory requirements), possibility of compliance (the law may not demand performance of the impossible), constancy (the law should be relatively stable over time), and congruence between declared rule and official action (public officials must actually enforce the law on the books). According to Fuller, a system of rules that lacks any one these features or fails substantially to live up to several of them does not count as a legal system at all (Fuller, 1969). Regardless of any particular enumeration of formal features a legal system must possess to satisfy the demands of the rule of law, these features are not pursued for their own sake. They are championed on the grounds that they help the law achieve justice, freedom, or some other significant substantive ideal. However, many argue that adherence to legal forms is insufficient to satisfy the demands of the rule of law or to achieve its goals. Scholars who advocate thick or substantive conceptions of the rule of law contend that legal enactments must also conform to an external standard of legitimacy. In their view, law must express a particular content, such as the ideals contained in some conception of natural law, human rights, or personal dignity. Several scholars have gone as far as to proclaim that protection of human rights is one of the core principles of the rule of law (Bingham, 2011, pp. 66–84; Tamanaha, 2004, pp. 102–113). The tendency of governments to use measures that employ legal forms to deal with emergencies has led many critics to emphasize these and other substantive ideals when warning about threats to the rule of law. Scholars and watchdog groups have also identified a number of institutional and procedural guarantees as essential to the preservation and enforcement of the rule of law. Chief among these is access to an independent judiciary. At a minimum, there must be impartial judicial bodies responsible for interpreting and applying the law without intimidation or interference either from the government or outside groups. Another important requirement is meaningful access to independent courts (see, e.g., Raz, 1979, pp. 216–217; Bingham, 2011, pp. 25, 91–92). Related to this is access to independent legal counsel and “the presence of a well-established legal profession” committed to upholding legality (Tamanaha, 2004, pp. 58–59). Although such procedural and institutional requirements are singled out by many scholars and organizations as independent criteria, it might be best to think of these as logical entailments of the formal qualities essential to the rule of law, rather than external criteria. Regardless of which conception of the rule of law is used, no political system ever fully lives up to this ideal. If for no other reason than that some degree of vagueness and indeterminacy in the law is “inescapable,” scholars have argued that it is unrealistic to expect “complete conformity” to the rule of law (Raz, 1979, p. 222; see also Tamanaha, 2004, pp. 86–90). A legal system could satisfy some requirements of the rule of law ideal but fall short on other dimensions. 
Some scholars have also argued that the distinction between the rule of law and the so-called rule of men is “overdrawn and misleading” inasmuch as any legal system depends on discretionary exercises of “judicial and administrative power” that require acts of individual judgment, interpretation, and application (Honig, 2009, p. 84). For these reasons, it is better to think of the rule of law as a political ideal achieved by degrees along a continuum rather than a categorical standard a government either does or does not meet (Raz, 1979, p. 211). Despite the insistence that adherence to the rule of law is an essential condition of legitimacy and is necessary (if not sufficient) to prevent tyranny, there is broad acceptance both in theory and in practice that some deviation from the law is permissible in times of emergency. In fact, it is often argued that strict adherence to existing laws, which are typically enacted during periods of calm and designed to deal with regular or ordinary occurrences, could inhibit the government’s ability to deal capably with the often unexpected and extraordinary challenges posed by an emergency. Exactly how much deviation, what kind, and for how long, though, depends on how both the rule of law and emergency are understood. The Expanding Concept of Emergency There is no widely accepted definition of emergency either in theory or in law, but there is a common and long-standing assumption that it is categorically distinct from a “normal” situation. Whatever specific criteria are used to define a state of emergency, the prevailing view of legal and political scholars is that an emergency is a significant departure from a state of normalcy, triggered by an extreme event that is highly disruptive or threatening to the established order. For example, Edward S. Corwin defined emergency in terms of conditions that “have not attained enough of stability or recurrency to admit of their being dealt with according to rule” (Corwin, 1957, p. 3). The aberrational quality of emergency is also emphasized in the International Law Association’s definition of an emergency as “an exceptional situation of crisis or public danger, actual or imminent” (quoted in Chowdhury, 1989, p. 11). In theory, an emergency also differs from the norm in terms of its duration: it is a temporary disruption of the status quo that arises from some triggering event and is expected to come to a definite end. In addition, an emergency is generally distinguished by its scale. As the International Law Association claims, an emergency “affects the whole population or the whole population of the area to which the declaration applies and constitutes a threat to the organized life of the community of which the state is composed” (quoted in Chowdhury, 1989, p. 11). What counts as an emergency is largely in the eye of the beholder. Whether a situation is classified as an emergency, and the specific kind of emergency it is considered to be, frequently dictates how government responds—and whether it follows ordinary laws. For instance, in the view of many U.S. officials, the drop in the stock market during the financial crisis that erupted in 2008 was an emergency that justified the use of extraordinary measures, but the collapse in housing prices that affected millions of ordinary mortgage holders was not. 
Similarly, some chronic and persistent challenges, such as the threat of terrorism, are treated as emergencies, whereas others, including poverty and homelessness, generally are not.2 Whether an event or a condition is framed as an emergency can also have profound consequences for how—or even if—government responds; levels of public support for any measures it takes; whether courts approve; and which laws, if any, are applicable.3 Emergencies are usually classified in scholarship and in law under three broad headings: violent situations, natural disasters, and economic crises. Emergencies arising from violence include wars (whether from foreign invasion or civil war), terrorism, domestic insurrections or rebellions, and civil strife.4 Natural disasters include hurricanes, tornadoes, tsunamis, earthquakes, landslides, volcanic eruptions, and other extreme events covered in municipal and international law by the concept of force majeure (Chowdhury, 1989, p. 16). (Famines, pandemics, and related threats to public health are usually treated like natural disasters.5) Economic crises include the collapse of the financial system, hyperinflation, and economic depressions. Although there are no neat lines around these categories, and nothing necessarily prevents one kind of emergency from turning into (or taking on the characteristics of) another kind of emergency, there is a common assumption that violent emergencies (like one resulting from a terrorist attack) require and justify different sorts of responses than those appropriate for nonviolent emergencies (such as severe food shortages). Despite the expanding conceptual elasticity of emergency, the state of war continues to serve as the prototypical example of emergency, exerting a profound influence on thinking about the urgency, legality, and legitimacy of government’s response. The notion that the law must give way before the exigencies of war is best captured in an aphorism attributed to Cicero: “Silent leges inter arma” (“In times of war, the law falls silent”). The association of emergency with the existential threat posed by large-scale military conflict serves to mobilize public support for the government’s response and justify the use of otherwise prohibited measures. Tensions Between Emergencies and the Rule of Law Perceived emergencies threaten rule of law values most directly by serving as a pretext for governments to ignore or circumvent constraints that ordinarily prevent or minimize arbitrary exercises of power. Institutions charged with upholding the rule of law may see their powers weakened or displaced; regular processes of lawmaking may be bypassed; and ordinary protections for civil liberties and civil rights may be suspended. In the most extreme cases, individuals are subjected to summary judgments—including executions—by military commanders and security forces. Emergency governments sometimes try to maintain a veneer of legality by setting up exceptional adjudicative bodies such as military tribunals and military commissions that operate according to relaxed standards. But even when regular courts continue to operate, they tend to show unusual deference to government. Just as courts tend to “rally ’round the flag” in times of war, they are also prone to side with government in times of emergency. 
Among other things, they may decline to review government actions by claiming that certain matters are non-justiciable; give governments that invoke state secrets doctrines the benefit of the doubt; and accept claims of national security as valid reasons to circumvent the separation of powers, exceed legal restrictions, and curtail the rights and liberties of citizens. The historical pattern of judicial deference to government in times of crisis has led many scholars to conclude that courts cannot be counted on to protect basic human rights during an emergency (Alexander, 1984). Emergencies have been cited to justify a wide variety of actions that would be legally prohibited and morally intolerable during periods of normalcy. Even many of those who consider respect for human rights one of the core principles of the rule of law concede that emergencies of sufficient gravity may justify the temporary abridgment of some rights—a practice permitted by the International Covenant on Civil and Political Rights as long as states do not discriminate on certain grounds (Ignatieff, 2004, pp. 49–50). However, there is a crucial distinction between derogable rights (such as the right to be free from forced labor, which often gives way to wartime necessities for compulsory military service), which may be suspended in times of great emergency, and non-derogable rights (such as the right to be free from torture), which must be respected in all circumstances. In addition to violations of basic human rights, such as the right to life, the Belgrade Report of the International Law Association issued in 1980 identified seven persistent patterns of abuse during states of emergency, including the overthrow or replacement of existing governments, the arbitrary use of detention, the suspension of civil and political liberties such as freedom of expression, the use of ex post facto laws to punish newly created crimes, the debilitation of the judiciary, and the prolongation of emergency even after the conditions that prompted the initial declaration of emergency ceased to exist (Chowdhury, 1989, pp. 4–6). In response to the threat of terrorism, nominally liberal democratic countries have resorted to censorship of media, new and enhanced forms of surveillance, curfews, indefinite detention, and other measures that would be difficult to justify under normal conditions. The potential for such abuses is precisely what prompts some to insist on forms of “constitutional absolutism” that commit government to follow the exact same rules in emergencies that apply in normal circumstances (Gross & Aoláin, 2006, pp. 86–109). Emergency as Exception A staple of scholarship on emergency is the idea that there is a fundamental ontological divide between a state of normalcy and a state of emergency. The notion of an insuperable divide between normal situations and emergencies has powerfully informed understandings of the proper role of law when an emergency strikes. The most outspoken proponent of this understanding of the tension between the rule of law and emergency is Carl Schmitt, a German legal and political theorist-turned-Nazi whose Weimar-era writings received renewed interest following the September 11, 2001, terrorist attacks (see, e.g., Agamben, 2005; Lazar, 2009; Posner & Vermeule, 2011). Schmitt drew a sharp contrast between the “norm” and the “exception” to justify departures from ordinary law in times of emergency. 
Law, he argued, prescribes general rules designed to deal with the normal situation, which is routine and calculable. However, the world is susceptible to the eruption of extraordinary events that cannot be foreseen and therefore cannot be provided for in advance by law. “The exception, which is not codified in the existing legal order, can at best be characterized as a case of extreme peril, a danger to the existence of the state, and the like” (Schmitt, 2005, pp. 6–7). Because an emergency is by definition not a “normal” situation, legal norms do not apply. In Schmitt’s view, the inability to foresee every situation that may arise means that the rule of law has to submit to the use of extra-legal ad hoc measures, or “decisions,” by the sovereign. According to Schmitt, the only entity capable of responding effectively and expeditiously to such a threat is “the sheer executive, which is not conditioned in advance by any norm in the legal sense” (Schmitt, 2014, p. 8). Insistence on following the law—specifically, the demand for prior legal authorization to take any action—is not only naïve, it is dangerous, Schmitt argued. Rigid adherence to the rule of law would impede the government’s ability to deal with extraordinary circumstances that require maximum flexibility. Although Schmitt’s theory was part of a much broader polemic against the shortcomings of liberal democracy, it expresses (albeit in particularly stark form) ideas that have been endorsed by many champions of the rule of law, including some of the liberal thinkers Schmitt excoriated. John Locke defended the use of extra-legal action by the executive, or what he called “prerogative,” in order to serve the public welfare when strict adherence to the law would do more harm than good. Locke’s theory, which has exerted a strong influence on liberal political thought regarding emergency, states that “this power to act according to discretion, for the publick good, without the prescription of the Law, and sometimes even against it,” is justified by the fact that “many accidents may happen, wherein a strict and rigid observation of the Laws may do harm” (Locke, 1988; on Locke’s influence on liberal political thought concerning emergency powers, see Fatovic, 2009). Lockean reasoning on this score has been echoed by generations of American statesmen. Reflecting on the duties of a high officer, Thomas Jefferson wrote: A strict observance of the written laws is doubtless one of the high duties of a good citizen, but it is not the highest. The laws of necessity, of self-preservation, of saving our country when in danger, are of higher obligation. To lose our country by a scrupulous adherence to written law, would be to lose the law itself, with life, liberty, property and all those who are enjoying them with us; thus absurdly sacrificing the ends to the means. (Jefferson, 1984, pp. 1231–1232) The dilemma facing government officials who find that strict adherence to legal rules imperils not only public safety but the preservation of the rule of law itself was most famously expressed in Abraham Lincoln’s defense of his unilateral decision to suspend the writ of habeas corpus during the early stages of the Civil War: “are all the laws, but one, to go unexecuted, and the government itself go to pieces, lest that one be violated?” (Lincoln, 1989, p. 253; emphasis in original). 
The idea that there is an essential divide between the normal situation and times of emergency that justifies departures from ordinary law has informed practical responses to emergency for centuries. This is the assumption underlying the institution of the dictatorship in republican Rome (as well as modern “models of accommodation” to emergency such as the state of siege and martial law) (Gross & Aoláin, 2006, pp. 17–35; Rossiter, 2002). The office of dictator was designed to deal with extraordinary circumstances beyond the competence of ordinary magistrates, including, most commonly, external military threats and domestic uprisings. The dictator possessed enormous authority to do almost anything he thought necessary to bring about an end to the emergency, including issuing new laws, ruling by decree, commanding other magistrates, and putting citizens to death without trial (Lazar, 2009; Nicolet, 2004; Schmitt, 2014). Despite the dictator’s broad discretionary authority to employ extra-legal measures, the norms surrounding the Roman dictatorship belie the notion that responses to emergency stand wholly outside the law. It was a constitutional, hence legal, office structured by rules that determined how it came into being (the dictator was nominated by either of two consuls after a recommendation by the Senate), specified its maximum duration (six months or until the crisis passed, whichever came first), and imposed some inviolable restrictions on the otherwise vast powers of the office (the dictator could not unilaterally impose new taxes). Perhaps the most important norm governing the behavior of dictators in republican Rome was the expectation that the officeholder would serve in what Schmitt described as a “commissarial” capacity to restore the status quo ante in its entirety and refrain from introducing permanent changes into the legal order (on Schmitt’s distinction between “commissarial” and “sovereign” dictatorship, see Schmitt, 2014). These features suggest that law continues to structure and condition the use of power even in arrangements that permit the use of extra-legal measures. Scholars have pointed to Roman dictatorship and especially to modern practices to argue that the norm-exception dichotomy fails to take into account the multifarious ways the normal situation already makes allowances for the use of extraordinary powers and also ignores all the ways that the exceptional case is always infused with law. As Nomi Claire Lazar argues, there are significant continuities between states of emergency and states of normalcy that challenge facile dichotomies like those between norm and decision. It is not simply a choice between law and discretion, but between different kinds of law and different degrees of discretion. Lazar contends that there are always moments of discretion and decision even in the most placid periods of normalcy, and that legal norms continue to structure life even in the most extreme emergencies. If nothing else, law continues to inform ideas about what is permissible in times of emergency. In addition, law structures how emergencies get defined, when they can be declared, what powers the state may use, and when they come to an end (Lazar, 2009). The Legalization of Emergency Powers Whether they approve or disapprove of temporary deviations from existing laws in times of emergency, scholars have devoted more attention to extra-legal responses than to formally legal responses. 
However, recent scholarship has begun to challenge the dominance of the extra-legal framework on normative and empirical grounds. This work, which tends to be more historically informed than earlier scholarship that generally took its cues from political and legal theory, has shown that real-world responses to emergency involve not just the violation or suspension of law, but its expansion and proliferation as well. Emergencies are not always or only antithetical to law; they are also “jurisgenerative,” stimulating the development of new and different law (Sarat, 2010, p. 4). Since the 19th century, there has been a general trend toward the ever-more detailed juridification of emergency powers. Governments today seldom have to resort to extra-legal measures like those employed by Roman dictators because public laws themselves now authorize governments to take a variety of extraordinary—but technically not extra-legal—actions (Schmitt, 2014, p. 221).6 In fact, most constitutions now contain provisions that stipulate the conditions under which an emergency may be declared, identify the proper officers or institutions authorized to issue such a proclamation, specify the powers that may be activated, indicate the rights that may (or may not) be derogated, and limit the time period during which emergency measures may be employed. Contrary to Schmitt’s claim that modern constitutions seek to “delimit the extraordinary functions as precisely as possible,” some constitutions provide breathtaking grants of authority that confer legality on almost any conceivable action.7 The increasing reliance on legal measures to deal with emergencies does not necessarily alleviate concerns about the rule of law. Whether it ignores the requirement of generality in law by singling out some individuals or groups for different treatment or modifies normal protections for human rights, the juridification of emergency power presents challenges for both formal and substantive conceptions of the rule of law. The danger today is not (only) that governments dealing with emergency will operate in a lawless manner, but that they will receive all the legal cover they need from compliant legislatures and feckless courts to take actions that technically comport with the letter of the law but violate its spirit, thereby eroding respect for the rule of law. Scholars point to the use of enabling statutes that confer new powers on government and sometimes permit it to sidestep ordinary constraints as the most significant threat to the rule of law in times of emergency. The most infamous example of an enabling provision is Article 48 of the Weimar Constitution, which ended up giving the German president largely undefined authority to promulgate emergency decrees and contributed to the collapse of the Weimar Republic (Rossiter, 2002, pp. 31–73). Enabling legislation that supplies the executive with additional discretionary powers or confers extraordinary powers on security forces has been a feature of both authoritarian regimes and constitutional democracies. For instance, emergency laws in Egypt have permitted the president to restrict freedom of movement and assembly, censor the media, inspect private communications, regulate hours of operation for businesses, and confiscate property, among other things (Reza, 2007, p. 539). Countries held up as exemplars of the rule of law are no exception to this trend. 
In the United States, the powers available to the executive since the 19th century have “come to be increasingly rooted in statutory law,” thereby obviating the need for the use of extra-legal action of the sort defended by Jefferson and Lincoln (Relyea, 2007, p. 18). Legal accommodation of emergency government in the United States accelerated during the New Deal and World War II. In addition to actions President Franklin Delano Roosevelt took on his own authority to deal with the Great Depression, he relied on a variety of new powers granted by recently enacted statutes. In some cases, legislation was passed to ratify legally questionable actions he had already taken. For instance, a few days after Roosevelt became president, Congress passed the Emergency Banking Act, which gave him statutory authority to declare a bank holiday—something he had done on his own authority within 48 hours of assuming office (Relyea, 2007, p. 7). Emergency Law in the United States Partially in response to abuses of presidential power stemming from the Vietnam War and the Watergate scandal, the U.S. Congress took steps to curb the statutory growth in emergency powers. When the Special Committee on the Termination of the National Emergency looked into the matter, it found that there were four separate proclamations of emergency simultaneously in effect (Relyea, 2007, p. 9). A subsequent panel found that there were “470 provisions of federal law which delegated extraordinary authority to the executive in time of national emergency” (Relyea, 2007, p. 10). Attempts to reform this situation resulted in the passage of new legislation designed to establish formal procedures regulating the declaration of and responses to emergency. Whatever the intentions of lawmakers, one of the major effects of much legislation pertaining to emergencies since the 1970s has been to augment the powers of the executive.8 The result, according to Kim Scheppele, is that “national emergencies have been declared far more often after the reforms of the 1970s than before that date because presidential declarations of emergency have been regularized, routinized, and taken into normal constitutional practice. Emergencies have become so common that hardly anyone notices them” (Scheppele, 2006, p. 856). Even though there are constraints built into some of this legislation (such as the sunset provisions contained in the Economic Stabilization Act of 1970, which authorizes the president to impose wage and price controls), much of the legislation presidents have invoked in recent years is on permanent standby—“dormant until activated by the President” (Relyea, 2007, p. 6). The routinization of emergency powers has advanced so far that U.S. presidents have been able to argue that even the most extraordinary and controversial measures are solidly grounded in law. The prosecution of the so-called War on Terror by the George W. Bush administration provoked fierce criticism that its surveillance policies, use of “enhanced interrogation techniques” against suspected terrorists, indefinite detention of enemy combatants, and many other counterterrorism actions violated the rule of law, but attorneys working for the administration steadfastly maintained that its actions were lawful. In addition to citing the president’s inherent constitutional authority under Article II of the U.S. 
Constitution, administration officials claimed that various federal statutes give the president legal authority to carry out every one of his national security policies (see, e.g., Yoo, 2003, 2006). For instance, President Bush found all the legal authorization he needed for extraordinary measures, such as freezing the assets of suspected terrorists, in decades-old statutes such as the National Emergencies Act and the International Economic Emergency Powers Act (Scheppele, 2006, p. 857). In addition to relying on grants of authority from legislation already on the books, governments also pressure lawmakers to pass new laws empowering them to deal with emergencies after they arise. Statutes passed shortly after the September 11 attacks, including the Authorization for Use of Military Force and the USA PATRIOT Act, gave the government new or enhanced powers to use military force against suspected terrorists, engage in expanded surveillance and wiretapping operations, and search banking and business records, among other things (see Yoo, 2006; Goldsmith, 2007, 2012). Many U.S. allies followed suit. The United Kingdom passed new counterterrorism laws such as the Anti-Terrorism, Crime and Security Act 2001, which gave government the power to detain indefinitely non-nationals determined to be a national security risk (Dyzenhaus, 2006, pp. 175–176). Scholarship on emergency powers after 9/11 has generally focused on the dangers posed to the rule of law from expansions of the executive’s powers to confront perceived threats to national security, but open-ended grants of authority have not been limited to putative emergencies arising from acts of terrorism. For instance, Secretary of the Treasury Hank Paulson sought nearly unfettered authority from Congress to deal with the financial upheaval resulting from the subprime mortgage crisis in the fall of 2008. The original three-page proposal submitted to Congress sought authorization to spend up to $700 billion to buy and sell distressed assets without any restrictions or oversight other than periodic reports (Herszenhorn, 2008). Although Congress balked at the idea of giving the Treasury secretary the authority to purchase mortgage-related assets “without limitation,” the Troubled Asset Relief Program established by the Emergency Economic Stabilization Act of 2008 gave the Treasury Department broad authority to purchase or insure so-called toxic assets from financial institutions (Text of draft proposal, 2008). Legal Threats to the Rule of Law Critics have argued that the current practice of ruling by law in times of emergency is troubling because it relies on “the legal form to cloak arbitrary power” (Balasubramaniam, 2008). Legality, they argue, provides little more than a “fig leaf” for the very practices that the rule of law is meant to prevent. Whether governments make use of capacious constructions of constitutional powers, “tendentious” readings of existing law, or delegations of power from newly enacted statutes, critics have argued that such uses of law achieve only a façade of legality and fail to live up to the promise of the rule of law (on tendentious uses of existing law, see Bruff, 2009). The danger in the long run is that respect for the rule of law and the values it is supposed to uphold gets eroded. Related to this is what David Dyzenhaus describes as “the problem of seepage of the exceptional into the ordinary which affects all attempts to adapt the rule of law” (Dyzenhaus, 2008, p. 55). 
Once they receive the imprimatur of law, extraordinary measures that were supposed to be only temporary come to be seen as normal and have a tendency to become permanent. This line of argumentation tends to draw attention to the shortcomings of formalistic conceptions of the rule of law, emphasizing instead the importance of substantive values to any meaningful and effective realization of that idea. Those who advocate a more substantive understanding of the rule of law contend that some measures are impermissible no matter what their formally legal basis is. Not only does a substantive view of the rule of law militate against resorting to certain actions (such as the suspension of certain civil liberties and rights) that should never be permissible, but it also impedes consolidations of power likely to pave the way for tyranny. One of the reasons for this, as Dyzenhaus has argued, is that a substantive conception of the rule of law commits all branches of government—especially the judiciary—to the maintenance “of fundamental constitutional principles which protect individuals from arbitrary action by the state” (Dyzenhaus, 2006, p. 2). Spatial and Temporal Bounds of Emergencies Much of the scholarship and law on emergency has been predicated on two assumptions about the temporal and spatial boundaries of emergencies. The first is that an emergency is a temporally discrete phenomenon: it has a more or less definite beginning and end. The second is that an emergency has a long geographical reach: it is more or less national in scope. Recent scholarship has challenged both of these assumptions on theoretical and empirical grounds. Permanent Emergency and the New Normal The prevailing expectation is that the end of an emergency brings a return to normalcy, or at least something closely approximating the conditions that existed prior to an emergency. However, many measures—both legal and extra-legal—that would not have been accepted before an emergency tend to linger long after the perceived danger has passed. In the most extreme cases, governments have declared indefinite states of emergency that have resulted in permanent rule by emergency measures. For instance, Egypt has been in an almost continuous state of emergency for almost one hundred years, dating back to British imperial rule, which has allowed the government to consolidate power and suppress both violent and nonviolent political opposition. Emergency powers in Egypt have been used to carry out mass arrests, detain suspects indefinitely, and hold them incommunicado (Reza, 2007, p. 540). Since achieving independence in 1961, Cameroon has also experienced frequent and extended periods of emergency rule that have enabled various presidents to silence political opposition and consolidate power (Fombad, 2004). Throughout much of the apartheid era, South Africa was under a state of emergency that subjected black citizens to restrictions on public gatherings, strict curfews, warrantless searches, and detention without trial, to name just a few measures adopted in the name of security (Chowdhury, 1989, pp. 47–48; for other examples of permanent or prolonged states of emergency around the globe, see Chowdhury, 1989, pp. 45–54). The tendency of the extraordinary to become or redefine the ordinary has received increased scholarly attention in the wake of the U.S. War on Terror following the 9/11 attacks. Numerous scholars have argued that the exception has become the norm during the global war on terror (Agamben, 2005; Hardt & Negri, 2004, p. 
7; Panitch, 2002). Because terrorism, as opposed to individual terrorists or terrorist organizations, is a technique that can never be eradicated, there is potentially no end to the supposed emergency and therefore potentially no end to emergency government (Ackerman, 2006). But whether a formal state of emergency has been terminated or not, Kim Scheppele contends that the normal situation is hardly ever the same after an emergency because the baseline tends to shift (Scheppele, 2006, p. 840). The most drastic and controversial practices may be terminated once the emergency passes, but many others (such as expanded surveillance of all or parts of the population and increased security measures at airports) remain intact. In addition, laws passed with built-in sunset provisions sometimes get renewed and extended indefinitely as the public grows accustomed to new exercises of power and new constraints on individual liberty. For instance, many provisions of the USA PATRIOT Act that were supposed to expire have been reauthorized and extended. However, some scholars have contested the claim that developments after 9/11 represent a radical break with some normal situation, arguing that emergency powers have been integral to the development of the state in the 20th century, especially in the area of economic regulation (Neocleous, 2006). Indeed, historical institutionalist research demonstrates the extent to which emergency powers have been vital to state-building projects in the United States since the beginning of the 20th century (Curley, 2015). There is another aspect to the temporality of emergency that poses a threat to the rule of law. If expansions in discretionary power constitute the gravest threat to the rule of law during an emergency, instability, or what Fuller calls inconstancy, in the law is perhaps the most serious threat to the rule of law in the immediate aftermath of an emergency. Before conditions can return to normal, some governments have taken advantage of the upheaval created by an emergency to carry out policy experiments that were not politically feasible before emergency struck. Since the final decades of the 20th century, many governments have exploited the instability following an emergency to implement unpopular neoliberal reforms, including the privatization of public services and public lands, cutbacks in spending on social services, the deregulation of business, and the transfer of property rights. As one proponent of neoliberal reforms noted, “These worst of times give rise to the best of opportunities for those who understand the need for fundamental economic reform” (John Williamson quoted in Klein, 2007, p. 213). Some of the most radical changes have involved substantial redistributions of wealth—but not in the direction that rule of law theorists such as A. V. Dicey, Hayek, and Milton Friedman feared. Although these theorists looked to the rule of law as a bulwark against a downward redistribution of wealth from rich to poor that may be favored by democratic majorities, recent examples illustrate that emergencies often provide elites opportunities to redistribute wealth upward from poor to rich.9 Naomi Klein documents numerous instances in which governments have exploited crises to carry out drastic economic changes that were not politically possible in normal circumstances. 
After Hurricane Mitch tore through the Caribbean in October 1998, the governments of Honduras, Guatemala, and Nicaragua implemented a variety of economic reforms that included the privatization of state-owned companies, reductions in environmental standards, the abolition of land-reform laws, and other changes that made it easier to relocate residents who stood in the way of powerful economic interests (Klein, 2007, p. 500). What happened in the aftermath of the cataclysmic 2004 Indian Ocean tsunami that killed nearly 230,000 in 14 countries and displaced millions provides an even starker example. In Sri Lanka, hundreds of thousands of villagers who for generations had lived near the coastline, earning a living from small-scale fishing, were prevented from returning to the land where their homes once stood. Public officials claimed that it was necessary to create a “buffer zone” near coastal regions for reasons of safety, yet large businesses involved in the tourist industry were exempt from these restrictions. The land once occupied by fishing people was handed over to foreign investors and entrepreneurs so they could build world-class hotels and tourist resorts (Klein, 2007, pp. 9, 487–492). The governments of Thailand and the Maldives undertook similar changes that displaced thousands in the name of economic development (Klein, 2007, pp. 504–507). Public officials in the U.S. have also used emergencies as opportunities to pursue neoliberal reforms. Only days after Hurricane Katrina made landfall in New Orleans in 2005, President Bush issued a proclamation suspending portions of the Davis-Bacon Act, which mandates that firms contracted to work on public projects pay local prevailing wages (Relyea, 2007, p. 19). Residents of public housing in New Orleans who were displaced by Hurricane Katrina have seen their former neighborhoods turned over to private development (Klein, 2007, pp. 519–524). Uneven Impacts of Emergency Powers The notion that an emergency or the measures adopted to deal with it apply to the entire population has also come under increasing scrutiny. In practice, the dangers commonly associated with a state of emergency tend to be localized. Even during a state of war, which is often used as a template for thinking about a state of emergency, disruptions to everyday life and the ordinary functioning of government can be limited to very specific geographic locations. Though the sense of fear and feelings of insecurity are often widespread in times of emergency, actual threats to life and property tend to be localized. Rarely do emergencies actually rise to the level of existential threats to the nation as a whole. Most could be classified as “small emergencies”: problems that require exceptional solutions, but not so grave or extreme as to “be seen as fundamentally disruptive of the overall order of things, or of the prospects for realization of a constitutional ideal” (Scheppele, 2006, p. 836). Even during the Civil War, which is arguably the single greatest emergency the United States has ever confronted, armed hostilities in the North were generally confined to border states. But as the concept of emergency has been stretched to include violent situations short of war and other kinds of crises directly affecting only small segments of the population, small-scale emergencies have become so common that “America is now—and has been since the First World War—virtually always in a state of emergency, one way or another” (Scheppele, 2006, p. 836). 
In the fall of 2005 alone, “nearly every American state was in a separate, presidentially declared state of emergency” (Scheppele, 2006, p. 841). Although governments have used emergency conditions confined to a specific location to justify the imposition of emergency measures affecting the entire population, it has become increasingly common for them to apply different sets of laws and measures to different parts of the population under their jurisdiction. Many emergency measures that get adopted or activated—such as laws against price gouging—are applied only in directly affected areas (Dillbary, 2010). Michael Ignatieff has proposed a tripartite spatial scheme for the classification of emergency measures: (a) national emergencies, which result in measures affecting an entire country, such as a nationwide state of martial law; (b) “territorial” emergencies, which are “confined to special zones of the country,” such as zones of occupation and areas with active combat operations; and (c) “selective” emergencies, which subject particular individuals, such as suspected terrorists, to exceptional forms of state power and diminished or suspended privileges, immunities, and rights (Ignatieff, 2004, pp. 25–26). Scholars and journalists have also drawn attention to the ways that the burdens of some emergency measures tend to fall most heavily or exclusively on members of certain ethnic, religious, or national groups—particularly those who have been perceived as Muslims or of Middle Eastern descent during the War on Terror—but far less attention has been given to class as a factor in making individuals subject to or exempt from extraordinary exercises of power. For example, despite government claims that enhanced surveillance and security measures at airports are critical to preventing another 9/11-style attack, those who can pass a background check and afford the annual fee are eligible for a pass that grants them expedited and reduced screening at airport security check-ins (Honig, 2009, p. 155). Conclusion The trend toward the legalization of extraordinary powers in times of emergency, along with the increasing normalization of emergency powers in ordinary circumstances, reveals challenges to the rule of law that are arguably every bit as worrisome as the lawless exercises of power that have alarmed thinkers for millennia. Each involves concentrations and expansions of discretionary power with the potential for abuse. But unlike those extralegal exercises of power that have always represented the antithesis of the rule of law, lawful exercises of emergency power may be more insidious because their inconsistency with rule of law values is more difficult to identify—and therefore more difficult to resist. It remains an open question whether respect for the rule of law over the long run is better preserved by an open acknowledgment that temporary departures from ordinary legal rules are sometimes necessary in times of emergency or by modifications to the form and content of existing laws. Each approach poses risks to the rule of law. However, it is worth recalling that law does not simply enable or constrain power. As Aristotle pointed out over two millennia ago, the efficacy of law depends almost as much on its didactic function as it does on its regulative functions: “Law trains the holders of office expressly in its own spirit, and then sets them to decide and settle those residuary issues which it cannot regulate” (Aristotle, 1995, p. 128). 
If existing laws must give way, somehow or other, in times of emergency, then it becomes all the more important that those who make and administer the law remain committed to its best ideals. References • Ackerman, B. (2006). Before the next attack: Preserving civil liberties in an age of terrorism. New Haven, CT: Yale University Press. • Adams, J. (2000). Thoughts on government: Applicable to the present state of the American colonies. In C. B. Thompson (Ed.), The revolutionary writings of John Adams. Indianapolis, IN: Liberty Fund. • Alexander, G. J. (1984). The illusory protection of human rights by national courts during periods of emergency. Human Rights Law Journal, 5, 1–65. • Aristotle. (1995). Politics (Trans. Ernest Barker). Oxford, U.K.: Oxford University Press. • Agamben, G. (2005). State of exception (Trans. K. Attell). Chicago, IL: University of Chicago Press. • Balasubramaniam, R. R. (2008). Indefinite detention: Rule by law or rule of law? In V. V. Ramraj (Ed.), Emergencies and the limits of legality (pp. 118–140). Cambridge: Cambridge University Press. • Bingham, T. (2011). The rule of law. New York, NY: Penguin Books. • Bruff, H. H. (2009). Bad advice: Bush’s lawyers in the war on terror. Lawrence: University Press of Kansas. • Campbell, T. (2008). Emergency strategies for prescriptive legal positivists: Anti-terrorist law and legal theory. In V. V. Ramraj (Ed.), Emergencies and the limits of legality (pp. 201–228). Cambridge, U.K.: Cambridge University Press. • Chowdhury, S. R. (1989). Rule of law in a state of emergency: The Paris minimum standards of human rights norms in a state of emergency. New York, NY: St. Martin’s Press. • Corwin, E. S. (1957). The president: Office and powers, 1787–1957 (4th rev. ed.). New York, NY: New York University Press. • Craig, P. P. (1997). Formal and substantive conceptions of the rule of law: An analytical framework. Public Law, 21, 467–487. • Curley, T. M. (2015). Models of emergency statebuilding in the United States. Perspectives on Politics, 13(3), 537–613. • Dauber, M. L. (2013). The sympathetic state: Disaster relief and the origins of the American welfare state. Chicago, IL: University of Chicago Press. • Dillbary, J. S. (2010). Emergencies, body parts and price gouging. In Sarat, A. (Ed.), Sovereignty, emergency, legality (pp. 165–181). New York, NY: Cambridge University Press. • Dyzenhaus, D. (2006), The constitution of law: Legality in a time of emergency. Cambridge, U.K.: Cambridge University Press. • Dyzenhaus, D. (2008). The Compulsion of Legality. In V. V. Ramraj (Ed.), Emergencies and the limits of legality (pp. 33–59). Cambridge, U.K.: Cambridge University Press. • Fatovic, C. (2009). Outside the law: Emergency and executive power. Baltimore, MD: Johns Hopkins University Press. • Fombad, C. M. (2004). Cameroon’s emergency powers: A recipe for (un)constitutional dictatorship? Journal of African Law, 48(1), 62–81. • Fuller, L. (1969). The morality of law (Rev. ed.). New Haven, CT: Yale University Press. • Goldsmith, J. (2007). The terror presidency: Law and judgment inside the Bush administration. New York, NY: W. W. Norton. • Goldsmith, J. (2012). Power and constraint: The accountable presidency after 9/11. New York, NY: W. W. Norton. • Gross, O., & Aoláin, F. N. (2006). Law in times of crisis: Emergency powers in theory and practice. Cambridge, U.K.: Cambridge University Press. • Hardt, M., & Negri, A. (2004). Multitude: War and democracy in the age of empire. New York, NY: Penguin. • Hart, H. L. A. (1961). 
The concept of law. Oxford, U.K.: Clarendon Press. • Hayek, F. A. (1944). The road to serfdom. Chicago, IL: University of Chicago Press. • Hayek, F. A. (1960). The constitution of liberty. Chicago, IL: Henry Regnery Company. • Herszenhorn, D. M. (2008, September 20). Administration is seeking$700 billion for Wall Street. New York Times. • Holder, P., & Martin, B. (2009). Climate crisis? The politics of emergency framing. Economic and Political Weekly, 44(36), 53–60. • Honig, B. (2009). Emergency politics: Paradox, law, democracy. Princeton, NJ: Princeton University Press. • Hussain, N. (2003). The jurisprudence of emergency: Colonialism and the rule of law. Ann Arbor: University of Michigan Press. • Ignatieff, M. (2004). The lesser evil: Political ethics in an age of terror. Princeton, NJ: Princeton University Press. • Jefferson, T. (1984). Letter to John B. Colvin [Sept. 20, 1810]. In M. D. Peterson (Ed.), Jefferson: Writings. New York, NY: The Library of America. • Klein, N. (2007). The shock doctrine: The rise of disaster capitalism. New York, NY: Picador. • Lazar, N. C. (2009). States of emergency in liberal democracies. Cambridge, U.K.: Cambridge University Press. • Lincoln, A. (1989). Message to Congress in special session [July 4, 1861]. In D. E. Fehrenbacher (Ed.), Lincoln: Speeches and writings, 1859-1865. New York, NY: The Library of America. • Locke, J. (1988). Two treatises of government (ed. Peter Laslett). Cambridge, U.K.: Cambridge University Press. • Neocleous, M. (2006). The problem with normality: Taking exception to “permanent emergency.” Alternatives, 31, 191–213. • Nicolet, C. (2004). Dictatorship in Rome. In P. Baehr & M. Richter (Eds.), Dictatorship in history and theory. Cambridge, U.K.: Cambridge University Press. • Olson, R.S. (2000). Toward a politics of disaster: Losses, values, agenda, and blame. International Journal of Mass Emergencies and Disasters, 18(2), 265–287. • Ostwald, M. (1986). From popular sovereignty to the sovereignty of law: Law, society, and politics in fifth-century Athens. Berkeley: University of California Press. • Posner, E. A., & Vermeule, A. (2011). The executive unbound: After the Madisonian republic. Oxford, U.K.: Oxford University Press. • Raz, J. (1979). The authority of law: Essays on law and morality. Oxford, U.K.: Clarendon Press. • Relyea, H. C. (2007, August 30). CRS: National emergency powers. CRS Report No. 98–505. Washington, DC: U.S. Congressional Research Service. • Reza, S. (2007). Endless emergency: The case of Egypt. New Criminal Law Review: An International and Interdisciplinary Journal, 10(4), 532–553. • Rossiter, C. (2002). Constitutional dictatorship: Crisis government in the modern democracies. New Brunswick, NJ: Transaction. • Sarat, A. (2010). Introduction: Toward new conceptions of the relationship of law and sovereignty under conditions of emergency. In A. Sarat (Ed.), Sovereignty, emergency, legality. Cambridge, U.K.: Cambridge University Press. • Scheppele, K. L. (2006). Small emergencies. Georgia Law Review, 40, 835–862. • Schmitt, C. (2005). Political theology: Four chapters on the concept of sovereignty, trans. George Schwab. Chicago, IL: University of Chicago Press. • Schmitt, C. (2014). Dictatorship: From the origin of the modern concept of sovereignty to proletarian class struggle. Cambridge, U.K.: Polity Press. • Skaaning, S.-E. (2010). Measuring the rule of law. Political Research Quarterly, 63(2), 449–460. • Tamanaha, B. Z. (2004). On the rule of law: History, politics, theory. 
Cambridge, U.K.: Cambridge University Press. • Text of draft proposal for bailout plan. (2008, September 20). New York Times. • Unger, R. M. (1976). Law in modern society: Toward a criticism of social theory. New York, NY: The Free Press. • Yoo, J. (2003). The powers of war and peace: The constitution and foreign affairs after 9/11. Chicago, IL: University of Chicago Press. • Yoo, J. (2006). War by other means: An insider’s account of the war on terror. New York, NY: Atlantic Monthly Press. Notes • 1. The ideal of equality under the law can be traced back to the Greek notion of isonomia, which initially referred to the political equality of magistrates but eventually came to express the political and legal equality of citizens. On the development of this concept in ancient Athens, see Ostwald (1986). On the influence of this idea in the development of rule of law thinking, see Hayek (1960), pp. 162–175. • 2. However, as Michele Dauber notes, the use of a “disaster narrative” to frame those suffering from hunger and poverty during the Great Depression as blameless victims of circumstances beyond their control enabled President Franklin D. Roosevelt to argue successfully for an expanded federal role in the economy that exceeded what many believed was compatible with the rule of law and constitutional government (Dauber, 2013). • 3. On the potential risks involved in labeling a situation as an emergency, see Holder & Martin (2009), pp. 53–60. • 4. War is sometimes treated as a distinct category unto itself in theory and practice. • 5. The distinction between the “natural” and the “human-made,” or “technologically driven,” is a slippery one. Experts on disaster management and risk mitigation note that many so-called natural disasters become emergencies only as a result of human action or inaction (see Olson, 2000, pp. 265–287). • 6. According to Tom Campbell, these developments involve “rule-change not rule-abandonment” (Campbell, 2008, p. 210). • 7. The Constitution of Cameroon provides a striking example of this trend: “In the event of a serious threat to the nation’s territorial integrity or to its existence, its independence or institutions, the President of the Republic may declare a state of siege by decree and take any such measures as he may deem necessary” (Cameroon Constitution, Article 9, Section 2, quoted in Fombad, 2004, pp. 62–81; emphasis added). • 8. One example is the Disaster Relief and Emergency Assistance Act of 1974, which grants the president the power to make a unilateral declaration of emergency, to determine when and where public funds will be spent, and to override procedures established in other laws regarding the administration of funds, and also establishes procedures enabling the president to violate or suspend other laws. The National Emergencies Act of 1976 establishes procedures regulating the activation and use of emergency powers authorized in other statutes, including the requirement that the president make a formal declaration of emergency and cite the specific statutory authority that will be used. The International Emergency Economic Powers Act of 1977 authorizes the president to issue commercial regulations after a declaration of an emergency originating “in whole or substantial part outside the United States” (International Emergency Economic Powers Act of 1977, 50 U.S.C. §1701(a)). • 9. 
This does not even include states of exception declared in response to economic underdevelopment, which often result in measures that fail to produce promised improvements in economic development, but do deliver particularized benefits that “accrue to privileged groups rather than to the bulk of the population” (Chowdhury, 1989, p. 21).
http://math.stackexchange.com/questions/132023/limits-in-integration
# Limits in integration I'm having trouble with considering this limit: $\lim_{c\rightarrow0}\int_{1}^{\infty}\frac{c}{x}dx$ It is almost like writing $\lim_{c\rightarrow0}(c\infty)$, but maybe not quite the same. Does the limit $\lim_{c\rightarrow0}\int_{1}^{\infty}\frac{c}{x}dx$ exist? Is it 0? It would appear to be zero... But if we use the epsilon-delta definition of limit then we fail... Should I be using some "rule" like L'Hopital's rule? If so, I don't know which one to use... Can we bring c outside the integrand? If so, why? If not, why? $\lim_{c\rightarrow0}(\lim_{k\rightarrow\infty}\int_{1}^{k}\frac{c}{x}dx)$ Should I be considering this one? If so, I don't know how to progress... I assume you can't just swap the limits because we have to be "careful" here as opposed to "usual". Maybe even it doesn't make sense to ask for the first limit. Help please? - $\int_1^\infty\frac{c}{x}dx$ doesn't exist for any $c\ne0$ so the question is nonsensical. – anon Apr 15 '12 at 10:34 I just spoke to someone. There are notational issues here... some people say the integral "diverges". Some people say the integral "does not exist". Others say "It converges to infinity". They all mean the same thing. My limit doesn't exist because it is not a limit of real numbers (or you could think about it in terms of sequences... same stuff happening). Apparently if we consider the limit on the projected real line then the limit makes sense and would be at the point infinity. – Adam Rubinson Apr 15 '12 at 11:01 If you write that out in more detail, it will be an acceptable answer (you can answer yourself). In particular, note that $\int_1^y \frac{c}{x}dx \to \infty$ if $y \to \infty$, so if you calculate the integral first, the limit would, indeed, be $\infty$. On the other hand, $\int_1^\infty \lim_{c \to 0} \frac{c}{x} dx = 0$, which is why this is interesting. – Johannes Kloos Apr 15 '12 at 12:02 I agree with most of what you say. Except, strictly speaking, the "limit of integrals" you mention would not be infinity. No such limit exists, namely because it does not make sense to talk about such limit (in R). This is because you would have to consider the sequence (infinity, infinity,...), which is NOT a sequence in R tending to infinity. But in the extended real line it does make sense to talk about such a sequence, so the limit does exist in that space, and is infinity in that space. – Adam Rubinson Apr 15 '12 at 13:21
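For what it's worth, the point made in the comments can be written out in one line (a sketch, using only the elementary antiderivative of $1/x$ and the fact that constants pull out of the integral):

$$\int_{1}^{k}\frac{c}{x}\,dx \;=\; c\ln k \;\longrightarrow\; \begin{cases} +\infty, & c>0 \\ 0, & c=0 \\ -\infty, & c<0 \end{cases} \qquad (k\to\infty)$$

So for every fixed $c\neq0$ the improper integral diverges, and the outer limit over $c$ is not a limit of real numbers (the comments note it can be made sense of on the extended or projective real line); by contrast, $\int_{1}^{\infty}\lim_{c\rightarrow0}\frac{c}{x}\,dx = 0$, which is exactly why the order of the limit and the integral matters here.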
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-10-radical-expressions-and-equations-10-2-simplifying-radicals-got-it-page-621/3
## Algebra 1: Common Core (15th Edition)

Published by Prentice Hall

# Chapter 10 - Radical Expressions and Equations - 10-2 Simplifying Radicals - Got It? - Page 621: 3

#### Answer

a. $18\sqrt{3}$
b. $3a^{2}\sqrt{2}$
c. $210x^{3}$
d. Yes, we can simplify to get $42t\sqrt{2t}$

#### Work Step by Step

a) We can simplify as follows:
= $3\sqrt{6}*\sqrt{18}$
= $3*\sqrt{6}*\sqrt{6}*\sqrt{3}$
= $3*6*\sqrt{3}$
= $18\sqrt{3}$

b) We can simplify as follows:
= $\sqrt{2a}*\sqrt{9a^{3}}$
= $\sqrt{2a}*\sqrt{9}*\sqrt{a^{3}}$
= $\sqrt{2}*\sqrt{a}*3*\sqrt{a^{2}}*\sqrt{a}$
= $\sqrt{2}*\sqrt{a}*3*a*\sqrt{a}$
= $\sqrt{2}*a*3*a$ (since $\sqrt{a}*\sqrt{a} = a$)
= $3a^{2}\sqrt{2}$.

c) We can simplify as follows:
= $7\sqrt{5x}*3\sqrt{20x^{5}}$
= $7\sqrt{5}*\sqrt{x}*3\sqrt{5}*\sqrt{4}*\sqrt{x^{4}}*\sqrt{x}$
= $7\sqrt{5}*\sqrt{x}*3\sqrt{5}*2*x^{2}*\sqrt{x}$
= $7*3*2*x^{2}*x*5$ (since $\sqrt{5}*\sqrt{5} = 5$ and $\sqrt{x}*\sqrt{x} = x$)
= $210x^{3}$

d) We can simplify as follows:
= $2\sqrt{7t}*3\sqrt{14t^{2}}$
= $2\sqrt{7}*\sqrt{t}*3\sqrt{7}*\sqrt{2}*\sqrt{t^{2}}$
= $2*7*\sqrt{t}*3*\sqrt{2}*t$
= $42t\sqrt{2t}$.
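For readers who prefer a one-step route, each product can also be combined under a single radical first, using $\sqrt{m}\cdot\sqrt{n}=\sqrt{mn}$, and then reduced by factoring out the largest perfect square, e.g. for parts (a) and (b):

$3\sqrt{6}\cdot\sqrt{18}=3\sqrt{108}=3\sqrt{36\cdot3}=18\sqrt{3}$ and $\sqrt{2a}\cdot\sqrt{9a^{3}}=\sqrt{18a^{4}}=\sqrt{9a^{4}\cdot2}=3a^{2}\sqrt{2}$.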
https://socratic.org/questions/how-do-you-find-the-antiderivative-of-e-2x-3-e-2x
# How do you find the antiderivative of e^(2x) / (3+e^(2x))?

Jun 21, 2016

$\int \frac{e^{2x}}{3+e^{2x}} \, dx = \ln \sqrt{3+e^{2x}} + C$

#### Explanation:

The easiest way is always to recognise the pattern. The generalisation is

$\frac{d}{dx} \ln(f(x)) = \frac{f'(x)}{f(x)}$

so if we consider

$\frac{d}{dx} \ln(3+e^{2x}) = \frac{1}{3+e^{2x}} \cdot 2e^{2x}$

then we're pretty much done, because

$\frac{d}{dx} \ln(3+e^{2x}) = \frac{2e^{2x}}{3+e^{2x}}$

We actually want

$\frac{1}{2} \cdot \frac{d}{dx} \ln(3+e^{2x}) = \frac{d}{dx}\left[\frac{1}{2}\ln(3+e^{2x})\right]$

and, moving the constant inside as an exponent of the logarithm's argument,

$= \frac{d}{dx}\left[\ln\sqrt{3+e^{2x}}\right]$

Thus

$\int \frac{e^{2x}}{3+e^{2x}} \, dx = \ln \sqrt{3+e^{2x}} + C$

You can plough through a whole series of subs, but seeing the pattern is a real life saver.
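For comparison, the same antiderivative also falls out of a single substitution (a quick sketch: let $u = 3+e^{2x}$, so $du = 2e^{2x}\,dx$):

$\int \frac{e^{2x}}{3+e^{2x}}\,dx = \frac{1}{2}\int\frac{du}{u} = \frac{1}{2}\ln|u|+C = \frac{1}{2}\ln(3+e^{2x})+C = \ln\sqrt{3+e^{2x}}+C,$

where the absolute value can be dropped because $3+e^{2x}>0$ for every $x$.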
https://self-evident.org/?p=505
# Fun with uniform distributions

(Update: If there are too many numbers and equations below, Mike at Rortybomb has created a fantastic post illustrating the principles graphically. And he even uses lognormal distributions like a real financial engineer.)

In my earlier post on the “Geithner Put”, some people objected to my model as unrealistic. Which is true. So, using ideas from Andrew Foland (via private mail), I decided to grind out the math for a uniform distribution. Yes, a Gaussian might make more sense, but I doubt the answers would be all that different. And besides, that might not lead to a nice closed-form solution.

Anyway, here is Andrew’s model. Assume lots of identical assets. Assume each has an unknown value uniformly and independently distributed between m-a and m+a. In other words, m is the average value and 2a is the range of possible values, and everything in the range is equally likely. Let k be the “leverage factor”; i.e., the fraction of the purchase price that consists of equity. So for 6:1 leverage, k is 1/7. Finally, let y represent the price the investor pays, and denote by p the average profit per asset.

Thanks to the wonderful site QuickMath, I can share my formula and you can play with it. For example, if the leverage is 6:1 (set k=1/7), and the possible values are uniformly distributed from $0 to $100 (set m=50, a=50), and the investor pays $60 (set y=60), then the average profit will be $3.22 per asset (solve for p). So in this example, admittedly a very wide range, the fund puts in 1/7 * $60 = $8.57 and thus earns a 37.6% return… While still overpaying by 20%.

Another example: Same leverage factor (k=1/7), but say the assets are worth between $35 and $65 (m=50, a=15), and say the fund simply pays what they are really worth (y=50). By paying only what the assets are really worth, the fund is entitled to no returns at all… But in fact it will earn $1.03. That is a double-digit return on a 1/7 * $50 = $7 investment that should have broken even. Where did the extra return come from? From the FDIC, who ate the cost of the losing bets. I can see why Bill Gross is getting excited.

If you like, you can also turn it around: Assume leverage of 6:1 (set k=1/7) and assume the range of possible values is $10 to $30 (set m=20, a=10). Then the investor can pay up to $21.94 and still break even (set p=0 and solve for y). In this case the equation has two solutions, but only one of them is actually less than $30, so that is the answer. (You can try setting k=0 or k=a/m to see that the formula passes the “sniff test”.) Or use the same assumptions, but instead of setting p=0, set it to $0.50 and then solve for y. That tells us the investor can bid up to $21.62 and still receive a $0.50 return (on a 1/7 * $21 = $3 investment). Adjust the numbers yourself and click “Solve” if you want to experiment.

The investments and profits are split with the Treasury, but this does not affect the returns and so it does not enter into the formula. The big formula at the top is the key, and if someone out there could check my math, I would be much obliged. All I did was calculate the average profit by integrating it from m-a to m+a and dividing by 2a. Because the loan provides a “floor” of k*y for the loss, the profit function is piecewise linear with two pieces, which is why the formula is the weighted sum of two parts.
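Since the post explicitly asks for a check of the math, here is a minimal sketch (my own code, not the QuickMath formula) that computes the average profit both from the piecewise-linear expectation described above and by straight Monte Carlo; the names m, a, k, y match the post, and the only modeling assumption is the one stated there, namely that the non-recourse loan caps the investor's loss at the equity k*y.

```python
import random

def avg_profit_closed_form(m, a, k, y):
    """Expected profit per asset for V ~ Uniform(m-a, m+a) with a non-recourse
    loan of (1-k)*y, so the investor's loss is capped at the equity k*y.
    Valid when the floor (1-k)*y lies inside [m-a, m+a]."""
    f = (1 - k) * y                                   # loan amount = "floor" on the payoff
    e_max = (f - (m - a)) / (2 * a) * f               # contribution from values below the floor
    e_max += ((m + a) ** 2 - f ** 2) / (4 * a)        # contribution from values above the floor
    return e_max - y                                  # E[max(V, f)] minus the price paid

def avg_profit_monte_carlo(m, a, k, y, n=500_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.uniform(m - a, m + a)
        total += max(v - y, -k * y)                   # walk away if V is below the loan balance
    return total / n

if __name__ == "__main__":
    # First example above: values in [0, 100], 6:1 leverage, bid of $60
    print(round(avg_profit_closed_form(50, 50, 1/7, 60), 2))   # ~3.22
    print(round(avg_profit_monte_carlo(50, 50, 1/7, 60), 2))   # ~3.22
    # Second example: values in [35, 65], bid equal to the true average of $50
    print(round(avg_profit_closed_form(50, 15, 1/7, 50), 2))   # ~1.03
```

Both routes reproduce the $3.22 and $1.03 figures quoted above, which is at least a partial answer to the "check my math" request.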
Note that the formula is only valid for (1-k)*m ≥ m-a — or, equivalently, k ≤ a/m — that is, when the “floor” provided by the non-recourse loan is at least the minimum possible value of the asset.  This will be true for reasonably high leverage and wide ranges, which is what we are interested in here. I will have more to say about the qualititative features of this model later.  But at first glance, it does appear that those claiming my model was simply too extreme may have a point. Update Although once you take the leverage into account, it sure looks to me like the private equity folks are going to make out pretty well at FDIC’s expense. ### 5 comments to Fun with uniform distributions • I like it Nemo. However, isn’t the loan going to cost something like 1% or 2% and, if the assets are fairly priced (at expected value), this subsidy alone will be sufficient to jack prices? Assuming that the bank started out with any subordination at origination and has taken some decent sized haircuts, the leverage plus heavily subsidized interest rate would make this a winning deal. Vulture investors want 20% plus returns in this environment. To get it on cash flows with any significant duration and no financing, this requires very low prices. With leverage @ 2%, the same vultures could get the 20% and pay much higher prices. • Dimon applying your model it seems to be right, that IF investors pay the “real” price they will earn a nice profit, but the question is how do they know what the real price is? so isn’t there an incentive to underpay? even if they don’t underpay and pay the “real” price, there is still the question will the banks sell their assets at the “real” price or will it make them insolvent(if investors underpay it would become an even larger problem)? • grr I like it too. I echo CapVandal’s comments. • One other comment. Obviously the investors are getting a subsidy. However, it seems to me that the Treasury is pari passu with the investors for the “equity” piece. I think the Treasury needs to offer up their share to the public in small quantities. There is excess back room capacity so this could be easily and cheaply done. Especially since the Treasury sets the rules. Warn people in big red letters that it is risky, but let people buy if they want. This would eliminate some of the accusations of big subsidies to speculators. Make it the People’s hedge fund. I would personally buy a little chunk of Maiden Lane III. Call me a sucker, but the Treasury and FRBNY, etc. make money on most deals and offset a big chunk of the few big losers. • […] program was correctly identified as the public writing a “put option” on those debts. As such, the public insurance would cause the hedge funds to overbid for the […]
https://mathoverflow.net/questions/289994/implicit-function-theorem-metric-spaces
# Implicit function theorem in metric spaces

Are there versions of the implicit function theorem in spaces that lack a natural linear structure, e.g. metric spaces? A quick Google search has found me no results. I am specifically interested in applying such results to the space of probability measures on some Polish space equipped with the $p$-Wasserstein metric.

• My guess is: since the IFT may be viewed as an application of fixed-point theory, and fixed-point theory has been studied in general metric spaces, there should be some version of this possible in your settings. – Suvrit Jan 5 '18 at 15:42
• @suvrit - I agree with your comment. It's just that one would need some object in a general metric space to fulfil the role played by the Fréchet derivative in a Banach space – almosteverywhere Jan 6 '18 at 4:24
• Have a look at the book "springer.com/us/book/9783764387211" for doing "gradients" in metric spaces; Wasserstein spaces etc. are a special focus too. – Suvrit Jan 6 '18 at 15:23
• Thanks for the reference but the link seems to be broken. Can you give me the name of the book? – almosteverywhere Jan 6 '18 at 15:32
• The link above has a spurious quotation mark " -- if you remove it the link should work (name: Gradient flows in metric spaces ...) – Suvrit Jan 6 '18 at 17:25
https://www.gradesaver.com/textbooks/math/calculus/university-calculus-early-transcendentals-3rd-edition/chapter-9-section-9-3-the-integral-test-exercises-page-504/18
## University Calculus: Early Transcendentals (3rd Edition)

We have $\Sigma_{n=1}^\infty \dfrac{-8}{n}=-8\Sigma_{n=1}^\infty \dfrac{1}{n}$. The series $\Sigma_{n=1}^\infty \dfrac{1}{n}$ is the harmonic series, whose partial sums are not bounded, so it diverges; a nonzero constant multiple of a divergent series also diverges. Hence, the given series diverges.
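For completeness, the claim that the harmonic partial sums are unbounded follows from the usual grouping argument (a one-line sketch): $\Sigma_{n=1}^{2^{k}} \dfrac{1}{n} \geq 1+\dfrac{k}{2}$, because each block $\dfrac{1}{2^{j-1}+1}+\cdots+\dfrac{1}{2^{j}}$ has $2^{j-1}$ terms, each at least $\dfrac{1}{2^{j}}$, and so contributes at least $\dfrac{1}{2}$.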
https://projecteuclid.org/euclid.die/1356019598
## Differential and Integral Equations ### Gradient estimates for the heat equation in the exterior domains under the Neumann boundary condition Kazuhiro Ishige #### Abstract We consider the Cauchy-Neumann problem for the heat equation in the exterior domain $\Omega$ of a compact set in ${\bf R}^N$ ($N\ge 2$). In this paper we give an estimate of the $L^\infty$-norm of the gradient of the solutions. #### Article information Source Differential Integral Equations, Volume 22, Number 5/6 (2009), 401-410. Dates First available in Project Euclid: 20 December 2012
https://www.zbmath.org/authors/?q=ai%3Adunbar.steven-r
# zbMATH — the first resource for mathematics ## Dunbar, Steven R. Compute Distance To: Author ID: dunbar.steven-r Published as: Dunbar, S. R.; Dunbar, Steven R. Documents Indexed: 24 Publications since 1983, including 1 Book all top 5 #### Co-Authors 8 single-authored 5 Fabrykowski, Jacek 2 Douglass, Rod W. 2 Feng, Zuming 2 Othmer, Hans G. 2 Rousseau, Cecil C. 1 Alt, Wolfgang 1 Bosman, Reinier J. C. 1 Camp, W. J. 1 Dawkins, Paul T. 1 Gelca, Răzvan 1 Le, Ian 1 Logan, John David 1 Nooij, Sander E. M. 1 Rybakowski, Krzysztof P. 1 Schmitt, Klaus all top 5 #### Serials 10 Mathematics Magazine 2 Journal of Mathematical Biology 2 SIAM Journal on Applied Mathematics 1 IMA Journal of Applied Mathematics 1 Journal of Computational Physics 1 Journal of Mathematical Analysis and Applications 1 Journal of Differential Equations 1 Transactions of the American Mathematical Society 1 SIAM Journal on Mathematical Analysis 1 The College Mathematics Journal 1 AMS/MAA Textbooks all top 5 #### Fields 9 Mathematics education (97-XX) 8 Partial differential equations (35-XX) 7 Biology and other natural sciences (92-XX) 5 Probability theory and stochastic processes (60-XX) 4 Ordinary differential equations (34-XX) 3 Dynamical systems and ergodic theory (37-XX) 1 Measure and integration (28-XX) 1 Convex and discrete geometry (52-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Numerical analysis (65-XX) 1 Mechanics of particles and systems (70-XX) 1 Fluid mechanics (76-XX) 1 Geophysics (86-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) #### Citations contained in zbMATH Open 15 Publications have been cited 520 times in 416 Documents Cited by Year Models of dispersal in biological systems. Zbl 0713.92018 Othmer, Hans G.; Dunbar, S. R.; Alt, W. 1988 Travelling wave solutions of diffusive Lotka-Volterra equations. Zbl 0509.92024 Dunbar, Steven R. 1983 Traveling wave solutions of diffusive Lotka-Volterra equations: A heteroclinic connection in $$R^ 4$$. Zbl 0556.35078 Dunbar, Steven R. 1984 Traveling waves in diffusive predator-prey equations: Periodic orbits and point-to-periodic heteroclinic orbits. Zbl 0617.92020 Dunbar, Steven R. 1986 Persistence in models of predator-prey populations with diffusion. Zbl 0605.34044 Dunbar, S. R.; Rybakowski, K. P.; Schmitt, K. 1986 On a nonlinear hyperbolic equation describing transmission lines, cell movement, and branching random walks. Zbl 0592.92003 Dunbar, Steven R.; Othmer, Hans G. 1986 Geometric analysis of a nonlinear boundary value problem from physical oceanography. Zbl 0770.34021 Dunbar, Steven R. 1993 The origin and nature of spurious eigenvalues in the spectral tau method. Zbl 0924.65077 Dawkins, Paul T.; Dunbar, Steven R.; Douglass, Rod W. 1998 A branching random evolution and a nonlinear hyperbolic equation. Zbl 0664.60082 Dunbar, Steven R. 1988 The average distance between points in geometric figures. Zbl 0995.52500 Dunbar, Steven R. 1997 Fabrykowski, Jacek; Dunbar, Steven R. 2012 37th United States of America Mathematical Olympiad. Zbl 1223.97004 Rousseau, Cecil; Dunbar, Steven R. 2009 The track of a bicycle back tire. Zbl 1153.70306 Dunbar, Steven R.; Bosman, Reinier J. C.; Nooij, Sander E. M. 2001 Travelling waves in model reacting flows with reversible kinetics. Zbl 0759.76077 Logan, J. David; Dunbar, Steven R. 1992 The divider dimension of the graph of a function. Zbl 0756.28004 Dunbar, Steven R.; Douglass, Rod W.; Camp, W. J. 1992 Fabrykowski, Jacek; Dunbar, Steven R. 
2012 37th United States of America Mathematical Olympiad. Zbl 1223.97004 Rousseau, Cecil; Dunbar, Steven R. 2009 The track of a bicycle back tire. Zbl 1153.70306 Dunbar, Steven R.; Bosman, Reinier J. C.; Nooij, Sander E. M. 2001 The origin and nature of spurious eigenvalues in the spectral tau method. Zbl 0924.65077 Dawkins, Paul T.; Dunbar, Steven R.; Douglass, Rod W. 1998 The average distance between points in geometric figures. Zbl 0995.52500 Dunbar, Steven R. 1997 Geometric analysis of a nonlinear boundary value problem from physical oceanography. Zbl 0770.34021 Dunbar, Steven R. 1993 Travelling waves in model reacting flows with reversible kinetics. Zbl 0759.76077 Logan, J. David; Dunbar, Steven R. 1992 The divider dimension of the graph of a function. Zbl 0756.28004 Dunbar, Steven R.; Douglass, Rod W.; Camp, W. J. 1992 Models of dispersal in biological systems. Zbl 0713.92018 Othmer, Hans G.; Dunbar, S. R.; Alt, W. 1988 A branching random evolution and a nonlinear hyperbolic equation. Zbl 0664.60082 Dunbar, Steven R. 1988 Traveling waves in diffusive predator-prey equations: Periodic orbits and point-to-periodic heteroclinic orbits. Zbl 0617.92020 Dunbar, Steven R. 1986 Persistence in models of predator-prey populations with diffusion. Zbl 0605.34044 Dunbar, S. R.; Rybakowski, K. P.; Schmitt, K. 1986 On a nonlinear hyperbolic equation describing transmission lines, cell movement, and branching random walks. Zbl 0592.92003 Dunbar, Steven R.; Othmer, Hans G. 1986 Traveling wave solutions of diffusive Lotka-Volterra equations: A heteroclinic connection in $$R^ 4$$. Zbl 0556.35078 Dunbar, Steven R. 1984 Travelling wave solutions of diffusive Lotka-Volterra equations. Zbl 0509.92024 Dunbar, Steven R. 1983 all top 5 #### Cited by 639 Authors 17 Hillen, Thomas 14 Bellomo, Nicola 14 Painter, Kevin J. 10 Perthame, Benoît 9 Bellouquid, Abdelghani 9 Sherratt, Jonathan A. 8 Calvez, Vincent 8 Petrovskii, Sergei V. 7 Al-Said, Eisa A. 7 Noor, Muhammad Aslam 7 Zhang, Tianran 6 Bianca, Carlo 6 de Vries, Gerda 6 Li, Wan-Tong 6 Lin, Guo 6 Othmer, Hans G. 6 Ruan, Shigui 6 Soler, Juan S. 6 Vauchelet, Nicolas 5 Chaplain, Mark A. J. 5 Erban, Radek 5 Schmeiser, Christian 5 Xue, Chuan 5 Zhao, Xiao-Qiang 4 Edelstein-Keshet, Leah 4 Huang, Wenzhang 4 Hutson, V. C. L. 4 Lachowicz, Mirosław 4 Lewis, Mark Alun 4 Maini, Philip Kumar 4 Malchow, Horst 4 Nieto, Juanjo 4 Pan, Shuxia 4 Shen, Wenxian 4 Simpson, Matthew J. 4 Surulescu, Christina 4 Vickers, Glenn T. 4 Wang, Wendi 3 Baker, Ruth Elizabeth 3 Barbera, Elvira 3 Campos, Daniel G. 3 Chalub, Fabio A. C. C. 3 Chouhad, Nadia 3 Currò, Carmela 3 Ducrot, Arnaud 3 Feng, Wei 3 Feng, Zhaosheng 3 Gerisch, Alf 3 Hosono, Yuzo 3 Hou, Xiaojie 3 Hutson, Vivian 3 Loy, Nadia 3 Lu, Xin 3 Méndez, Vicenç 3 Preziosi, Luigi 3 Shigesada, Nanako 3 Tranquillo, Robert T. 3 Valenti, Giovanna 3 Wang, Kaifa 3 Wang, Zhi Cheng 3 Wang, Zhian 3 Weng, Peixuan 3 Wu, Chufen 3 Yang, Ting-Hui 3 Yasuda, Shugo 3 Yuan, Rong 2 Ai, Shangbing 2 Almanasreh, Hasan 2 Alt, Wolfgang 2 Bai, Xueli 2 Banerjee, Malay 2 Bao, Xiongxiong 2 Bellouquid, Abdel 2 Bhattacharyya, Rakhi 2 Bouin, Emeric 2 Bournaveas, Nikolaos 2 Bressloff, Paul C. 2 Burini, Diletta 2 Buttenschön, Andreas 2 Chertock, Alina E. 2 Cosner, Chris 2 Czapla, Dawid 2 Dagbovie, Ayawoa S. 2 Dawes, Adriana T. 2 Deutsch, Andreas 2 Ding, Wei 2 Eftimie, Radu 2 Emelianenko, Maria 2 Franz, Benjamin 2 Giletti, Thomas 2 Gosse, Laurent 2 Grünbaum, Daniel 2 Guo, Jong-Shenq 2 Ha, Seung-Yeal 2 Hadeler, Karl-Peter 2 Hill, Nicholas A. 
2 Hsu, Cheng-Hsiung 2 Jabin, Pierre-Emmanuel 2 Jin, Yu 2 Kang, Kyungkeun ...and 539 more Authors all top 5 #### Cited in 114 Serials 44 Journal of Mathematical Biology 23 Bulletin of Mathematical Biology 22 Journal of Differential Equations 22 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 13 Journal of Mathematical Analysis and Applications 12 Journal of Theoretical Biology 11 Mathematical Biosciences 10 Journal of Dynamics and Differential Equations 9 Applied Mathematics and Computation 9 Nonlinear Analysis. Real World Applications 9 Discrete and Continuous Dynamical Systems. Series B 9 Kinetic and Related Models 8 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 8 Mathematical and Computer Modelling 7 SIAM Journal on Applied Mathematics 7 Communications on Pure and Applied Analysis 6 Theoretical Population Biology 6 Physica D 6 Nonlinear Analysis. Theory, Methods & Applications 5 Computers & Mathematics with Applications 5 Mathematical Methods in the Applied Sciences 5 Nonlinearity 5 Applied Mathematical Modelling 5 Abstract and Applied Analysis 5 International Journal of Biomathematics 4 Journal of Computational Physics 4 Applied Mathematics Letters 4 International Journal of Computer Mathematics 3 Journal of Fluid Mechanics 3 Rocky Mountain Journal of Mathematics 3 Journal of Computational and Applied Mathematics 3 Japan Journal of Industrial and Applied Mathematics 3 Journal of Nonlinear Science 3 NoDEA. Nonlinear Differential Equations and Applications 3 Communications in Nonlinear Science and Numerical Simulation 3 Journal of Biological Dynamics 3 Journal of Physics A: Mathematical and Theoretical 3 Mathematical Modelling of Natural Phenomena 2 Applicable Analysis 2 Bulletin of the Australian Mathematical Society 2 International Journal of Systems Science 2 Physica A 2 Chaos, Solitons and Fractals 2 Journal of Optimization Theory and Applications 2 Stochastic Analysis and Applications 2 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 2 SIAM Journal on Mathematical Analysis 2 Calculus of Variations and Partial Differential Equations 2 Discrete and Continuous Dynamical Systems 2 Differential Equations and Dynamical Systems 2 Journal of Applied Mathematics 2 Multiscale Modeling & Simulation 2 Advances in Difference Equations 2 Journal of Statistical Mechanics: Theory and Experiment 1 Archive for Rational Mechanics and Analysis 1 Communications on Pure and Applied Mathematics 1 Israel Journal of Mathematics 1 Journal of Mathematical Physics 1 Journal of Statistical Physics 1 Wave Motion 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Duke Mathematical Journal 1 Journal of Applied Probability 1 Mathematics and Computers in Simulation 1 Quarterly of Applied Mathematics 1 SIAM Journal on Numerical Analysis 1 Systems & Control Letters 1 Acta Applicandae Mathematicae 1 Japan Journal of Applied Mathematics 1 Revista Matemática Iberoamericana 1 Numerical Methods for Partial Differential Equations 1 Dynamics and Stability of Systems 1 Journal of Scientific Computing 1 Journal of Integral Equations and Applications 1 Applications of Mathematics 1 Journal of Global Optimization 1 Computational Mathematics and Mathematical Physics 1 Aequationes Mathematicae 1 Communications in Partial Differential Equations 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Proceedings of the Royal Society of Edinburgh. Section A. 
Mathematics 1 Stochastic Processes and their Applications 1 Bulletin of the American Mathematical Society. New Series 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Indagationes Mathematicae. New Series 1 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 1 Physics of Fluids 1 Applied and Computational Harmonic Analysis 1 Electronic Journal of Differential Equations (EJDE) 1 Journal of the Egyptian Mathematical Society 1 Mathematics and Mechanics of Solids 1 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 1 Nonlinear Dynamics 1 Chaos 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Discrete Dynamics in Nature and Society 1 Methodology and Computing in Applied Probability 1 International Journal of Modern Physics C 1 Dynamical Systems ...and 14 more Serials all top 5 #### Cited in 33 Fields 301 Biology and other natural sciences (92-XX) 278 Partial differential equations (35-XX) 55 Probability theory and stochastic processes (60-XX) 52 Ordinary differential equations (34-XX) 49 Numerical analysis (65-XX) 34 Dynamical systems and ergodic theory (37-XX) 33 Statistical mechanics, structure of matter (82-XX) 27 Integral equations (45-XX) 14 Operator theory (47-XX) 12 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 11 Fluid mechanics (76-XX) 8 Calculus of variations and optimal control; optimization (49-XX) 7 Classical thermodynamics, heat transfer (80-XX) 7 Systems theory; control (93-XX) 6 Statistics (62-XX) 4 Convex and discrete geometry (52-XX) 4 Computer science (68-XX) 3 Mechanics of deformable solids (74-XX) 3 Optics, electromagnetic theory (78-XX) 3 Geophysics (86-XX) 2 Real functions (26-XX) 1 History and biography (01-XX) 1 Measure and integration (28-XX) 1 Difference and functional equations (39-XX) 1 Approximations and expansions (41-XX) 1 Integral transforms, operational calculus (44-XX) 1 Geometry (51-XX) 1 Differential geometry (53-XX) 1 General topology (54-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Mechanics of particles and systems (70-XX) 1 Operations research, mathematical programming (90-XX) 1 Information and communication theory, circuits (94-XX)
http://blog.kilotrader.com/2010/05/
Summary of Candlestick Backtesting

This post summarizes the candlestick back-testing so far. To understand the effectiveness / profitability of these signals, I choose to look at two measures of performance.

• Number of Trades: the number of times the particular signal appeared during the back-test period. There are two inferences from this number:
  • If the number is small, the results are not statistically reliable.
  • The number provides an insight into the prevalence of the signal. This is important when building a system.
• Win-Rate: the fraction of times this particular signal was profitable. Looking forward, this can be thought of as the probability that this signal will be profitable.

The back-tests were done on the SP500 symbols over the past 15 years.

Signal | Win-Rate | Number of Trades
Doji | 52.72% | 3403
Bullish Engulfing Pattern | 45.60% | 6268
Bullish Harami | 47.87% | 3787
Hammer | 46.71% | 2387
Bullish Kicker Pattern | 54.6% | 1511

Backtesting the Bullish Harami Pattern

This is the third post in my three-part series of backtesting candlestick patterns. Earlier, I back-tested the Bullish Engulfing Pattern and the Hammer. Today, I will back-test the Bullish Harami Pattern.

Quantifying the Bullish Harami

A Bullish Harami pattern is formed when all the following conditions are met:
1. today's bar is white (up day) and yesterday's bar is dark
2. today's candle height is less than yesterday's candle body

Back Testing the Pattern

DataSet: SP500
BackTest Period: 15 years

• Look for 3 consecutive down days
• A Bullish Harami is formed on the third day and the fourth day
• Buy at Open on the fifth day

Sell Signal:
• Sell on the sixth day

Performance

I am only interested in finding out whether this particular signal acts as a trend reversal signal. I am not interested in building a trading system. The best performance measure for this task is the Win-Rate, as it represents the probability that my hypothesis is correct.

PERFORMANCE
All trades: Average Profit % -0.27%, Win Rate 46.71%, Loss Rate 53.29%, Average Bars Held 2
Winning trades: Average Profit % 2.54%, Average Bars Held 2
Losing trades: Average Loss % -2.73%, Average Bars Held 2

Setup

Backtest the Hammer

This is the second post in my three-part series of backtesting a few candlestick patterns. Earlier, I back-tested the Bullish Engulfing Pattern. Today, I will back-test the Hammer.

Quantifying the Hammer

A Hammer pattern is formed when all the following conditions are met:
1. today's bar is white (up day) followed by three down days
2. the candle's body height is less than 40% of the total candle height
3. the close is located near the top of the candle

Back Testing the Pattern

DataSet: SP500
BackTest Period: 15 years

• Look for 3 consecutive down days
• A Hammer candlestick is formed on the fourth day
• Buy at Open on the fifth day

Sell Signal:
• Sell on the sixth day

Performance

I am only interested in finding out whether this particular signal acts as a trend reversal signal. I am not interested in building a trading system. The best performance measure for this task is the Win-Rate, as it represents the probability that my hypothesis is correct.

PERFORMANCE
All trades: Average Profit % -0.21%, Win Rate 47.87%, Loss Rate 52.13%, Average Bars Held 2
Winning trades: Average Profit % 2.68%, Average Bars Held 2
Losing trades: Average Loss % -2.86%, Average Bars Held 2

Setup

Backtesting Bullish Engulfing Pattern

In my blog post yesterday, I said that I will be back testing three candlestick patterns. Today, it is the Bullish Engulfing Pattern.

Quantifying the B-E Pattern

A Bullish-Engulfing pattern is formed when all the following conditions are met:
1. today's bar is white (up day)
2. yesterday's bar is dark (down day)
3. today's trading range is larger than yesterday's
4. today's close is greater than yesterday's open

Backtesting the pattern

DataSet: SP500
BackTest Period: 15 years

• Look for 3 consecutive down days
• A Bullish-Engulfing candlestick is formed on the fourth day
• Buy at Open on the fifth day
Sell Signal:
• Sell on the sixth day

Performance

I am only interested in finding out if this particular signal does act as a trend reversal signal. I am not interested in building a trading system. The best performance measure for this task is the Win-Rate, as it represents the probability that my hypothesis is correct.

PERFORMANCE
Average Profit %: -0.37%   Win Rate: 45.60%          Loss Rate: 54.40%
Average Bars Held: 2       Average Profit %: 2.56%   Average Loss %: -2.83%
                           Average Bars Held: 2      Average Bars Held: 2

In this particular example, there are two things I would like to note:
• Bullish Engulfing patterns happen quite often. For the SP 500 stocks in the last 15 years, they showed up about 400 times a year.
• They are NOT very good at predicting a trend reversal. For the SP 500 stocks in the last 15 years, they were correct only 45% of the time.

More on Candles

My recent forays into candlestick patterns have not been very encouraging. I tested the doji as a trend reversal signal and the results were not impressive. On doing some more research, I came across candlestickgenius.com. They claim to be fairly successful at spotting candlestick patterns and have a description of the most reliable candlestick patterns. In the next few posts, I will be testing three of these patterns and presenting the results.

Bullish Engulfing
A bullish engulfing pattern is formed when:
• today's candle is taller than the previous day's
• today's candle body is taller than the previous day's
• today's candle is white and the previous day's is black

Hammer
A hammer is formed when:
• today's candle is white
• today's candle body is short in relation to the candle height
• today's candle body is located near the top of the candle

Bullish Harami Pattern
A bullish harami is formed when:
• today's candle is white
• the previous day's candle is dark
• today's candle is shorter than the previous day's

Do Candlestick Patterns Work?

Recently, I have been exposed to a lot of material on candlestick charts (Japanese Candlestick Charting Techniques, Candlestick Forum). I really like the intuitive look and feel of candlestick charts. Generally speaking, quantitative trading and chart patterns do not work well together. Candlesticks make it easier. In candlestick parlance, a doji is formed when a security closes very near its open price. From a quantitative perspective, I could spot a doji as follows:

$\frac{\left|Close[bar]-Open[bar]\right|}{High[bar] - Low[bar]} < 0.15$

Doji as a trend reversal signal

Conventional wisdom states that the doji is a trend reversal signal. It signals a 'draw' between the buyers and sellers. If a doji shows up at the end of consecutive down days, it may mean that the down trend is coming to an end.

Back testing the Idea

Let us back test this idea.
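First, here is a minimal sketch of the doji condition above as code (Python; the open/high/low/close values and the 0.15 cutoff are simply the quantities from the formula, nothing beyond that):

def is_doji(open_, high, low, close, threshold=0.15):
    """A bar counts as a doji when the candle body is small
    relative to the bar's full high-low range."""
    candle_range = high - low
    if candle_range == 0:           # guard against a completely flat bar
        return False
    return abs(close - open_) / candle_range < threshold

# Example: a bar with a 99.5-101.5 range that opens at 100.0 and closes at 100.2
print(is_doji(100.0, 101.5, 99.5, 100.2))   # True: body 0.2 / range 2.0 = 0.10 < 0.15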
I will set up my test as follows:

DataSet: SP500
BackTest Period: 15 years
Condition 1: Look for 5 consecutive down days
Condition 2: A doji is formed on the sixth day
Buy at Open on the seventh day
Sell Signal: Sell on the eighth day

BackTest Results

Net Profit: ($7,782.10)          Number of Trades: 3,403
Profit per Bar: ($1.14)          Average Profit: ($2.29)
Total Commission: ($13,612.00)   Average Profit %: 0.04%
Average Bars Held: 2
Win Rate: 52.72%                 Loss Rate: 47.28%
Gross Profit: $152,433.90        Gross Loss: ($160,216.00)
Average Profit: $84.97           Average Loss: ($99.57)
Average Profit %: 2.69%          Average Loss %: -2.92%
Average Bars Held: 2             Average Bars Held: 2
Max Consecutive Winners: 36      Max Consecutive Losses: 18

The theory is impressive. The name of the signal is impressive. The back test results are NOT. And this leads me to the question: do candlestick patterns work?

Performance Metrics - resonance

Recently, I blogged about Performance Metrics. My friend at Engineering Returns agrees with me. I am not sure if the author has ranked his KPIs in ascending order of importance. But if he did, his top three metrics match my top three.

A Price Action Based Strategy

The use of price action to make buy/sell decisions is prevalent. According to many, price action is the fundamental thesis of practitioners of technical analysis. Technical Analysis is the art of following price action. I am presenting back test results for a simple price action based strategy. This is a "long-only" strategy:
- If price goes down for three consecutive days
- If ATR (Average True Range) is lower for three consecutive days

TYPICAL SETUP
A typical entry is shown below:

BACK TEST RESULTS

DataSet: SP500
Period: 15 years

Net Profit: $18,457.37
Profit per Bar: $5.41
Total Commission: ($6,640.00)
Number of Trades: 1,660
Average Profit: $11.12
Average Profit %: 0.26%
Average Bars Held: 2.06
Win Rate: 59.16%
Gross Profit: $69,689.33
Average Profit: $70.97
Average Profit %: 1.88%
Average Bars Held: 1.95
Loss Rate: 40.84%
Gross Loss: ($51,231.96)
Average Loss: ($75.56)
Average Loss %: -2.09%
Average Bars Held: 2.21
Maximum Drawdown: ($2,383.78)
Profit Factor: 1.36
Recovery Factor: 7.74
Payoff Ratio: 0.9

It is amazing to note that a simple strategy as stated above
- has a win rate of ~60%
- is profitable over a long period of time

Performance Metrics

"What Gets Measured Gets Done"

Over the past few weeks, I have been thinking about how to measure the success/failure of my trading strategy. I wrote a post titled, What is a good strategy. I have also been looking at what other people are doing with respect to performance metrics of a strategy. Stockalicious claims to be the World's Easiest Portfolio Analysis tool. It sure is easy. I uploaded my trades saved in a csv format and it gave me the following metrics:

Total Return
Maximum Return
Minimum Return
Annualized Return
Volatility
Sharpe Ratio
Max DrawDown
Max Cost of Capital
Max Equity

On top of these measures, I think there are two other important measures: CAGR and Win Ratio. The Win Ratio is easily calculated:

$\text{Win Ratio} = \frac{\text{Number of Winning Trades}}{\text{Number of Losing Trades}}$

CAGR stands for Compound Annual Growth Rate. I think it gives the most accurate way of comparing two strategies by normalizing time to one year. It can be calculated as shown below:

$\text{CAGR} = \left( \left( \frac{\text{Ending Capital}}{\text{Starting Capital}} \right)^{\frac{1}{\text{Number of Years}}} - 1 \right) \times 100$

A portfolio that has a rate of return of 1% over 4 trading days has a CAGR of ~85%. Another portfolio that has a rate of return of 5% over 20 trading days also has a CAGR of ~85%.
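As a quick check of those two numbers, a short sketch of the CAGR formula (Python; the 252 trading days per year is my assumption, since the post does not state a day-count convention):

def cagr(ending_capital, starting_capital, years):
    """Compound Annual Growth Rate, in percent, per the formula above."""
    return ((ending_capital / starting_capital) ** (1 / years) - 1) * 100

TRADING_DAYS_PER_YEAR = 252   # assumed day-count convention

print(cagr(1.01, 1.00, 4 / TRADING_DAYS_PER_YEAR))    # 1% over 4 trading days: ~87%
print(cagr(1.05, 1.00, 20 / TRADING_DAYS_PER_YEAR))   # 5% over 20 trading days: ~85%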
Gains of 1% over 4 days may be more achievable than gains of 5% over a month. Another important metric is the Max Drawdown, a measure of risk to the portfolio. It measures the largest decline from a peak in portfolio value to a subsequent trough. A very high drawdown can make the strategy impractical, as you may not have the capital to execute on the signals provided by the strategy. I think that these three measures (win ratio, CAGR and drawdown) are the most important of all the performance measures out there.
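A minimal sketch of that drawdown calculation (Python; the equity-curve numbers are made up purely for illustration):

def max_drawdown(equity_curve):
    """Largest peak-to-trough decline of the portfolio value, as a fraction."""
    peak = equity_curve[0]
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

# Illustrative equity curve: rises to 120, falls to 90, then recovers
print(max_drawdown([100, 110, 120, 105, 90, 130]))   # 0.25, i.e. a 25% drawdown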
2019-02-20 09:33:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4327269196510315, "perplexity": 4127.538509702582}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494694.1/warc/CC-MAIN-20190220085318-20190220111318-00362.warc.gz"}
https://www.hpmuseum.org/forum/post-22922.html
[FRAM71] Pre-Production Batch 11-30-2014, 12:03 PM (This post was last modified: 10-04-2015 09:13 AM by Hans Brueggemann.) Post: #1 Hans Brueggemann Member Posts: 200 Joined: Dec 2013

[FRAM71] Pre-Production Batch

all, i'm glad to announce that the pre-production batch of FRAM71 finally made it into the "wild". there's a bunch of early adopters out there now who expressed their wish to publicly pool their findings. so, here is the thread where all info should go about experiences / bugs / updates around FRAM71, and let me start it off by attaching the latest version of the manual. best regards, hans

2015-10-04: Firmware update V511 "NASHVILLE"
Release Note: V511 finally fixes a spurious bug where configuration would not be properly reset when configuring more than 8 modules. Highly recommended, this is the last update before the upcoming V600.
Applicable User's Manual: FRAM71_V511_HW104 "NASHVILLE"
My sincere thanks go to Bob Prosperi for his excellent presentation in Nashville, and to Dave Frederickson for his invaluable support.

2015-07-19: Understanding the HP-71B memory allocation routine

2015-07-19: FRAM71 Configuration Sheet
Configuration sheet from manual in *.doc format. (thx to Bob Prosperi for the suggestion)

2015-07-05: FlashPro_Cable document update
graphics added to clarify cable connections. (thx to Michael Lopez for the hint)

2015-06-11: How to set up FlashPro4/5 in case of warnings
This Post tells you how.

2015-05-16: Firmware update V510
Release Note: V510 fixes a bug where the internal configuration latches wouldn't get properly reset when multiple chips were removed from the configuration area at the same time. in V510, all internal configuration latches get reset during the first DIN phase after either [ON], INIT:1, INIT:3, or a configuration request [++] by the DIAGNOSTIC module. the reset is indicated by a brief flash of the LED.
Applicable User's Manual: FRAM71_V501/2_HW104

2015-05-16: Firmware update V502
Release Note: V502 removes a limitation of V501 where CHIP_0 has to be configured LCIM in order to make it visible to the HP-71B (this limitation exists with V501 only, V43x does not have this limitation).
Applicable User's Manual: FRAM71_HW104_SW502_b.PDF
Users should only upgrade to V502 if they need CHIP_0 to be configurable as part of a multi-chip module. All V5xx firmwares now allow for on-the-fly change of F-base addresses (i.e., under program control, without power cycling the HP-71B)

2015-03-06: Firmware update V430
Release Note:
NEW: support bank-switching of multiple FRAM memory blocks into the same HP-71B address space
NEW: firmware support for upgrading FRAM71 to 1 MByte of FRAM
CHANGED: now, 15 out of 16 x 32kB F-Blocks available (up from 13 in V421)
CHANGED: automatic F-Block assignment removed to improve configuration flexibility
REMOVED: REDEYE-support, due to lack of public interest ;o)
(note that this is a major firmware upgrade. read the fine updated manual for details. highly recommended for all users who do not plan to use REDEYE support.)

2014-12-21: Firmware update V421
Release Note: in V420, FRAM71's write-protect flags of soft-ROM-declared RAM modules have no effect. POKE from the POKELEX lexfile can change contents of those ROMs. in V421, write-protect flags now block all write-attempts to soft-ROM areas. POKE from the POKELEX lexfile executes "silently", but does not alter contents.
(note that this is a minor fix which does not affect/enhance "standard" use cases.
users who have no means to update their FRAM71 should contact me through e-mail for options.)

Attached File(s)
FRAM71_V511_HW104.zip (Size: 153.69 KB / Downloads: 95)
FRAM71_HW104_SW511_Nashville.pdf (Size: 1.71 MB / Downloads: 256)
FRAM71_FlashPro_Cable.pdf (Size: 152.47 KB / Downloads: 132)
FRAM71_CONF_TABLE.doc (Size: 108 KB / Downloads: 70)

12-10-2014, 03:51 AM Post: #2 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013

ROM Images

So you have a bunch of ROM images for the 71 that you'd like to load into your FRAM71, but you don't have an RS-232 to HP-IL interface and your '98 machine is in the garage. How to copy those ROM images to a LIF image so that they can be loaded with a PIL-Box?

1. Copy all the ROM.bin and IRAM.rom images you want in a LIF image, HPDir, and the below batch file into a subdirectory.

Code:
@echo off
HPDir -initialize -lif -9122 ROMS.lif
if exist *.bin copy *.bin *.#E21C > nul
if exist *.rom copy *.rom *.#E21C > nul
echo.
for %%f in (*.#E21C) do (
        echo %%~nf :
        HPDir -add ROMS.lif %%f
        echo.
        if %%~zf == 16384 HPDir -attrib -aux 800100800000 ROMS.lif %%~nf
        if %%~zf == 32768 HPDir -attrib -aux 800100000100 ROMS.lif %%~nf
        if %%~zf == 65536 HPDir -attrib -aux 800100000200 ROMS.lif %%~nf
)
del *.#E21C

2. Double-click the batch file. A 9114 compatible LIF image will be created containing all of the ROM images.

3. Verify with HPDir:

Code:
>hpdir roms.lif
                           SYS  FILE   NUMBER   RECORD     MODIFIED    PUB OPEN
FILE NAME             LEV TYPE  TYPE  RECORDS   LENGTH DATE       TIME ACC STAT
===================== === ==== ===== ======== ======== =============== === ====
71DIAG                  1 98X6 #e21c      256      256 00-<?>-00 00:00
HPILROM-1A              1 98X6 #e21c       64      256 00-<?>-00 00:00
HPILROM-1B              1 98X6 #e21c       64      256 00-<?>-00 00:00
JPC-1E                  1 98X6 #e21c      128      256 00-<?>-00 00:00
SURVEY                  1 98X6 #e21c       64      256 00-<?>-00 00:00
WB71                    1 98X6 #e21c      128      256 00-<?>-00 00:00
JPC-D                   1 98X6 #e21c      128      256 00-<?>-00 00:00
AMPISTAT                1 98X6 #e21c      128      256 00-<?>-00 00:00
CIRCUIT                 1 98X6 #e21c       64      256 00-<?>-00 00:00
CURVEFIT                1 98X6 #e21c      128      256 00-<?>-00 00:00
DATAACQ                 1 98X6 #e21c      256      256 00-<?>-00 00:00
DATACOMM                1 98X6 #e21c       64      256 00-<?>-00 00:00
DATAMNGT                1 98X6 #e21c      128      256 00-<?>-00 00:00
FORTHROM                1 98X6 #e21c       64      256 00-<?>-00 00:00
FORTH41                 1 98X6 #e21c       64      256 00-<?>-00 00:00
HP71DEMO                1 98X6 #e21c       64      256 00-<?>-00 00:00
MATHROM                 1 98X6 #e21c      128      256 00-<?>-00 00:00
TEXTEDIT                1 98X6 #e21c       64      256 00-<?>-00 00:00
ZENWAND                 1 98X6 #e21c       64      256 00-<?>-00 00:00
JPC-F01                 1 98X6 #e21c      128      256 00-<?>-00 00:00
FINANCE                 1 98X6 #e21c       64      256 00-<?>-00 00:00

53248 of 626688 bytes free.
or ILPer: Code:    NAME    S TYPE   LEN    DATE    TIME  71DIAG       ROM   65536 01/00/00 00:00  HPILROM-1A   ROM   16384 01/00/00 00:00  HPILROM-1B   ROM   16384 01/00/00 00:00  JPC-1E       ROM   32768 01/00/00 00:00  SURVEY       ROM   16384 01/00/00 00:00  WB71         ROM   32768 01/00/00 00:00  JPC-D        ROM   32768 01/00/00 00:00  AMPISTAT     ROM   32768 01/00/00 00:00  CIRCUIT      ROM   16384 01/00/00 00:00  CURVEFIT     ROM   32768 01/00/00 00:00  DATAACQ      ROM   65536 01/00/00 00:00  DATACOMM     ROM   16384 01/00/00 00:00  DATAMNGT     ROM   32768 01/00/00 00:00  FORTHROM     ROM   16384 01/00/00 00:00  FORTH41      ROM   16384 01/00/00 00:00  HP71DEMO     ROM   16384 01/00/00 00:00  MATHROM      ROM   32768 01/00/00 00:00  TEXTEDIT     ROM   16384 01/00/00 00:00  ZENWAND      ROM   16384 01/00/00 00:00  JPC-F01      ROM   32768 01/00/00 00:00  FINANCE      ROM   16384 01/00/00 00:00 If you don't have a FRAM71 you can try out the ROM images with Emu71. Dave 12-13-2014, 06:36 PM Post: #3 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 IRAM vs ROM After a ROM image is copied to a FRAM71 IRAM, is there any reason to reconfigure the IRAM to ROM? That is, other than Diag ROM behavior. Either will survive a Memory Lost. 12-13-2014, 09:55 PM Post: #4 Paul Berger (Canada) Senior Member Posts: 504 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch I have the soft part of FORTH and the Math "ROM" in IRAM and I see no reason to change that that way if I want to "swap" ROMs I can just delete the contents and copy in the new one. Its easy to have the contents on a diskette and have a little BASIC program to do the copying. 12-13-2014, 11:01 PM (This post was last modified: 12-13-2014 11:02 PM by Dave Frederickson.) Post: #5 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch Once you've copied an image to IRAM in FRAM71 you can reconfigure the IRAM to be ROM. This better mimics the real hardware, but is there any reason to do this? Perhaps some tidbit of code checks to see if the module is indeed a ROM? 12-14-2014, 12:40 AM Post: #6 rprosperi Senior Member Posts: 4,556 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (12-13-2014 11:01 PM)Dave Frederickson Wrote:  Once you've copied an image to IRAM in FRAM71 you can reconfigure the IRAM to be ROM. This better mimics the real hardware, but is there any reason to do this? Perhaps some tidbit of code checks to see if the module is indeed a ROM? Reconfiguring the IRAMs to ROM will protect the port contents from errant code romping thru memory. For example when playing with Forth (or even some unknown LEX files) I've trashed the contents of IRAM ports, but the "ROM" ports are preserved as-is. Basically this really just saves the time needed to ROMCOPY the port's content back to what it was, but it takes a lot less time to just POKE the revised FRAM config string to make them ROM one time, than rebuilding the IRAMs each time it gets destroyed. --Bob Prosperi 12-14-2014, 12:30 PM (This post was last modified: 12-14-2014 12:32 PM by Hans Brueggemann.) Post: #7 Hans Brueggemann Member Posts: 200 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (12-14-2014 12:40 AM)rprosperi Wrote:  Reconfiguring the IRAMs to ROM will protect the port contents from errant code romping thru memory. For example when playing with Forth (or even some unknown LEX files) I've trashed the contents of IRAM ports, but the "ROM" ports are preserved as-is. 
Basically this really just saves the time needed to ROMCOPY the port's content back to what it was, but it takes a lot less time to just POKE the revised FRAM config string to make them ROM one time, than rebuilding the IRAMs each time it gets destroyed. exactly this. while FREE PORT (5.xy) mimics the freed RAM portion into soft configured ROM and hence protects its contents from a MEMORY LOST, it can't protect its contents against alterations that result from POKE operations. using FRAM71s configuration area to re-define portions of RAM to soft configured ROM sets FRAM71s internal read-only flag on that area which gives a better protection against overwrites, albeit not perfect, as the configuration area itself remains unprotected. also, doing POKE into an IRAM area will work without notice, while trying to POKE into a ROM-declared RAM area will result in "ILLEGAL ACCESS". hans 12-20-2014, 07:15 PM (This post was last modified: 12-21-2014 08:47 AM by J-F Garnier.) Post: #8 J-F Garnier Senior Member Posts: 484 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch Although I missed the first batch of FRAM71, Sylvain Côté was so kind to lend me his module for some time, so since today I have a new toy to play with! My first comments and observations: (12-14-2014 12:40 AM)rprosperi Wrote:  Reconfiguring the IRAMs to ROM will protect the port contents from errant code romping thru memory. Changing the module type from IRAM to ROM without precautions can produce several side effects and strange errors, especially if there are Basic programs inside. The right, secure way to create a ROM module is to use ROMCOPY. It is also possible to manually compile each Basic program by hand by running each. (12-14-2014 12:30 PM)Hans Brueggemann Wrote:  ... using FRAM71s configuration area to re-define portions of RAM to soft configured ROM sets FRAM71s internal read-only flag on that area which gives a better protection against overwrites Is really this internal read-only flag implemented? I tried to POKE (with the special, unrestricted POKE function from JPC ROM) into a ROM-declared area, and the memory was actually changed. J-F (Edited: changed 'unprotected' to 'unrestricted' for clarification) 12-20-2014, 08:34 PM Post: #9 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (12-20-2014 07:15 PM)J-F Garnier Wrote: (12-14-2014 12:40 AM)rprosperi Wrote:  Reconfiguring the IRAMs to ROM will protect the port contents from errant code romping thru memory. Changing the module type from IRAM to ROM without precautions can produce several side effects and strange errors, especially if there are Basic programs inside. The right, secure way to create a ROM module is to use ROMCOPY. It is also possible to manually compile each Basic program by hand by running each. I believe we're looking at two different situations here. The first has to do with copying ROM images to a FRAM71 IRAM and the difference between leaving the memory configured as IRAM or reconfiguring it as ROM. The second situation is what you describe which is to create a ROM image. In that case ROMCOPY is the correct method as it creates an image with the proper checksums. Note the caveat in Note 6 on p.12 of the manual. Soft-configured ROMs should be configured the same as the physical module. So while I'm unaware of any undesired side-affects, that means that a 32K ROM, which is physically two 16K ROMs, should be configured in FRAM71 as two 16K Chips. 
The image can be copied to a single 32K Chip, however the ROM test in the Diag ROM will fail calculating the checksums. Dave 12-20-2014, 08:59 PM (This post was last modified: 12-20-2014 09:04 PM by Dave Frederickson.) Post: #10 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 ROMCOPY (12-20-2014 07:15 PM)J-F Garnier Wrote:  The right, secure way to create a ROM module is to use ROMCOPY. For those not familiar with ROMCOPY here're a couple of reference documents: Use of ROMCOPY comes with warnings so understand what you're doing and take the proper precautions. Thanks to Joe Horn for the above documents. Dave 12-20-2014, 09:41 PM Post: #11 J-F Garnier Senior Member Posts: 484 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (12-20-2014 08:34 PM)Dave Frederickson Wrote: (12-20-2014 07:15 PM)J-F Garnier Wrote:  Changing the module type from IRAM to ROM without precautions can produce several side effects and strange errors... I believe we're looking at two different situations here. The first has to do with copying ROM images to a FRAM71 IRAM and the difference between leaving the memory configured as IRAM or reconfiguring it as ROM. The second situation is what you describe which is to create a ROM image. In that case ROMCOPY is the correct method as it creates an image with the proper checksums. Yes, you're right, my comment was unclear. It is perfectly correct to load a ROM image (with ROMCOPY) in IRAM then change the module type to ROM. This is what I'm used to do also in Emu71. My comment was about using the ROM type to protect a IRAM built manually by program entry or normal COPY. Note that this has to do not only with wrong checksum (that will not harm in normal use), but with Basic programs 'compiling' - actually the right term here is 'chaining', meaning the calculations of the GOTO/GOSUB targets. 12-21-2014, 06:28 PM (This post was last modified: 12-21-2014 06:30 PM by Hans Brueggemann.) Post: #12 Hans Brueggemann Member Posts: 200 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (12-20-2014 07:15 PM)J-F Garnier Wrote:  Is really this internal read-only flag implemented? I tried to POKE (with the special, unrestricted POKE function from JPC ROM) into a ROM-declared area, and the memory was actually changed. J-F thanks for digging this up, J-F! i tested the write protection flags against the standard POKE only. ouch. there is a firmware update to V421 available at the start of this thread which fixes this issue. hans 12-22-2014, 12:56 AM (This post was last modified: 12-22-2014 01:34 AM by Dave Frederickson.) Post: #13 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (12-21-2014 06:28 PM)Hans Brueggemann Wrote:  2014-12-21: Firmware update V421 Release Note: in V420, FRAM71's write-protect flags of soft-ROM-declared RAM modules have no effect. POKE from the POKELEX lexfile can change contents of those ROMs. in V421, write-protect flags now block all write-attempts to soft-ROM areas. POKE from the POKELEX lexfile executes "silently", but does not alter contents. (note that this is a minor fix which does not affect/enhance "standard" use cases. users who have no means to update their FRAM71 should contact me through e-mail for options.) I found two NIB FlashPro4 programmers on eBay for \$25 ea. What's In the Box With a few cable parts an adapter can be made, or the included cable can be modified. Let's rock and roll! Dave 02-07-2015, 06:55 PM (This post was last modified: 02-07-2015 06:56 PM by Hans Brueggemann.) 
Post: #14 Hans Brueggemann Member Posts: 200 Joined: Dec 2013 [FRAM71] Bankswitching? In a recent e-mail, Dave Frederickson pointed out to me that it would be nice to be able to access all 512kB of FRAM, by using some sort of bank switching scheme. i find this idea quite intriguing and FRAM71's FPGA still has ample amount of free logic available (as per V.421) to handle such a task. alas... i'm short on ideas of how to implement that to gain maximum use from such a feature. when looking at the used FRAM address space (nibble addresses), 00000-1FFFF is reserved for diagnostic/alternate O/S, 20000-2BFFF is unused, 2C000-2C01F is reserved for Memory Configuration, 2C020-2FFFF is unused, 30000-FFFFF is reserved for the 0..12 configurable Memory Chips. that gives basically the range 20000-2BFFFF as a possible candidate for bankswitching. so, what can we do with that? is it worth a try in the light of the amount of FRAM already available? what's your take on this, valued users? hans 02-07-2015, 11:50 PM Post: #15 rprosperi Senior Member Posts: 4,556 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (02-07-2015 06:55 PM)Hans Brueggemann Wrote:  In a recent e-mail, Dave Frederickson pointed out to me that it would be nice to be able to access all 512kB of FRAM, by using some sort of bank switching scheme. i find this idea quite intriguing and FRAM71's FPGA still has ample amount of free logic available (as per V.421) to handle such a task. alas... i'm short on ideas of how to implement that to gain maximum use from such a feature. when looking at the used FRAM address space (nibble addresses), 00000-1FFFF is reserved for diagnostic/alternate O/S, 20000-2BFFF is unused, 2C000-2C01F is reserved for Memory Configuration, 2C020-2FFFF is unused, 30000-FFFFF is reserved for the 0..12 configurable Memory Chips. that gives basically the range 20000-2BFFFF as a possible candidate for bankswitching. so, what can we do with that? is it worth a try in the light of the amount of FRAM already available? what's your take on this, valued users? hans Dave and I have been discussing this for a while... and it grew out of wondering why the full 512KB wasn't accessible. Clearly FRAM71 has the means to do it, so why doesn't it do so? There had to be a reason, we reasoned. And I guess that lead to your discussion. I have yet to find anything serious that exceeds FRAM71's current abilities, but also haven't tried yet. Still, I would say that yes, it's worth it. I've never owned any calculator* with which I haven't run into a memory limit, whether doing "serious" work, or just experimenting/playing. Before Clonix/NOV and the CL were made, who would have ever guessed that a 41 could ever need more that 4 modules? Now, 4, 5, or 6 are just the basic OS/system build, before adding-in app roms. From your notes above, it appears one could create a bank-switched scheme with a 16KB window in the available address space, and while a 32KB window would be more convenient to, for example, swap-in/out ROM images or IRAMs, this is still useful. Just some feedback to stimulate discussion. * Notable exception - I've never filled my 50g 2 GB SD card --Bob Prosperi 02-08-2015, 08:52 AM Post: #16 Paul Dale Senior Member Posts: 1,682 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch A truly write protected module image? Thinking maths here but anything that fits. - Pauli 02-08-2015, 04:28 PM (This post was last modified: 02-08-2015 04:34 PM by Dave Frederickson.) 
Post: #17 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (02-07-2015 06:55 PM)Hans Brueggemann Wrote:  In a recent e-mail, Dave Frederickson pointed out to me that it would be nice to be able to access all 512kB of FRAM, by using some sort of bank switching scheme. i find this idea quite intriguing and FRAM71's FPGA still has ample amount of free logic available (as per V.421) to handle such a task. alas... i'm short on ideas of how to implement that to gain maximum use from such a feature. when looking at the used FRAM address space (nibble addresses), 00000-1FFFF is reserved for diagnostic/alternate O/S, 20000-2BFFF is unused, 2C000-2C01F is reserved for Memory Configuration, 2C020-2FFFF is unused, 30000-FFFFF is reserved for the 0..12 configurable Memory Chips. that gives basically the range 20000-2BFFFF as a possible candidate for bankswitching. so, what can we do with that? is it worth a try in the light of the amount of FRAM already available? what's your take on this, valued users? It's not the system address space I was suggesting be utilized, but the FRAM memory itself. Currently the 512k FRAM is divided into 16-32k "Chips", 13 of which are available for configuration as RAM or (truly write-protected) ROM. Two are reserved for the SYSRAM feature and the last is unused. What I'm suggesting is that another configuration register, like the 0x2C000 register, perhaps at address 0x2C020, but only 16 bits in length, be created. Each bit would correspond to a 32k Chip in FRAM and if set, that Chip would be "Enabled". The rules would be: 1. Up to 13 bits can be set at one time, corresponding to Chips 12 - 0. 2. The two least significant bits are reserved for SYSRAM and are always zero By manipulating the bits the "active" Chips can be switched making different ROM configurations possible without reloading an image. It should be possible to load an alternate O/S into the 64k SYSRAM and configure the other 14-32k Chips into RAM or ROM. For example, if 0x2C020 is loaded with 0xF000 and that would enable the 0xF0000 block of FRAM. This would be mapped into Chip 12 and 32k ROM1 could be load into it. If 0x2C020 were loaded with 0x8000 that would enable the 0xE0000 block of FRAM and 32k ROM2 could be load into it. The significant difference is that the set bits in 0x2C020 determine which blocks of FRAM get mapped to Chips, so 0xE0000 become Chip 12, also. By manipulating the bit in 0x2C020 either ROM1 or ROM2 can be enabled. Another example, load 0x2C020 with 0xFFF8 enabling 13 blocks of FRAM starting at 0x30000 and load 0x30000 with ROM1. Load 0x2C020 with 0xFFFB enabling 13 blocks of FRAM starting at 0x20000 with the 32k block at 0x30000 disabled. Memory at 0x20000 gets mapped to Chip 0 and ROM2 can get loaded into FRAM at 0x20000. Now 14-32k blocks of FRAM can be utilized for RAM or ROM, with the limitation of only 13 of those being enabled at one time. The FRAM configuration registers at 0x2C000 would need to be moved to the NVRAM in the FPGA, if they're not already there. Does that make sense? It's important to understand that FRAM memory would get mapped to Chips which then get mapped to 71B address space by the 71's O/S during the power-on memory configuration. 
Dave 02-10-2015, 02:19 AM Post: #18 Michael Fehlhammer Member Posts: 215 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch I have to admit that I didn't really understand Dave's sophisticated considerations, probably it's too late at night right now, but I'd like to contribute a very simple argument why bank switching would be helpful: I am pretty sure that most of the FRAM71 users have been 71b power users before and own a lot of physical modules, like math and curve fitting, for example, and they certainly use the IL module. These three modules consume 2 * 32k + 48k = 112k of address space, right? If I plug in these modules, 112 (more) kiloBytes of FRAM cannot be used. If I had a bankswitching mechanism, I could make use of the "hidden" RAM pages. 02-10-2015, 02:54 AM (This post was last modified: 02-10-2015 03:07 AM by Dave Frederickson.) Post: #19 Dave Frederickson Senior Member Posts: 1,947 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch (02-10-2015 02:19 AM)Michael Fehlhammer Wrote:  I have to admit that I didn't really understand Dave's sophisticated considerations, probably it's too late at night right now, but I'd like to contribute a very simple argument why bank switching would be helpful: I am pretty sure that most of the FRAM71 users have been 71b power users before and own a lot of physical modules, like math and curve fitting, for example, and they certainly use the IL module. These three modules consume 2 * 32k + 48k = 112k of address space, right? If I plug in these modules, 112 (more) kiloBytes of FRAM cannot be used. If I had a bankswitching mechanism, I could make use of the "hidden" RAM pages. Correct. Other RAM and ROM's subtract from the FRAM available to the 71B. All 512k of FRAM can't be accessed all at once because of the 71B's 512k addressing limitation. What I think could be done via bank-switching is to disable blocks of FRAM for one configuration and enable different blocks for another configuration. The 71 would configure only the enabled blocks. I would think that it's possible to bank-switch all 16 blocks of FRAM. In reality there's probably something I don't understand about the FRAM71 architecture preventing this. 02-10-2015, 05:44 PM (This post was last modified: 02-10-2015 05:46 PM by Hans Brueggemann.) Post: #20 Hans Brueggemann Member Posts: 200 Joined: Dec 2013 RE: [FRAM71] Pre-Production Batch this is the memory organization in FRAM (i.e., internal addressing, _not_ HP-71B addresses): 1) 00000-1FFFF is reserved for diagnostic/alternate O/S, 2) 20000-2BFFF is unused, 3) 2C000-2C01F is reserved for Memory Configuration, 4) 2C020-2FFFF is unused, 5) 30000-FFFFF is reserved for the 0..12 configurable Memory Chips. a. on start-up, HP-71B runs through a memory identification/assignment routine, identifying all memory on the bus by releasing an ID command while DIN of the port to be examined is high. after the first chip in the daisy chain has responded, it gets pre-configured by the HP-71B and in turn passes DIN=High on to the next chip on the daisy chain. this process repeats for a particular port, until there are no more chips responding to ID, or the max number of chips (16) on that daisy chain have been reached. b. SYSROM (or, SYSRAM for that matter) gets not identified by the HP-71B, it's "assumed to be there" at 00000-01FFFF. FRAM71 maps its first two 32kB FRAM segments directly onto those addresses. SYSRAM is then selected by OD-ing (output-disabling) the calculator's SYSROM and at the same time output-enabling the SYSRAM area c. 
FRAM71 does not use the internal FPGA-RAM for a simple reason: that RAM is volatile and hence would screw up FRAM71's memory configuration as soon as your HP-71B loses power. a far more elegant way to keep the configuration is to store all necessary values in FRAM itself, where it is kept safe for decades. but that comes at a price: the allocation of the configuration area (2C000-2C01F) fragments one of the 32kB blocks. that's why that block (or rather, its remnants) is not available to the user. the configuration area is directly mapped to the cardreader address space, where it doesn't get cleared out by [ON]/[/],3, and where it is not interfering with any other system addresses, i.e., the display area, the IL mailbox (tried that accidentally - nasty surprise!), or the scratchpad at the far end of the address range. i hope this clarifies a bit how FRAM71 is internally organized, and why 3 out of 16 32kB blocks are "gone". what i hear though is a request for having the FRAM memory chips readied with ROM images "behind the curtains" and then picking an appropriate set by setting the respective bits in the configuration area. is that correct? would that feature justify kicking out UART- and REDEYE-support? thanks for your great input, guys! hans
2021-03-08 06:16:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26239949464797974, "perplexity": 4339.5470753184045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00039.warc.gz"}
https://linguistics.stackexchange.com/questions/31118/how-to-interpret-this-form-of-heaps-law
# How to interpret this form of Heaps' Law?

Heaps' Law basically is an empirical function that says the number of distinct words you'll find in a document grows as a function of the length of the document. The equation given in the Wikipedia link is $V_R(n) = Kn^{\beta}$, where $V_R$ is the number of distinct words in a document of size $n$, and $K$ and $\beta$ are free parameters that are chosen empirically (usually $0 \le K \le 100$ and $0.4 \le \beta \le 0.6$).

I'm currently following a course on Youtube called Deep Learning for NLP by Oxford University and DeepMind. There is a slide in a lecture that demonstrates Heaps' Law in a rather different way, as $\log f(w) = \log C - \alpha \log (r(w) - b)$. The equation given with the logarithms apparently is also Heaps' Law. The fastest growing curve is a corpus for Twitter data and the slowest is for the Wall Street Journal. Tweets usually have less structure and more spelling errors, etc. compared to the WSJ, which would explain the faster-growing curve.

The main question I have is how Heaps' Law came to take on the form that the author has given. It's a bit of a reach, but the author didn't specify what any of these parameters are, and I was wondering if anybody might be familiar enough with Heaps' Law to give me some advice on how to solve my question.

A straightforward rewriting of the Wikipedia formula gives

log V_R(n) = log K*n^beta = log K + log n^beta = log K + beta*log n

This allows us to identify K=C and beta=-alpha (probably the WSJ uses a different formulation of Heaps' law, $V_R(n) = \frac{K}{n^\alpha}$). The remaining b is a strange additional parameter not present in the original formulation of the law (and irrelevant, too, because the law is about large numbers where n-b is approximately equal to n).

• Thanks for the answer. I tried to apply logarithms to each side but it didn't come to mind that K = C and β = -α. This may also sound like a bit of an out-of-place question, but would you happen to know what a "singleton" in this context is? My knowledge of set theory tells me that it means a single perceptual unit, or a word in this context. – Seankala Apr 9 '19 at 15:34
• From the small context given, I can only guess what a singleton could be here. My guess is that it refers to a hapax legomenon, i.e., a word form that occurs exactly once in the corpus (or sample). – jk - Reinstate Monica Apr 9 '19 at 16:07
• alpha in the chart is notably not in the same range as implied for beta in the question. I'm not sure whether that makes a huge difference. I guess it does. – vectory Apr 9 '19 at 17:59
• singleton 34.3%, 70% must mean hapax legomenon percentage of new words. However, that still seems quite high. Edit: that wouldn't even make sense, if every word is new at some point, unless they don't count a sizeable number of basic vocabulary as new, like, compact OED sized. – vectory Apr 9 '19 at 18:02

The question is interesting from a (my) novice math perspective. From a basic linguistic perspective, there is little to no difference between either form. All you need is a slowly decreasing count. Both forms describe standard distributions, a concept that's naturally observed in nature. The specific formula of any such distribution depends on an accurate model. It doesn't hold much explanatory power if the model isn't empirically grounded, but it's a heuristic--we might speak of so-called fudge factors. For the specifics you should check out datascience.se, or whatever it's called where statistics are treated (compression of text is also rather important in signal processing).
The first formula, V = k * n ^ beta, is akin to the area of a circle, A = pi * r ^ 2, but inverted, i.e. taking the square root (beta = 1/2) instead of the square; also, it has a free factor k instead of pi (= 3.14...). This can be pictured in various ways, for example as a light cone projected onto a surface, or a stream of words onto a lexicon: where the radius of a light cone increases linearly with distance, its area increases as the square; if this area illuminated a text, the number of new words would increase linearly with distance from the lamp. This only explains the inversion of the exponential function, but the fudge factor is another matter, depending on the model. While the factor pi relates the circumference of a circle to its radius, a different factor implies first of all a different shape, either of the light cone, or alternatively of the surface (left as an exercise to the reader), but it still grows linearly with distance. So it doesn't even make a difference in my simplistic model.

In other words, if counting text length in number of words n, so the text grows linearly with each word, it should grow as the square if counting each newly introduced word. V ~ n ^ beta. Or vice-versa, as the formula has it: the number of new words grows in proportion to the square root of the total number of words.

The second formula is essentially the same. I too have no idea what the extra variables are. Removing the logarithm and transposing, we see

1. f(w) = C * (r(w)-b)^(-alpha).
2. 1/C * (r(w)-b)^alpha = 1 / f(w).

This is in principle the same polynomial form as V = K * n^beta in either case, with several new parameters. It's not apparent why one would choose the transposed form, which works as well, if it were that V = 1 / f(w), k = 1/C, n = (r(w) - b), beta = alpha.

There are a few notable differences. What's with those parameters? I'd assume the following:

• b is likely a threshold under which the distribution is useless, because if r(w) < b, then the logarithm of the difference (r(w) - b) is undefined. Perhaps that's the basic vocabulary. That's the major difference in any case. Another difference would be to focus on the transposed form.
• If C is a constant, as usual notation practice has it, then writing log(C) would be constant as well. This might just be a courtesy to ease solving for (w). It's inversely proportional to k, but that shouldn't trouble us now. I'm keen to assume that it means Corpus, but that gives me troubles. [todo]
• That leaves alpha to be explained, which seems to be a variable nudge factor determined per corpus by a specific statistical procedure for error correction.

The last one is crucial. Raising to a negative power of alpha (= the reciprocal of the power of alpha) is not quite the same as taking the square root (power of 0.5). But it is similar in effect, because the ranges of the exponents are also different from the first formula; here we have beta < 1 < alpha. The very important difference is that the number of new words will tend to zero as the number of typed words tends to infinity. In contrast, the old formula would require ever new words to grow the text. Somehow I'm trying to see 1/f as a derivative, compared to mechanical acceleration. But I'd rather leave the rest of the exercise to the reader.

Please add a link to the video to your question. thx bye
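As a numerical sanity check of the rewriting in the first answer, log V_R(n) = log K + beta*log n, here is a minimal sketch (Python; K = 44 and beta = 0.5 are arbitrary illustrative values, not taken from the lecture chart):

import numpy as np

K, beta = 44.0, 0.5                 # made-up Heaps' law parameters
n = np.logspace(1, 6, 200)          # document lengths from 10 to 1,000,000 tokens
V = K * n**beta                     # Heaps' law: V_R(n) = K * n^beta

# A straight-line fit in log-log space recovers the parameters
slope, intercept = np.polyfit(np.log(n), np.log(V), 1)
print(slope, np.exp(intercept))     # prints 0.5 and 44.0, i.e. beta and K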
2020-10-21 19:11:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7771217226982117, "perplexity": 733.3286716698299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107877420.17/warc/CC-MAIN-20201021180646-20201021210646-00141.warc.gz"}
https://topanswers.xyz/tex?q=1635
Anonymous 1123

I rarely use pgfplots to plot a surface, so I do not know how to start drawing this graph. How can I draw this graph?

![image.png](/image?hash=01342df68363d310582bf3610c353a7a985e68b3e6986f0a5da03865cfb7bd4f)

user 3.14159

In general, pgfplots is a great tool for 3d plots. This is certainly the case if you plot a single surface, as in this example. If you have multiple intersecting plots, then you may have to work harder (but according to how I read the manual, the future may bring some tools that do such things automatically). To produce this plot, all we need to do is to use polar coordinates and a simple trick for the vertical wall: cut off the radius function via rr(\x)=min(\x,1);. Notice also that the axes are automatically almost right, but this is because they happen to be so in this example, so we only have to fix a stretch of the z-axis. In general, pgfplots does not compute the intersections of the axes with the surfaces, so we do have to do that ourselves. Note also that this plot does not really highlight some of the very nice features of pgfplots such as colorful shading with interpolated colors.

\documentclass[tikz,border=3mm]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17}
\begin{document}
\begin{tikzpicture}[line cap=round,line join=round,miter limit=5]
 \begin{axis}[width=15cm,unit vector ratio=1 1 1,
  view={120}{30},axis lines=center,
  xtick=\empty,ytick=\empty,ztick=\empty,
  xlabel={$x$},ylabel={$y$},zlabel={$z$},
  xmin=-2,ymin=-2,zmin=0,
  xmax=2,ymax=2,zmax=3,
  declare function={f(\x,\y)=(\x<=1?2-\x*\x*(pow(cos(\y),2)/2+pow(sin(\y),2)):0);
  rr(\x)=min(\x,1);}]
  \addplot3[surf,
   colormap/blackwhite,point meta=0,
   z buffer=sort,
   samples y=36,
   domain=0:1.04,y domain=0:360]
   ({rr(x)*cos(y)},{rr(x)*sin(y)},{f(x,y)});
  \draw (0,0,2) -- (0,0,2.4);
  \path (0,0.5,2.5) node[right]{$\displaystyle z=2-\frac{x^2}{2}-y^2$};
 \end{axis}
\end{tikzpicture}
\end{document}

![Screen Shot 2021-02-03 at 5.26.20 PM.png](/image?hash=1ba0d0188b1fc06f366f0dcbb11ef098a656e2ef3f339e2cde67eb4cdfde7f9d)
2021-08-01 17:23:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8193881511688232, "perplexity": 1156.6963844118175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00559.warc.gz"}
http://math.stackexchange.com/questions/16726/proof-whether-or-not-1-k-by-1-k1-rectangles-fit-inside-a-unit-square
# Proof whether or not 1/k by 1/(k+1) rectangles fit inside a unit square

I am reading Concrete Mathematics and came across an interesting problem, number 37 of chapter 2. The answers-to-exercises section lists no known solution to this problem:

• Will all the 1/k by 1/(k+1) rectangles, for $k \ge 1$, fit together inside a 1 by 1 square? (Recall that their areas sum to 1.)

My question is: has a solution been found in the years since the book's publication? Mathematics is a very large field, and I am not yet aware of much of the terminology that might aid in googling.

- This is one of my favorite problems. – Grumpy Parsnip Jan 7 '11 at 23:23
- If you find a solution, please let me know :-) – Derek Jennings Jan 8 '11 at 8:46
2015-07-05 06:22:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6317219734191895, "perplexity": 891.8322929296187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097246.96/warc/CC-MAIN-20150627031817-00103-ip-10-179-60-89.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:0726.60078
Martin boundaries on Wiener space. (English) Zbl 0726.60078
Diffusion processes and related problems in analysis, Vol. I: Diffusions in analysis and geometry, Proc. Int. Conf., Evanston/IL (USA) 1989, Prog. Probab. 22, 3-16 (1990).

[For the entire collection see Zbl 0716.00011.]

Let $$(X,{\mathbb{P}})$$ be the infinite-dimensional Brownian motion with the state space $$B:=\{x\in {\mathcal C}_{[0,1]}\mid x(0)=0\}.$$ Denote by $${\mathcal P}$$ the class of probability measures, $${\mathbb{Q}}$$, on $$\Omega:={\mathcal C}([0,\infty),B)$$ such that for all $$t\geq 0$$, $${\mathbb{Q}}(\cdot \mid \hat F_t)={\mathbb{P}}(\cdot \mid \hat F_t),$$ where $$\hat F_t:=\sigma (X_s;\;s\geq t).$$ It is proved that for any $${\mathbb{Q}}\in {\mathcal P}$$ there is exactly one probability measure $$\nu$$ on $$B$$ such that $$(*)\quad {\mathbb{Q}}=\int_{B}{\mathbb{P}}^y\,\nu (dy),$$ where $${\mathbb{P}}^y$$ is the measure associated with $$X_t+yt$$. Conversely, any probability measure $$\nu$$ on $$B$$ induces via (*) a measure $${\mathbb{Q}}\in {\mathcal P}$$. Therefore, $$B$$ is called the space-time Martin boundary of $$X$$. The extremal space-time harmonic functions can now be characterized by absolute continuity, which shows that only points in the Cameron-Martin space carry such a function. In contrast to the finite-dimensional case, it is seen that there are space-time harmonic functions which cannot be represented in terms of extremals. The structure of h-processes is given by characterising the drift term. The infinite-dimensional Ornstein-Uhlenbeck processes are also discussed.

Reviewer: P. Salminen (Åbo)

MSC:
60J65 Brownian motion
60J50 Boundary theory for Markov processes

Zbl 0716.00011
2023-02-03 17:50:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7575628757476807, "perplexity": 414.8131067215358}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00543.warc.gz"}
https://socratic.org/questions/5919e025b72cff11796ab0ea
# Question ab0ea

May 19, 2017

$1.$ $11.34$ $\text{mg}$
$2.$ $256.00$ $\text{mg}$

#### Explanation:

$1.$ The average concentration of the PCBs found in the chicks is $18.9$ $\text{mg/kg}$. Also, the mass of a single chick is $0.6$ $\text{kg}$. Let's set this information up as a ratio:

$\Rightarrow \frac{18.9\ \text{mg}}{1\ \text{kg}} = \frac{x}{0.6\ \text{kg}}$

Multiply both sides by $0.6$ $\text{kg}$:

$\Rightarrow \frac{18.9\ \text{mg} \times 0.6\ \text{kg}}{1\ \text{kg}} = \frac{x \times 0.6\ \text{kg}}{0.6\ \text{kg}}$

$\Rightarrow 11.34\ \text{mg} = x$

$\therefore x = 11.34\ \text{mg}$

Therefore, a chick of mass $0.6$ $\text{kg}$ would contain $11.34$ $\text{mg}$ of PCBs.

$2.$ The average concentration of PCBs in the body tissue of a human is $4.00$ $\text{ppm}$. Let's convert the units of the concentration from ppm to mg/kg:

$\Rightarrow 1\ \text{ppm} = 1\ \text{mg/kg}$

$\Rightarrow 4.00\ \text{ppm} = 4.00\ \text{mg/kg}$

We need to find the mass of PCBs found in a $64$ $\text{kg}$ human, so let's set up another ratio using this information:

$\Rightarrow \frac{4.00\ \text{mg}}{1\ \text{kg}} = \frac{x}{64\ \text{kg}}$

Multiply both sides by $64$ $\text{kg}$:

$\Rightarrow \frac{4.00\ \text{mg} \times 64\ \text{kg}}{1\ \text{kg}} = \frac{x \times 64\ \text{kg}}{64\ \text{kg}}$

$\Rightarrow 256.00\ \text{mg} = x$

$\therefore x = 256.00\ \text{mg}$

Therefore, the mass of PCBs present in a $64$ $\text{kg}$ person's body is $256.00$ $\text{mg}$.
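The same two calculations in a couple of lines of code (Python; just the arithmetic above, with variable names of my own choosing):

# PCB concentration in the chicks, and the mass of one chick
chick_conc_mg_per_kg = 18.9
chick_mass_kg = 0.6
print(chick_conc_mg_per_kg * chick_mass_kg)    # 11.34 mg of PCBs per chick

# 4.00 ppm in body tissue is equivalent to 4.00 mg/kg
human_conc_mg_per_kg = 4.00
human_mass_kg = 64
print(human_conc_mg_per_kg * human_mass_kg)    # 256.0 mg of PCBs in a 64 kg person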
2022-01-21 14:13:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 58, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5586796402931213, "perplexity": 5114.074814162603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303385.49/warc/CC-MAIN-20220121131830-20220121161830-00650.warc.gz"}
https://math.stackexchange.com/questions/1006006/serial-version-of-halls-marriage-theorem
# Serial version of Hall's marriage theorem? Hall's marriage theorem states that a collection of men can get married iff for every group of $k \geq 1$ men, the total number of women that like one or more of them is at least $k$. For example, if: • $M_1$ is liked by $W_1$ and $W_2$; • $M_2$ is liked by $W_2$ and $W_3$; • $M_3$ is liked by $W_3$ only. Then the condition is satisfied and so a matching exists ($W_1$-$M_1$, $W_2$-$M_2$ and $W_3$-$M_3$). But what happens if the men come serially? For example, let's suppose that $M_1$ comes first and selects any woman that likes him; then $M_2$ comes and selects any remaining woman that likes him; etc. In this case, the process will not necessarily conclude with a successful matching! For example, it is possible that: • $M_1$ selects $W_2$; • $M_2$ selects $W_3$; • $M_3$ now remains lonely because the only woman that liked him is already married! However, if $M_3$ is liked by all women, then the selection always concludes with a marriage, regardless of what $M_1$ and $M_2$ do. MY QUESTION IS: What conditions, stronger than Hall's marriage condition, guarantee that, for every selection of each man $M_i$, the following men in the series ($M_j$ for $j>i$) can select women that like them, so that the selection always concludes with all men being married? NOTES: • The number of women can be equal or larger than the number of men. • The order of men is pre-specified ($M_1$, ..., $M_n$); only the selection of each man is unknown in advance.
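To make the failure mode concrete, here is a small Python sketch (my own illustration, not from the original post): it verifies Hall's condition for the example above and then replays the serial selection in which $M_1$ takes $W_2$ and $M_2$ takes $W_3$, leaving $M_3$ unmatched.

```python
from itertools import combinations

# The example from the question: M1 is liked by W1 and W2, M2 by W2 and W3, M3 by W3 only.
likes = {"M1": {"W1", "W2"}, "M2": {"W2", "W3"}, "M3": {"W3"}}

def halls_condition(likes):
    """True iff every group of k men is liked, in total, by at least k women."""
    men = list(likes)
    for k in range(1, len(men) + 1):
        for group in combinations(men, k):
            liked = set().union(*(likes[m] for m in group))
            if len(liked) < k:
                return False
    return True

def serial_outcome(likes, choices):
    """Men arrive in order; each takes the woman named in `choices`.
    Returns False if some man's chosen woman does not like him or is already taken."""
    taken = set()
    for man, woman in choices.items():
        if woman not in likes[man] or woman in taken:
            return False
        taken.add(woman)
    return True

print(halls_condition(likes))  # True: Hall's condition holds, so a matching exists
print(serial_outcome(likes, {"M1": "W2", "M2": "W3", "M3": "W3"}))  # False: M3 stays lonely
```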
2019-12-06 21:36:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7087176442146301, "perplexity": 241.63267931947087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540490972.13/warc/CC-MAIN-20191206200121-20191206224121-00523.warc.gz"}
https://brilliant.org/problems/recursive-sequence/
Recursive Sequence

A sequence $$\{ a_i \}$$ is defined by the recurrence relation $$a_{n} = 40 - 4a_{n-1}$$ with $$a_0 = -4$$. There exist real-valued constants $$r, s$$ and $$t$$ such that $$a_i = r \cdot s^i + t$$ for all non-negative integers $$i$$. Determine $$r^2+s^2+t^2$$.
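For readers who want to check their work, here is a short Python sketch (my own addition, relying only on the closed form $$a_i = r \cdot s^i + t$$ that the problem says exists): the fixed point of the recurrence gives $$t$$, the coefficient of $$a_{n-1}$$ gives $$s$$, and $$a_0$$ then pins down $$r$$.

```python
# Illustrative check: derive r, s, t for a_n = 40 - 4*a_{n-1}, a_0 = -4,
# assuming the closed form a_i = r * s**i + t stated in the problem.

a0 = -4
s = -4                 # the multiplier of a_{n-1} in the recurrence
t = 40 / (1 - s)       # fixed point: t = 40 - 4*t
r = a0 - t             # a_0 = r + t

# Verify against the recurrence for the first few terms.
a = a0
for i in range(10):
    assert abs(a - (r * s**i + t)) < 1e-9
    a = 40 - 4 * a

print(r**2 + s**2 + t**2)
```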
2017-03-28 13:58:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855068325996399, "perplexity": 131.13254650500204}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189771.94/warc/CC-MAIN-20170322212949-00254-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcds.2013.33.3599
American Institute of Mathematical Sciences

August 2013, 33(8): 3599-3640. doi: 10.3934/dcds.2013.33.3599

Porous media equations with two weights: Smoothing and decay properties of energy solutions via Poincaré inequalities

1 Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
2 Dipartimento di Matematica, Università di Roma "La Sapienza", Piazzale A. Moro 2, 00185 Roma, Italy

Received August 2012; Revised November 2012; Published January 2013

We study weighted porous media equations on domains $\Omega\subseteq{\mathbb R}^N$, either with Dirichlet or with Neumann homogeneous boundary conditions when $\Omega\not={\mathbb R}^N$. Existence of weak solutions and uniqueness in a suitable class are studied in detail. Moreover, $L^{q_0}$-$L^\varrho$ smoothing effects ($1\leq q_0<\varrho<\infty$) are discussed for short times, in connection with the validity of a Poincaré inequality in appropriate weighted Sobolev spaces, and the long-time asymptotic behaviour is also studied. In fact, we prove full equivalence between certain $L^{q_0}$-$L^\varrho$ smoothing effects and suitable weighted Poincaré-type inequalities. Particular emphasis is given to the Neumann problem, which is much less studied in the literature, as well as to the case $\Omega={\mathbb R}^N$ when the corresponding weight makes its measure finite, so that solutions converge to their weighted mean value rather than to zero. Examples are given in terms of wide classes of weights.

Citation: Gabriele Grillo, Matteo Muratori, Maria Michaela Porzio. Porous media equations with two weights: Smoothing and decay properties of energy solutions via Poincaré inequalities. Discrete & Continuous Dynamical Systems - A, 2013, 33 (8) : 3599-3640. doi: 10.3934/dcds.2013.33.3599
2021-01-17 16:09:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194817900657654, "perplexity": 7505.3275648473045}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513062.16/warc/CC-MAIN-20210117143625-20210117173625-00595.warc.gz"}
http://en.wikipedia.org/wiki/Kaluza-Klein_theory
# Kaluza–Klein theory

In physics, Kaluza–Klein theory (KK theory) is a model that seeks to unify the two fundamental forces of gravitation and electromagnetism. The theory was first published in 1921. It was proposed by the mathematician Theodor Kaluza, who extended general relativity to a five-dimensional spacetime. The resulting equations can be separated into further sets of equations, one of which is equivalent to the Einstein field equations, another set equivalent to Maxwell's equations for the electromagnetic field, and the final part describing an extra scalar field now termed the "radion".

## Overview

The space M × C is compactified over the compact set C, and after Kaluza–Klein decomposition we have an effective field theory over M.

A splitting of five-dimensional spacetime into the Einstein equations and Maxwell equations in four dimensions was first discovered by Gunnar Nordström in 1914, in the context of his theory of gravity, but subsequently forgotten. Kaluza published his derivation in 1921 as an attempt to unify electromagnetism with Einstein's general relativity.

In 1926, Oskar Klein proposed that the fourth spatial dimension is curled up in a circle of a very small radius, so that a particle moving a short distance along that axis would return to where it began. The distance a particle can travel before reaching its initial position is said to be the size of the dimension. This extra dimension is a compact set, and the phenomenon of having a space-time with compact dimensions is referred to as compactification.

In modern geometry, the extra fifth dimension can be understood to be the circle group U(1), as electromagnetism can essentially be formulated as a gauge theory on a fiber bundle, the circle bundle, with gauge group U(1). In Kaluza–Klein theory this group suggests that gauge symmetry is the symmetry of circular compact dimensions. Once this geometrical interpretation is understood, it is relatively straightforward to replace U(1) by a general Lie group. Such generalizations are often called Yang–Mills theories. If a distinction is drawn, then it is that Yang–Mills theories occur on a flat space-time, whereas Kaluza–Klein treats the more general case of curved spacetime. The base space of Kaluza–Klein theory need not be four-dimensional space-time; it can be any (pseudo-)Riemannian manifold, or even a supersymmetric manifold or orbifold, or even a noncommutative space.

As an approach to the unification of the forces, it is straightforward to apply the Kaluza–Klein theory in an attempt to unify gravity with the strong and electroweak forces by using the symmetry group of the Standard Model, SU(3) × SU(2) × U(1). However, an attempt to convert this interesting geometrical construction into a bona-fide model of reality flounders on a number of issues, including the fact that the fermions must be introduced in an artificial way (in nonsupersymmetric models). Nonetheless, KK remains an important touchstone in theoretical physics and is often embedded in more sophisticated theories. It is studied in its own right as an object of geometric interest in K-theory.

Even in the absence of a completely satisfying theoretical physics framework, the idea of exploring extra, compactified dimensions is of considerable interest in the experimental physics and astrophysics communities. A variety of predictions, with real experimental consequences, can be made (in the case of large extra dimensions/warped models).
For example, on the simplest of principles, one might expect to have standing waves in the extra compactified dimension(s). If a spatial extra dimension is of radius R, the invariant mass of such standing waves would be $M_n = nh/(Rc)$ with n an integer, h being Planck's constant and c the speed of light. This set of possible mass values is often called the Kaluza–Klein tower. Similarly, in thermal quantum field theory a compactification of the Euclidean time dimension leads to the Matsubara frequencies and thus to a discretized thermal energy spectrum.

Examples of experimental pursuits include work by the CDF collaboration, which has re-analyzed particle collider data for the signature of effects associated with large extra dimensions/warped models. Brandenberger and Vafa have speculated that in the early universe, cosmic inflation causes three of the space dimensions to expand to cosmological size while the remaining dimensions of space remain microscopic.

## Space-time-matter theory

One particular variant of Kaluza–Klein theory is space-time-matter theory or induced matter theory, chiefly promulgated by Paul Wesson and other members of the so-called Space-Time-Matter Consortium.[1] In this version of the theory, it is noted that solutions to the equation

$R_{AB}=0\,$

with $R_{AB}$ the five-dimensional Ricci curvature, may be re-expressed so that in four dimensions these solutions satisfy Einstein's equations

$G_{\mu\nu} = 8\pi T_{\mu\nu}\,$

with the precise form of the $T_{\mu\nu}$ following from the Ricci-flat condition on the five-dimensional space. Since the energy–momentum tensor $T_{\mu\nu}$ is normally understood to be due to concentrations of matter in four-dimensional space, the above result is interpreted as saying that four-dimensional matter is induced from geometry in five-dimensional space. In particular, the soliton solutions of $R_{AB} = 0$ can be shown to contain the Friedmann–Lemaitre–Robertson–Walker metric in both radiation-dominated (early universe) and matter-dominated (later universe) forms. The general equations can be shown to be sufficiently consistent with classical tests of general relativity to be acceptable on physical principles, while still leaving considerable freedom to also provide interesting cosmological models.

## Geometric interpretation

The Kaluza–Klein theory is striking because it has a particularly elegant presentation in terms of geometry. In a certain sense, it looks just like ordinary gravity in free space, except that it is phrased in five dimensions instead of four.

### The Einstein equations

The equations governing ordinary gravity in free space can be obtained by applying the variational principle to a certain action. Let M be a (pseudo-)Riemannian manifold, which may be taken as the spacetime of general relativity. If g is the metric on this manifold, one defines the action S(g) as

$S(g)=\int_M R(g) \mathrm{vol}(g)\,$

where R(g) is the scalar curvature and vol(g) is the volume element. Applying the variational principle to the action,

$\frac{\delta S(g)}{\delta g} = 0,$

one obtains the vacuum Einstein equations

$R_{ij} - \frac{1}{2}g_{ij}R = 0$

Here, $R_{ij}$ is the Ricci tensor.

### The Maxwell equations

By contrast, the Maxwell equations describing electromagnetism can be understood to be the Hodge equations of a principal U(1)-bundle or circle bundle $\pi: P \to M$ with fiber U(1). That is, the electromagnetic field F is a harmonic 2-form in the space $\Omega^2(M)$ of differentiable 2-forms on the manifold M. In the absence of charges and currents, the free-field Maxwell equations are dF = 0 and d*F = 0.
where * is the Hodge star.

### The Kaluza–Klein geometry

To build the Kaluza–Klein theory, one picks an invariant metric on the circle $S^1$ that is the fiber of the U(1)-bundle of electromagnetism. In this discussion, an invariant metric is simply one that is invariant under rotations of the circle. Suppose this metric gives the circle a total length of Λ. One then considers metrics $\widehat{g}$ on the bundle P that are consistent with both the fiber metric and the metric on the underlying manifold M. The consistency conditions are:

• The projection of $\widehat{g}$ to the vertical subspace $\mbox{Vert}_pP \subset T_pP$ needs to agree with the metric on the fiber over a point in the manifold M.
• The projection of $\widehat{g}$ to the horizontal subspace $\mbox{Hor}_pP \subset T_pP$ of the tangent space at a point $p \in P$ must be isomorphic to the metric g on M at π(p).

The Kaluza–Klein action for such a metric is given by

$S(\widehat{g})=\int_P R(\widehat{g}) \;\mbox{vol}(\widehat{g})\,$

The scalar curvature, written in components, then expands to

$R(\widehat{g}) = \pi^*\left( R(g) - \frac{\Lambda^2}{2} \vert F \vert^2\right)$

where $\pi^*$ is the pullback along the fiber bundle projection $\pi: P \to M$. The connection A on the fiber bundle is related to the electromagnetic field strength as

$\pi^*F = \mathrm{d}A$

That there always exists such a connection, even for fiber bundles of arbitrarily complex topology, is a result from homology and, specifically, K-theory. Applying Fubini's theorem and integrating on the fiber, one gets

$S(\widehat{g})=\Lambda \int_M \left( R(g) - \frac{1}{\Lambda^2} \vert F \vert^2 \right) \;\mbox{vol}(g)$

Varying the action with respect to the component A, one regains the Maxwell equations. Applying the variational principle to the base metric g, one gets the Einstein equations

$R_{ij} - \frac{1}{2}g_{ij}R = \frac{1}{\Lambda^2} T_{ij}$

with the stress–energy tensor being given by

$T^{ij} = F^{ik}F^{jl}g_{kl} - \frac{1}{4}g^{ij} \vert F \vert^2,$

sometimes called the Maxwell stress tensor.

The original theory identifies Λ with the fiber metric $g_{55}$, and allows Λ to vary from fiber to fiber. In this case, the coupling between gravity and the electromagnetic field is not constant, but has its own dynamical field, the radion.

### Generalizations

In the above, the size of the loop Λ acts as a coupling constant between the gravitational field and the electromagnetic field. If the base manifold is four-dimensional, the Kaluza–Klein manifold P is five-dimensional. The fifth dimension is a compact space, and is called the compact dimension. The technique of introducing compact dimensions to obtain a higher-dimensional manifold is referred to as compactification. Compactification does not produce group actions on chiral fermions except in very specific cases: the dimension of the total space must be 2 mod 8 and the G-index of the Dirac operator of the compact space must be nonzero.[2]

The above development generalizes in a more-or-less straightforward fashion to general principal G-bundles for some arbitrary Lie group G taking the place of U(1). In such a case, the theory is often referred to as a Yang–Mills theory, and is sometimes taken to be synonymous. If the underlying manifold is supersymmetric, the resulting theory is a supersymmetric Yang–Mills theory.

## Empirical tests

Up to now, no experimental or observational signs of extra dimensions have been officially reported.
Many theoretical search techniques for detecting Kaluza–Klein resonances have been proposed using the mass couplings of such resonances with the top quark; however, until the Large Hadron Collider (LHC) reaches full operational power, observation of such resonances is unlikely. An analysis of results from the LHC in December 2010 severely constrains theories with large extra dimensions.[3]

The observation of a Higgs-like boson at the LHC provides a new empirical test in the search for Kaluza–Klein resonances and supersymmetric particles. The loop Feynman diagrams that exist in the Higgs interactions allow any particle with electric charge and mass to run in such a loop. Standard Model particles besides the top quark and W boson do not make big contributions to the cross-section observed in the H → γγ decay, but if there are new particles beyond the Standard Model, they could potentially change the ratio of the predicted Standard Model H → γγ cross-section to the experimentally observed cross-section. Hence a measurement of any dramatic change to the H → γγ cross-section predicted by the Standard Model is crucial in probing the physics beyond it.
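As a rough numerical illustration of the Kaluza–Klein tower formula quoted above, $M_n = nh/(Rc)$, the following Python sketch (my own addition; the chosen radius is an arbitrary example value, not a measured quantity) converts a compactification radius into the energy scale of the first few tower states:

```python
# Kaluza-Klein tower energies E_n = M_n * c^2 = n * h * c / R for an example radius R.
# Illustrative only: R below is an arbitrary example, not an experimentally measured value.

h = 6.62607015e-34      # Planck's constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

R = 1e-18               # example compactification radius in metres

for n in range(1, 4):
    E_n = n * h * c / R                  # energy of the n-th standing wave, in joules
    print(f"n={n}: {E_n / eV / 1e12:.2f} TeV")
```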
2014-09-02 07:15:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8222734928131104, "perplexity": 479.7864521231871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921869.7/warc/CC-MAIN-20140901014521-00413-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/particle-distribution-diffusion.893828/
# Particle distribution, Diffusion 1. Nov 18, 2016 ### Selveste 1. The problem statement, all variables and given/known data An initial particle distribution n(r, t) is distributed along an infinite line along the $z$-axis in a coordinate system. The particle distribution is let go and spreads out from this line. $a)$ How likely is it to find a particle on a circle with distance $r$ from the $z$-axis at the time $t$? $b)$ What is the most likely distance $r$ from origo to find a particle at the time $t$? 2. Relevant equations The diffusion equation is given by $$\frac{\partial n}{\partial t} = D \nabla^2 n$$ where $\nabla^2$ is the laplace-operator, $D$ is the diffusion constant and $n$ is the particle density. 3. The attempt at a solution I take it by "line along the z-axis" they mean ON the z-axis(?). a) Im not sure how to go about this. Would it involve a fourier transform, or can it be done more easily? Any help on where/how to start would be appreciated. b) The most likely distance from the z-axis would be zero, because of symmetry(?). So the distance from origo would be z. Thanks. 2. Nov 18, 2016 ### Staff: Mentor I interpreted it in the same way. a) What is the distribution of an initial point-like source? How can you generalize this to a 1-dimensional source? There is no symmetry you can use as distance cannot be negative and different distances have different differential volumes. The most likely point will be on the z-axis, but the most likely distance won't. The problem statement is confusing (is it translated?), as r seems to be the radial direction, but then it is the distance to the z-axis, not the distance to the origin (where the most likely value would be very messy to calculate).
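Not part of the original thread, but a quick Monte Carlo sketch in Python can make part (b) concrete under the interpretation discussed above (distance measured from the $z$-axis): particles released on the line spread as independent Gaussians in $x$ and $y$ with variance $2Dt$, and the histogram of radial distances $r = \sqrt{x^2+y^2}$ peaks near $\sqrt{2Dt}$ rather than at zero.

```python
import numpy as np

# Monte Carlo illustration: diffusion away from a line source along the z-axis.
# Each particle's x and y displacements are independent Gaussians with variance 2*D*t.
D, t = 1.0, 1.0
n_particles = 1_000_000

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(2 * D * t), n_particles)
y = rng.normal(0.0, np.sqrt(2 * D * t), n_particles)
r = np.hypot(x, y)

counts, edges = np.histogram(r, bins=200)
peak_bin = np.argmax(counts)
r_peak = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])

print(r_peak)               # close to sqrt(2*D*t) ~= 1.41, the most likely distance
print(np.sqrt(2 * D * t))
```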
2017-11-19 07:35:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7856139540672302, "perplexity": 396.60096197993613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00008.warc.gz"}
https://chess.stackexchange.com/questions/8864/do-better-board-representations-than-the-standard-one-exist
# Do better board representations than the standard one exist?

Consider the classical representation of the state of a chess game. I'll draw the board:

R BK BNR
PPPP  PP
  N Q
    PP
    p
  np p
ppp   pp
r bkqbnr

Here is its move list:

d4 d5
c4 c6
Nf3 e6
Qd3 Nf6
...

Here is its FEN notation:

rnbqkb1r/pp3ppp/2p1pn2/3p4/2PP4/3Q1N2/PP2PPPP/RNB1KB1R w KQkq -

These three ways to describe the game state are isomorphic and contain exactly the same information represented differently. A computer would have no problem converting from one representation to the other (kind of, the castling flag isn't available in the first representation and the second one encodes all the game states). Computers use even stranger representations like bit boards which I won't go into here. In fact, there is an almost endless amount of ways to represent the state of a chess game. For example, here is one I just came up with:

R.BK.BNRPPPP. PP..N.Q.......PP......p.....np.p..ppp...ppr.bkqbnr

But if I asked you to "find the best move for white" the first representation would be much easier for you than the others. Why? Is it just because the classical representation is what most players are used to, or does it have some inherent advantage? Could you train your skill at playing chess using only FEN notation and over time come to prefer that? Has any research been done into this area? Maybe there is a better way to represent a chess game than the classical one which gives you a huge advantage when calculating lines deeply but just no one has found it?

• The board diagrams above are not equal. Only FEN includes the player on move, that a pawn be taken en passant, and which sides can castle which way. – Tony Ennis Mar 20 '15 at 11:27
• You might improve the question so the various representations of the position match one another. – Tony Ennis Mar 20 '15 at 11:53
• You're right about that, and there are other details about the game state that aren't encoded either, like the 50 move rule, repetition count and so on. I hope what I'm asking about gets across anyway. – Björn Lindqvist Mar 20 '15 at 16:31
• How do you convert from FEN to the move list? How do you get around the fact that the move list is not uniquely determined by the position? – bof Mar 20 '15 at 18:58
• My point is that you can play a chess game by only looking at the FEN instead of the board. I know that the move list contains more info in total but it's irrelevant for what I'm asking about. – Björn Lindqvist Mar 21 '15 at 13:56
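To back up the claim that converting between these representations is mechanical for a computer, here is a small Python sketch (my own illustration, not from the original question) that expands the piece-placement field of a FEN string into an 8×8 board diagram:

```python
def fen_to_board(fen):
    """Expand the piece-placement field of a FEN string into 8 rows of 8 characters."""
    placement = fen.split()[0]            # drop side-to-move, castling, en-passant fields
    rows = []
    for rank in placement.split("/"):     # FEN lists ranks from 8 down to 1
        row = ""
        for ch in rank:
            row += "." * int(ch) if ch.isdigit() else ch
        rows.append(row)
    return "\n".join(rows)

fen = "rnbqkb1r/pp3ppp/2p1pn2/3p4/2PP4/3Q1N2/PP2PPPP/RNB1KB1R w KQkq -"
print(fen_to_board(fen))
```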
2019-10-16 10:37:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33784469962120056, "perplexity": 982.5562537189071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00351.warc.gz"}
https://forum.eveuniversity.org/viewtopic.php?f=186&t=115730&view=next
## Stealing an Azbel... Almost

Forum rules: This forum can be viewed publicly.

### Stealing an Azbel... Almost

Discovery

On 2/9/2020 around 1600 I stumbled into J165412 (C2, HS/C1 static) from ZN0-SR in the back pocket of NSC. Initially there was nothing remarkable besides a few drones and a container in space above an Azbel. But wait. The Azbel was unanchoring. I got excited until I learned that unanchoring takes 7 days, and an Azbel would take an Orca fitted for max cargo to pick up. I decided to keep eyes on it anyway and check back every few hours or so. The highsec static was at that time 3 jumps from Jita, so I parked my trading alt in the hole after injecting it into a covops Helios (yay for skill injectors).

Planetary Industry

The next day I was poking around the hole a little bit and talking on NSC comms, when an Astero decloaked on the can above the Azbel. I still didn't want to expose myself, in case this new guy was an alt of the owning corp, and just watched. The same character started flying around the hole decloaked and ran a relic site I had a perch on before leaving the hole. While I was dscanning and researching the corp, the same guy came back in a Tayra and seemed to loot the can. I realized this guy was not at all in the same corp, so went to highsec and brought a Mammoth back, scooping the rest of about ~100M of PI and stront. This was around the time I convinced Xyrin Bacard to put his alt with a cargo Orca in the hole, logged off.

Waiting

The following 4 days were not very interesting, as the highsec connection was further away from everything. I spent a little time making safes, tacs, and oh, picking up another 150M in PI/stront that someone left around again. It was at this point I started to make sense of the killboard, and specifically the last fight that took place (standup fighter). It seemed to me that The Mighty Beans was evicting Stake Tatare on the 8th around 0300 and the fight involved fighters owned by Stake Tartare from that same Azbel. There was no activity in the hole after that, minus an opportunistic MTU kill, giving a pretty tight window of time. As I found the structure the next day under the ownership of The Mighty Beans, and the corp seemed to not have much activity past 06-07, I figured that 0330-0800 on 2/15 was a reasonable window for unanchoring.

The Perfect Harvester Hole

The day of the unanchoring was the day after the gala event started, and I was scouting for a HSC Guardians Gala VIP roam trying to find wormholes acceptable to attempt a run on the site. The hole I found however was even better than that, a C4 static highsec shattered Pulsar with 85+ Green combat sites and only two wormhole connections out. As it was nearing the end of Cutecumber Roll's PvE fleet, the HSC guys ran only a couple sites before running out of MTUs and calling it. I stayed in the hole and notified WHC, who then promptly brought these wonderful folks to run a harvester:

SPOILER WARNING! Danny Algaert, Erywin Chelien, John Echeriedes, Lucius Septimus Severus, Larkvi, Sophia Ligeti, Brock Carlisle

During the harvester I seem to have convinced most of the folks to come sit in the Azbel hole with me and wait for the unanchor as the time was nearing 0300. Also, Biwako Akami, WHC resident structure stealer, came online around this time offering to add his expertise.

Moment of Truth

The pirates involved: SPOILER WARNING!
Biwako Acami, John Echeriedes, Rusting, Lucius Septimus Severus, Brock Carlisle, Xyrin Bacard (under the alias Tyrin)

Around 0430, the quiet hole that I had grown to call home for the past week was starting to get busy. 3 new sigs popped and a few Tengus and Stratioses started popping around the hole. They were not part of The Mighty Beans, and came from a wandering connection. We decided that it was probably time to bring some rollers in the case that this 3rd party would start causing some trouble. We had people scanning down the chain, and also bringing rollers on alts to rage roll, when I made the dumb decision to log off my alt and bring in my main in a Tengu for combat support. My failure was one of miscommunication, as I thought we had plenty of people watching the hole, while in reality everyone had left the hole to pick up rollers or to scan down the chain. By the time I made it the 10 jumps in my Tengu and got in the hole, dscan was chaotic with ships and capsules and containers. I had missed the event by minutes. I crossed the Orca picking up the structure in warp to my perch, chaotically and unhelpfully yelling in comms for everyone to get in the hole. The rollers started rolling, Lucius and I started shooting, enemy pods were getting in the ships that were strewn across space. I started bookmarking cans in the wrong folder telling people to warp to them, causing further confusion, but the calm ways of the WHC folks tempered my angst. Xyrin and Biwako brought their Orcas in and started hauling stuff from containers, with Xyrin having the true pirate mentality in abandoning the fighting and his own Orca to steal one of the enemy's.

The Fight

As we had rolled out the majority of the reinforcements of The Mighty Beans, those left in the hole were bouncing around in capsules trying to grab any of the ships they found to fight us. Most of them forgot to turn on their hardeners and died quickly. The battle commenced at 0549 2/15.

Kills:
Gila +258M
Raven +181M
Megathron +204M
Epithal +2M
Sigil +4M
Procurer +44M
Epithal +1M
Sigil +1M
Procurer +44M
Epithal +204M
Sigil +204M

At some point, perhaps due to confusion, The Mighty Beans began killing their own ships. It is unclear whether these were manned or not, but it would be hilarious if they were.

Catalyst +7M
Ibis +0M

We only suffered one loss, though technically Xyrin had stolen this epithal, so probably still counts as a kill?

Epithal -1M

What we stole

While we were unable to loot the Azbel, we came away with around 1B in stolen ships/goods, including an Orca (thanks Xyrin for focusing on the important stuff) and a Crane:

LOOT

Battle Report
ISK Destroyed: 756,277,456.31
ISK Stolen: 1,035,753,101.52
ISK Lost: 1,038,357.65
ISK Delta: +1,790,992,200.18
Efficiency: 99.99%

Lessons Learned
- Hacking a structure is a thing. But only for reinforcement timers and not unanchoring.
- Always leave someone watching the structure. Work out shifts.
- Leave your ship at a safe and get in an enemy ship for the fight.

Thanks to all for coming out, it was some good fun, and my first WH stakeout.

### Re: Stealing an Azbel... Almost

The Mighty Beans are some good guys. The corp that I first joined after leaving E-Uni had several people who joined after the previous incarnation of The Mighty Beans got evicted by Inner Hell (and after that corp disbanded, The Mighty Beans reformed). Congratulations on the haul. That's some good stuff.
I'm sure that you helped teach the loot truck pilot a valuable lesson on falling asleep (the explanation that I got for the question of "Did E-Uni steal some stuff from the Beans recently?"). If the Beans couldn't extract their loot without something like this happening, they didn't deserve to hold onto it. Former E-Uni FC (LSC/WHC).
2020-10-23 00:10:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43398433923721313, "perplexity": 6371.348232210939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880401.35/warc/CC-MAIN-20201022225046-20201023015046-00090.warc.gz"}
http://mpdf.github.io/troubleshooting/error-messages.html
mPDF Manual – Troubleshooting

# Error messages

“Output has already been sent from the script - PDF file generation aborted.”

If you see this message it means that the script has sent output to the browser before starting to generate the PDF file. The most likely causes are:

• a PHP error message - this should be displayed in your browser giving details of the problem
• inadvertent whitespace in your PHP script files, e.g. leaving space before or after the PHP tags <?php or ?>. Note: it is recommended to leave out ?> at the end of PHP files.
• you are using object buffering to generate content for your PDF file - see below

If no error message appears, try setting:

    <?php
    $mpdf = new \Mpdf\Mpdf(['debug' => true]);

or use a PSR-3 Logger for more detailed logging.

### Object buffering

In order to catch error messages and prevent them being included in a PDF file (which would be corrupted), mPDF 2.5 introduced a method to detect whether there had been any output from the script prior to generating the PDF file in Output(). This includes checking ob_get_contents() - a PHP function that checks whether there is any output in the output buffer. If you use object buffering in the process of preparing the text for mPDF, this will falsely trigger the error message. If this is the case, add the following to your script to prevent it:

    <?php
    $mpdf = new \Mpdf\Mpdf([
        'debug' => true,
        'allow_output_buffering' => true
    ]);
2019-05-20 13:21:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5380306839942932, "perplexity": 5090.1572973449365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255944.3/warc/CC-MAIN-20190520121941-20190520143941-00029.warc.gz"}
http://sioc-journal.cn/Jwk_hxxb/CN/abstract/abstract344578.shtml
### Sixty Years of the Chemistry of Rare-earth Organometallic Complexes

Qian Changtao, Wang Chunhong, Chen Yaofeng

1. State Key Laboratory of Organometallic Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032

• Received: 2014-06-03 • Online: 2014-08-14 • Published: 2014-07-07
• Corresponding authors: Qian Changtao, Chen Yaofeng. E-mail: qianct@sioc.ac.cn; yaofchen@mail.sioc.ac.cn

The chemistry of rare-earth organometallic complexes has made important advances over the past sixty years. Ancillary ligands have developed from cyclopentadienyl, pentamethylcyclopentadienyl and indenyl to a variety of non-cyclopentadienyl ligands such as biphenolates, β-diketiminates, guanidinates and amidinates. The complexes themselves have evolved from simple tris-cyclopentadienyl rare-earth complexes to bis-cyclopentadienyl and mono-cyclopentadienyl rare-earth complexes of many kinds. The use of non-cyclopentadienyl ligands has not only broadened the structural variety of rare-earth organometallic complexes but has also greatly promoted their application in polymer and organic synthesis. Rare-earth organometallic complexes efficiently catalyze the homo- and copolymerization of olefins and the selective polymerization of conjugated dienes and polar monomers, and they also catalyze important organic reactions such as hydrogenation, hydroamination and hydrophosphination. This article reviews the development of the chemistry of rare-earth organometallic complexes over the past sixty years.

Rare-earth elements include scandium, yttrium and the fifteen lanthanides. Since Wilkinson and Birmingham reported the first example of a rare-earth organometallic complex, the chemistry of rare-earth organometallic complexes has advanced greatly during the past sixty years. A variety of Cp-containing rare-earth metal complexes, including mono-Cp, bis-Cp, and tri-Cp complexes, have been synthesized. Varying the size of the substituents on the Cp ring, introducing nitrogen- or oxygen-containing pendant arms to the Cp ring, or using ansa bis-Cp ligands makes the synthesis and stabilization of these three types of rare-earth metal Cp complexes possible. Cp-related ligands, such as indenyl and fluorenyl, have also been applied to rare-earth organometallic complexes. From the 1990s, there was a tendency to explore rare-earth organometallic complexes with ancillary ligands beyond Cp and its derivatives in order to search for more efficient rare-earth metal catalysts. Non-Cp ligands, such as biphenolates, β-diketiminates, amidinates, guanidinates, etc., have been introduced into the chemistry of rare-earth organometallic complexes, and a large number of non-Cp rare-earth organometallic complexes have been synthesized and characterized. The application of non-cyclopentadienyl ligands resulted not only in rare-earth organometallic complexes with new structural features, but also in catalysts with high activity and high selectivity for polymer synthesis and organic synthesis. Rare-earth organometallic complexes catalyze homo- and co-polymerization of olefins as well as specific polymerization of dienes and polar monomers. The discovery of the catalytic system composed of neutral rare-earth metal dialkyls/borate makes it possible to synthesize some interesting polymers that are difficult to prepare using other catalytic systems. Rare-earth organometallic complexes are also able to catalyze some important organic reactions, such as hydroamination, hydrophosphinylation, and hydroalkoxylation. Unlike late-transition-metal-catalyzed organic reactions, most rare-earth-metal-catalyzed reactions do not involve oxidative addition and reductive elimination steps. This review describes some important developments of the chemistry of rare-earth organometallic complexes in the past sixty years.
2020-01-25 21:59:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24615512788295746, "perplexity": 13929.872781263542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681412.74/warc/CC-MAIN-20200125191854-20200125221854-00519.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/amc.2013.7.475
# American Institute of Mathematical Sciences November  2013, 7(4): 475-484. doi: 10.3934/amc.2013.7.475 ## Correlation of binary sequence families derived from the multiplicative characters of finite fields 1 State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an, Shanxi 710071, China 2 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada Received  November 2012 Published  October 2013 In this paper, new constructions of the binary sequence families of period $q-1$ with large family size and low correlation, derived from multiplicative characters of finite fields for odd prime powers, are proposed. For $m ≥ 2$, the maximum correlation magnitudes of new sequence families $\mathcal{S}_m$ are bounded by $(2m-2)\sqrt{q}+2m+2$, and the family sizes of $\mathcal{S}_m$ are given by $q-1$ for $m=2$, $2(q-1)-1$ for $m=3$, $(q^2-1)q^{\frac{m-4}{2}}$ for $m$ even, $m>2$, and $2(q-1)q^{\frac{m-3}{2}}$ for $m$ odd, $m>3$. It is shown that the known binary Sidel'nikov-based sequence families are equivalent to the new constructions for the case $m=2$. Citation: Zilong Wang, Guang Gong. Correlation of binary sequence families derived from the multiplicative characters of finite fields. Advances in Mathematics of Communications, 2013, 7 (4) : 475-484. doi: 10.3934/amc.2013.7.475 ##### References: [1] P. Deligne, La conjecture de Weil I, Publ. Math. IHES, 43 (1974), 273-307.  Google Scholar [2] S. W. Golomb and G. Gong, Signal Design with Good Correlation: for Wireless Communications, Cryptography and Radar Applications, Cambridge University Press, 2005. doi: 10.1017/CBO9780511546907.  Google Scholar [3] L. Goubin, C. Mauduit and A. Sárközy, Construction of large families of pseudorandom binary sequences, J. Number Theory, 106 (2004), 56-69. doi: 10.1016/j.jnt.2003.12.002.  Google Scholar [4] Y. K. Han and K. Yang, New $M$-ary sequence families with low correlation and large size, IEEE Trans. Inf. Theory, 55 (2009), 1815-1823. doi: 10.1109/TIT.2009.2013040.  Google Scholar [5] T. Helleseth and P. V. Kumar, Sequences with low correlation, in Handbook of Coding Theory (eds. V. Pless and C. Huffman), Elsevier Science Publishers, 1998, 1765-1853.  Google Scholar [6] Y. Kim, J. Chung, J. S. No and H. Chung, New families of $M$-ary sequences with low correlation constructed from Sidel'nikov sequences, IEEE Trans. Inf. Theory, 54 (2008), 3768-3774. doi: 10.1109/TIT.2008.926428.  Google Scholar [7] Y. J. Kim and H. Y. Song, Cross correlation of Sidel'nikov sequences and their constant multiples, IEEE Trans. Inf. Theory, 53 (2007), 1220-1224. doi: 10.1109/TIT.2006.890723.  Google Scholar [8] Y. J. Kim, H. Y. Song, G. Gong and H. Chung, Crosscorrelation of $q$-ary power residue sequences of period $p$, in Proc. IEEE ISIT, 2006, 311-315. doi: 10.1109/ISIT.2006.261604.  Google Scholar [9] P. V. Kumar, T. Helleseth, A. R. Calderbank and A. R. Hammons, Large families of quaternary sequences with low correlation, IEEE Trans. Inf. Theory, 42 (1996), 579-592. doi: 10.1109/18.485726.  Google Scholar [10] V. M. Sidel'nikov, Some $k$-valued pseudo-random sequences and nearly equidistant codes, Probl. Inf. Transm., 5 (1969), 12-16.  Google Scholar [11] V. M. Sidel'nikov, On mutual correlation of sequences, Soviet Math. Dokl, 12 (1971), 197-201.  Google Scholar [12] D. Wan, Generators and irreducible polynomials over finite fields, Math. Comput., 66 (1997), 1195-1212. doi: 10.1090/S0025-5718-97-00835-1.  Google Scholar [13] Z. Wang and G. 
Gong, New polyphase sequence families with low correlation derived from the Weil bound of exponential sums, IEEE Trans. Inf. Theory, 59 (2013), 3990-3998. doi: 10.1109/TIT.2013.2243496.  Google Scholar [14] A. Weil, On some exponential sums, Proc. Natl. Acad. Sci. USA, 34 (1948), 204-207. doi: 10.1073/pnas.34.5.204.  Google Scholar [15] L. R. Welch, Lower bounds on the minimum correlation of signal, IEEE Trans. Inf. Theory, 20 (1974), 397-399. Google Scholar [16] N. Y. Yu and G. Gong, Multiplicative characters, the Weil Bound, and polyphase sequence families with low correlation, IEEE Trans. Inf. Theory, 56 (2010), 6376-6387. doi: 10.1109/TIT.2010.2079590.  Google Scholar [17] N. Y. Yu and G. Gong, New construction of $M$-ary sequence families with low correlation from the structure of Sidelnikov sequences, IEEE Trans. Inf. Theory, 56 (2010), 4061-4070. doi: 10.1109/TIT.2010.2050793.  Google Scholar
2021-06-17 05:48:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.786539614200592, "perplexity": 3889.787902174381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00115.warc.gz"}
https://discuss.codechef.com/t/need-help-in-spoj-problem/73226
# Need help in SPOJ Problem I am trying to solve this problem on SPOJ. But I am clueless, how should i proceed? Essentially, we need to find longest path in given undirected weighted graph using dfs perhaps from every node and maybe use dynamic programming,but how to do that? Please help. I would appreciate it. Let’s root the tree at 1. Let dp_u be the largest path from u to a leaf in the subtree of u. We can see that dp_u = max(dp_v + w), where v is adjacent to u and w is the weight of the edge between u and v. This way we can calculate the answer for node 1. Let’s say some node v is adjacent to u with a weight w. We can reroot our tree from u to v. To do that, we must first let dp_u as the set of all possible values of dp_v + w. Now we can remove \max(dp_v) + w from dp_u, and then add \max(dp_u) + w to dp_u. Now we have successfully calculated the answer for v, which is \max(dp_v). Now we must reroot our trees over all nodes. To get this path, Let’s define f(i) as a path that starts at i, and then goes over all nodes in the subtree of i and ends at i. We can notice that f(i) = \sum\limits_{v\in adj_u}(i + f(v)) + i Fun fact : This is exactly how a DFS moves around the graph. You can trace the new DFS to reroot the tree, or memorise the path. Code with 2 DFS #include <iostream> #include <bits/stdc++.h> #define mp make_pair #define pb push_back using namespace std; using ll = long long int; void solve(){ int n; cin>>n; for(int i=0;i<n-1;i++){ int u,v,w; cin>>u>>v>>w; --u;--v; } vector<multiset<ll>> dp(n, {0}); vector<pair<int,int>> path; path.reserve(n<<1); path.pb({0,0}); function<void(int,int)> dp1 = [&](int u,int par){ const auto &v = x.first, &w = x.second; if(v == par){ continue; } dp1(v, u); dp[u].insert(*(--dp[v].end()) + w); } }; dp1(0,0); vector<ll> ans(n); function<void(int,int)> dp2 = [&](int u,int par){ ans[u] = *(--dp[u].end()); const auto &v = x.first, &w = x.second; if(v == par){ continue; } dp[u].erase(dp[u].find(*(--dp[v].end()) + w)); dp[v].insert(*(--dp[u].end()) + w); dp2(v, u); dp[v].erase(dp[v].find(*(--dp[u].end()) + w)); dp[u].insert(*(--dp[v].end()) + w); } }; dp2(0,0); for(const auto &x : ans){ cout<<x<<" "; } cout<<"\n"; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int t; cin>>t; while(t--){ solve(); } } Code with path memorisation #include <iostream> #include <bits/stdc++.h> #define mp make_pair #define pb push_back using namespace std; using ll = long long int; void solve(){ int n; cin>>n; for(int i=0;i<n-1;i++){ int u,v,w; cin>>u>>v>>w; --u;--v; } vector<multiset<ll>> dp(n, {0}); vector<pair<int,int>> path; path.reserve(n<<1); path.pb({0,0}); function<void(int,int)> solve = [&](int u,int par){ const auto &v = x.first, &w = x.second; if(v == par){ continue; } path.pb({v,w}); solve(v, u); path.pb({u,w}); dp[u].insert(*(--dp[v].end()) + w); } }; solve(0,0); vector<ll> ans(n); for(int i=1;i<path.size();i++){ const auto& u = path[i-1], v = path[i]; dp[u.first].erase(dp[u.first].find(*(--dp[v.first].end()) + v.second)); dp[v.first].insert(*(--dp[u.first].end()) + v.second); ans[v.first] = *(--dp[v.first].end()); } for(const auto &x : ans){ cout<<x<<" "; } cout<<"\n"; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int t; cin>>t; while(t--){ solve(); } } 1 Like Hey, thanks for the reply. Can you please explain in little more detail from this line onwards: “Now we can remove \max(dp_v) + w from dp_u, and then add \max(dp_u) + w to dp_u.” EDIT: After analysing the codes, I understood what and why we are doing. 
Thanks a lot for your help.
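As a complement to the multiset-based solutions above, here is a compact two-pass rerooting sketch of the same idea, kept to a single test case for brevity: `down1`/`down2` hold the best and second-best distances going down from a node, and `best_up` holds the best distance leaving a node through its parent. All names and the exact I/O format are illustrative assumptions, not taken from the SPOJ statement.

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int n;
vector<vector<pair<int,ll>>> adj;      // adj[u] = {neighbour, edge weight}
vector<ll> down1, down2, best_up;      // downward bests and best path through the parent

// First pass: longest (and second longest, via distinct children) downward path from u.
void dfs_down(int u, int par) {
    for (auto [v, w] : adj[u]) {
        if (v == par) continue;
        dfs_down(v, u);
        ll cand = down1[v] + w;
        if (cand >= down1[u]) { down2[u] = down1[u]; down1[u] = cand; }
        else if (cand > down2[u]) down2[u] = cand;
    }
}

// Second pass (rerooting): longest path from v that starts by going to its parent u.
void dfs_up(int u, int par) {
    for (auto [v, w] : adj[u]) {
        if (v == par) continue;
        // Either continue upward from u, or go down from u while avoiding v's branch.
        ll from_u = max(best_up[u], (down1[v] + w == down1[u]) ? down2[u] : down1[u]);
        best_up[v] = from_u + w;
        dfs_up(v, u);
    }
}

int main() {
    // Illustrative input: n, then n-1 edges "u v w" (1-indexed), one test case only.
    cin >> n;
    adj.assign(n, {});
    down1.assign(n, 0); down2.assign(n, 0); best_up.assign(n, 0);
    for (int i = 0; i < n - 1; ++i) {
        int u, v; ll w;
        cin >> u >> v >> w;
        --u; --v;
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    }
    dfs_down(0, -1);
    dfs_up(0, -1);
    for (int u = 0; u < n; ++u)
        cout << max(down1[u], best_up[u]) << " \n"[u == n - 1];  // longest path starting at u
    return 0;
}
```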
2021-04-16 15:31:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7322148084640503, "perplexity": 6100.135685415475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00066.warc.gz"}
https://www.mathallstar.org/Practice/SearchByCategory?page=6&Categories=19
###### back to index | new An infinite number of equilateral triangles are constructed as shown on the right. Each inner triangle is inscribed in its immediate outsider and is shifted by a constant angle $\beta$. If the area of the biggest triangle equals to the sum of areas of all the other triangles, find the value of $\beta$ in terms of degrees. For $-1 < r < 1$, let $S(r)$ denote the sum of the geometric series $$12+12r+12r^2+12r^3+\cdots .$$Let $a$ between $-1$ and $1$ satisfy $S(a)S(-a)=2016$. Find $S(a)+S(-a)$. A strictly increasing sequence of positive integers $a_1$, $a_2$, $a_3$, $\cdots$ has the property that for every positive integer $k$, the subsequence $a_{2k-1}$, $a_{2k}$, $a_{2k+1}$ is geometric and the subsequence $a_{2k}$, $a_{2k+1}$, $a_{2k+2}$ is arithmetic. Suppose that $a_{13} = 2016$. Find $a_1$. Initially Alex, Betty, and Charlie had a total of $444$ peanuts. Charlie had the most peanuts, and Alex had the least. The three numbers of peanuts that each person had formed a geometric progression. Alex eats $5$ of his peanuts, Betty eats $9$ of her peanuts, and Charlie eats $25$ of his peanuts. Now the three numbers of peanuts each person has forms an arithmetic progression. Find the number of peanuts Alex had initially. Triangle $ABC_0$ has a right angle at $C_0$. Its side lengths are pariwise relatively prime positive integers, and its perimeter is $p$. Let $C_1$ be the foot of the altitude to $\overline{AB}$, and for $n \geq 2$, let $C_n$ be the foot of the altitude to $\overline{C_{n-2}B}$ in $\triangle C_{n-2}C_{n-1}B$. The sum $\sum_{i=1}^\infty C_{n-2}C_{n-1} = 6p$. Find $p$. The sequences of positive integers $1,a_2, a_3,...$ and $1,b_2, b_3,...$ are an increasing arithmetic sequence and an increasing geometric sequence, respectively. Let $c_n=a_n+b_n$. There is an integer $k$ such that $c_{k-1}=100$ and $c_{k+1}=1000$. Find $c_k$. For $1 \leq i \leq 215$ let $a_i = \dfrac{1}{2^{i}}$ and $a_{216} = \dfrac{1}{2^{215}}$. Let $x_1, x_2, ..., x_{215}$ be positive real numbers such that $\sum_{i=1}^{215} x_i=1$ and $\sum_{i \leq i < j \leq 216} x_ix_j = \dfrac{107}{215} + \sum_{i=1}^{216} \dfrac{a_i x_i^{2}}{2(1-a_i)}$. The maximum possible value of $x_2=\dfrac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$. Starting at the origin, a bug crawls 1 unit up, 2 units right, 3 units down and 4 units left. From this new point, the bug repeats this entire sequence of four moves 2015 more times, for a total of 2016 times. The coordinates of the bug’s final location are $(a, b)$. What is the value of $a + b$? For each positive integer $n$, $a_n = 9n + 2$ and $b_n = 7n + 3$. If the values common to both sequences are written as a sequence, the $n^{th}$ term of that sequence can be expressed as $pn + q$. What is the value of $p − q$? Let the sum of first $n$ terms of arithmetic sequence $\{a_n\}$ be $S_n$, and the sum of first $n$ terms of arithmetic sequence $\{b_n\}$ be $T_n$. If $\frac{S_n}{T_n}=\frac{2n}{3n+7}$, compute the value of $\frac{a_8}{b_6}$. Suppose every term in the sequence $$1, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, \cdots$$ is either $1$ or $2$. If there are exactly $(2k-1)$ twos between the $k^{th}$ one and the $(k+1)^{th}$ one, find the sum of its first $2014$ terms. Let $c_1, c_2, c_3, \cdots$ be a series of concentric circles whose radii form a geometric sequence with common ratio as $r$. Suppose the areas of rings which are formed by two adjacent circles are $S_1, S_2, S_3, \cdots$. 
Which statement below is correct regarding the sequence $\{S_n\}$? A) It is not a geometric sequence B) It is a geometric sequence and its common ratio is $r$ C) It is a geometric sequence and its common ratio is $r^2$ D) It is a geometric sequence and its common ratio is $r^2-1$ Given the sequence $\{a_n\}$ satisfies $a_n+a_m=a_{n+m}$ for any positive integers $n$ and $m$. Suppose $a_1=\frac{1}{2013}$. Find the sum of its first $2013$ terms. Let sequence $\{a_n\}$ satisfy $a_1=2$ and $a_{n+1}=\frac{2(n+2)}{n+1}a_n$ where $n\in \mathbb{Z}^+$. Compute the value of $$\frac{a_{2014}}{a_1+a_2+\cdots+a_{2013}}$$ Let $a_1, a_2,\cdots, a_n > 0, n\ge 2,$ and $a_1+a_2+\cdots+a_n=1$. Prove $$\frac{a_1}{2-a_1} + \frac{a_2}{2-a_2}+\cdots+\frac{a_n}{2-a_n}\ge\frac{n}{2n-1}$$ Suppose all the terms in a geometric sequence $\{a_n\}$ are positive. If $|a_2-a_3|=14$ and $|a_1a_2a_3|=343$, find $a_5$. Suppose no term in an arithmetic sequence $\{a_n\}$ equals $0$. Let $S_n$ be the sum of its first $n$ terms. If $S_{2n-1} = a_n^2$, find the expression for its $n^{th}$ term $a_n$. Let $S_n$ be the sum of first $n$ terms in sequence $\{a_n\}$ where $$a_n=\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}$$ Find $\lfloor{S_n}\rfloor$ where the floor function $\lfloor{x}\rfloor$ returns the largest integer not exceeding $x$. Let $\alpha$ and $\beta$ be the two roots of the equation $x^2 -x - 1=0$. If $$a_n = \frac{\alpha^n - \beta^n}{\alpha -\beta}\quad(n=1, 2, \cdots)$$ Show that - For any positive integer $n$, it always hold $a_{n+2}=a_{n+1}+a_n$ - Find all positive integers $a, b$ $( a < b )$ satisfying $b\mid a_n-2na^n$ holds for any positive integer $n$ Let $\{a_n\}$ be an increasing geometric sequence satisfying $a_1+a_2=6$ and $a_3+a_4=24$. Let $\{b_n\}$ be another sequence satisfying $b_n=\frac{a_n}{(a_n-1)^2}$. If $T_n$ is the sum of first $n$ terms in $\{b_n\}$, show that for any positive integer $n$, it always holds that $T_n < 3$. Given a sequence $\{a_n\}$, if $a_n\ne 0$, $a_1=1$, and $3a_na_{n-1}+a_n+a_{n-1}=0$ for any $n\ge 2$, find the general term of $a_n$. If a sequence $\{a_n\}$ satisfies $a_1=1$ and $a_{n+1}=\frac{1}{16}\big(1+4a_n+\sqrt{1+24a_n}\big)$, find the general term of $a_n$. Let sequence $\{a_n\}$ satisfy $a_0=1$ and $a_n=\frac{\sqrt{1+a_{n-1}^2}-1}{a_{n-1}}$. Prove $a_n > \frac{\pi}{2^{n+2}}$. Show that $1+3+6+\cdots+\frac{n(n+1)}{2}=\frac{n(n+1)(n+2)}{6}$. Show that $1+4+7+\cdots+(3n-2)=\frac{n(3n-1)}{2}$
2022-08-17 08:01:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422594904899597, "perplexity": 83.71810190552364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00558.warc.gz"}
https://www.gamedev.net/forums/topic/672360-shaders-not-compiling-with-x64-build-target/
# DX11 Shaders not compiling with x64 build target This topic is 834 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Hi guys, I have a basic DX11 framework that I have created. It works perfectly in an x86 build (VS2015), but the x64 build throws an error at the shader compile stage. The shader I am testing with is bare bones. So I am wondering if there is something different that you need to do in a shader file to make it work in an x64 build? float4 VS_Main( float4 pos : POSITION ) : SV_POSITION { return pos; } float4 PS_Main( float4 pos : SV_POSITION ) : SV_TARGET { return float4( 1.0f, 1.0f, 1.0f, 1.0f ); } ##### Share on other sites Always the way isn't it? Right after you post the question you work out the answer. I had neglected to set the working directory for the x64 build. ##### Share on other sites I remember one time I worked on a bug for 3 days straight. After exhausting every possibility I posted on here, only to figure it out like 5 min after posting. Ahh the fun of programming • 10 • 11 • 9 • 16 • 18 • ### Similar Content • I wanted to see how others are currently handling descriptor heap updates and management. I've read a few articles and there tends to be three major strategies : 1 ) You split up descriptor heaps per shader stage ( i.e one for vertex shader , pixel , hull, etc) 2) You have one descriptor heap for an entire pipeline 3) You split up descriptor heaps for update each update frequency (i.e EResourceSet_PerInstance , EResourceSet_PerPass , EResourceSet_PerMaterial, etc) The benefits of the first two approaches is that it makes it easier to port current code, and descriptor / resource descriptor management and updating tends to be easier to manage, but it seems to be not as efficient. The benefits of the third approach seems to be that it's the most efficient because you only manage and update objects when they change. • hi, until now i use typical vertexshader approach for skinning with a Constantbuffer containing the transform matrix for the bones and an the vertexbuffer containing bone index and bone weight. Now i have implemented realtime environment  probe cubemaping so i have to render my scene from many point of views and the time for skinning takes too long because it is recalculated for every side of the cubemap. For Info i am working on Win7 an therefore use one Shadermodel 5.0 not 5.x that have more options, or is there a way to use 5.x in Win 7 My Graphic Card is Directx 12 compatible NVidia GTX 960 the member turanszkij has posted a good for me understandable compute shader. ( for Info: in his engine he uses an optimized version of it ) Now my questions is it possible to feed the compute shader with my orignial vertexbuffer or do i have to copy it in several ByteAdressBuffers as implemented in the following code ? the same question is about the constant buffer of the matrixes my more urgent question is how do i feed my normal pipeline with the result of the compute Shader which are 2 RWByteAddressBuffers that contain position an normal for example i could use 2 vertexbuffer bindings 1 containing only the uv coordinates 2.containing position and normal How do i copy from the RWByteAddressBuffers to the vertexbuffer ? (Code from turanszkij ) Here is my shader implementation for skinning a mesh in a compute shader: • Hi, can someone please explain why this is giving an assertion EyePosition!=0 exception? 
It looks like DirectX doesnt want the 2nd parameter to be a zero vector in the assertion, but I passed in a zero vector with this exact same code in another program and it ran just fine. (Here is the version of the code that worked - note XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime - I debugged it - but it throws no exceptions. and here is the repo with the alternative version of the code that is working with a value of (0,0,0) for the second parameter. • Hi, can somebody please tell me in clear simple steps how to debug and step through an hlsl shader file? I already did Debug > Start Graphics Debugging > then captured some frames from Visual Studio and double clicked on the frame to open it, but no idea where to go from there. I've been searching for hours and there's no information on this, not even on the Microsoft Website! They say "open the  Graphics Pixel History window" but there is no such window! Then they say, in the "Pipeline Stages choose Start Debugging"  but the Start Debugging option is nowhere to be found in the whole interface. Also, how do I even open the hlsl file that I want to set a break point in from inside the Graphics Debugger? All I want to do is set a break point in a specific hlsl file, step thru it, and see the data, but this is so unbelievably complicated
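Coming back to the original question at the top of this thread (shaders failing to compile only for the x64 target because the working directory was not set): below is a minimal sketch of compiling the posted VS_Main/PS_Main entry points with D3DCompileFromFile and printing the compiler output. The file name, flags and error handling are illustrative assumptions, not taken from the poster's framework; the point is that a wrong working directory typically shows up as a failed HRESULT with no error blob, rather than as an HLSL error.

```cpp
#include <d3dcompiler.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3dcompiler.lib")

using Microsoft::WRL::ComPtr;

// Compile one entry point from an .hlsl file and report compiler/file errors.
static HRESULT CompileShader(const wchar_t* file, const char* entry,
                             const char* target, ComPtr<ID3DBlob>& bytecode)
{
    ComPtr<ID3DBlob> errors;
    HRESULT hr = D3DCompileFromFile(file, nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                    entry, target, D3DCOMPILE_ENABLE_STRICTNESS, 0,
                                    bytecode.GetAddressOf(), errors.GetAddressOf());
    if (FAILED(hr))
    {
        if (errors)   // HLSL syntax problems end up in the error blob
            std::printf("%s\n", static_cast<const char*>(errors->GetBufferPointer()));
        else          // no blob usually means the file was not found: check the working directory
            std::printf("hr = 0x%08X (check the working directory / file path)\n",
                        static_cast<unsigned>(hr));
    }
    return hr;
}

// Usage with the shader from the original post (file name is illustrative):
//   ComPtr<ID3DBlob> vs, ps;
//   CompileShader(L"BasicShader.hlsl", "VS_Main", "vs_5_0", vs);
//   CompileShader(L"BasicShader.hlsl", "PS_Main", "ps_5_0", ps);
```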
2018-01-20 00:18:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24309876561164856, "perplexity": 2479.475633982119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888302.37/warc/CC-MAIN-20180119224212-20180120004212-00463.warc.gz"}
https://www.physicsoverflow.org/44856/differential-cross-section-m%C3%B8ller-scattering-square-meters?show=44890#a44890
What is the differential cross section of Møller scattering in square meters + 1 like - 0 dislike 1203 views Can anyone provide the formula for the differential cross section in square meters (sic) for Møller scattering? And maybe even give an authoritative reference? I need the formula both for checking my math and for testing some computer programs. I am interested in a formula which uses the International System of Units (SI) in general and m (meter), kg (kilogram), s (second) and A (Ampere) in particular. Most sources can agree that the differential cross section in the Center of Mass (CM) coordinate system for a scattering angle $\theta \in ( 0 , \pi )$ and for incomming electrons which each have momentum $p \in \mathbb{R}_+$ is given by $\frac { d \sigma } { d \Omega } = \beta \frac { 4 ( ( m c ) ^ 2 + 2 p ^ 2 ) ^ 2 + ( 4 p ^ 4 - 3 ( ( m c ) ^ 2 + 2 p ^ 2 ) ^ 2 ) ( \sin \theta ) ^ 2 + p ^ 4 ( \sin \theta ) ^ 4 } { p ^ 4 ( \sin \theta ) ^ 4 }$ for some constant $\beta \in \mathbb{R}_+$ which is measured in square meters in the SI system. Unfortunately, the sources I have found are not very explicit about $\beta$ and may even disagree with one another. So the question is: can anyone provide a formula for $\beta$ using the following physical constants? recategorized Feb 27 can you give a link for where this β appears? $\beta$ is my name for the quantity I ask for. So it probably does not appear anywhere outside this question. Wikipedia https://en.wikipedia.org/wiki/M%C3%B8ller_scattering says: So Wikipedia implicitly says $\beta = \alpha ^ 2 / E ^ 2 _ { C M }$. Unfortunately, that does not answer the question since Wikipedia implicitly sets the speed of light to 1 and thus does not use SI units. Could this lecture page 3 helo you?https://www2.ph.ed.ac.uk/~vjm/Lectures/ParticlePhysics2007_files/Lecture2.pdf You can get the whole crossection in SI units , and you know the values of the constants in your β in SI units. Page 3 (Slide 5) of https://www2.ph.ed.ac.uk/~vjm/Lectures/ParticlePhysics2007_files/Lecture2.pdf indeed says how to restore from natural units to SI units by dividing energy by $c \hbar$ as is also done in the answer by Vladimir Kalitvianski. So if https://www2.ph.ed.ac.uk/~vjm/Lectures/ParticlePhysics2007_files/Lecture2.pdf and https://en.wikipedia.org/wiki/M%C3%B8ller_scattering agree on what natural units mean, and if the latter uses natural units, and if the latter is correct, then we have $\beta = c ^ 2 \hbar ^ 2 \alpha ^ 2 / E _ { \mathrm { CM } } ^ 2$ and my question is answered. But as mentioned in my comment to Kalitvianski's answer, nagging doubts remain. + 0 like - 0 dislike One can see in the grue answer that $d\sigma/d\Omega$ is proportional to $1/E_{\text{CM}}^2$ (the rest is dimensionless and thus unit-independent). Now, there is a center of mass exponential: $\text{e}^{ {-\text{i}E_{\text{CM}}\cdot t/\hbar}}$ in the scattering problem. By multiplying the numerator and denominator in it by $c$, we get $\text{e}^{ {-\text{i}{E_\text{CM}}\cdot ct/c\hbar}}$. The product $ct$ can be expressed in meters, so the ratio $c\hbar/E_{\text{CM}}$ can also be expressed in meters. From here one can easily restore the dimension of the cross section in square meters (knowing that $m_e c^2 \approx 0.5\;MeV$ and $r _ e = \frac { \hbar \alpha } { m c } \approx 2.81794032 \cdot 10 ^ { - 15 } \mbox{meter}$. answered Mar 2 by (92 points) edited Mar 8 I read @VladimirKalitvianski 's answer like this: Define $m = m _ e$ so that we can use $m$ for the mass of the electron. 
Now $c \hbar / E _ { \mathrm { CM } }$ has units of meter and Wikipedia says $\beta = \alpha ^ 2 / E _ { \mathrm { CM } } ^ 2$ so if we restore units using $c = 1$ and $\hbar = 1$ we get $\beta = \frac { c ^ 2 \hbar ^ 2 \alpha ^ 2 } { E _ { \mathrm { CM } } ^ 2 }$ As suggested by Vladimir Kalitvianski we can compute $E _ { \mathrm { CM } }$ from the momentum $p \in \mathbb { R } _ +$. The formula would be $E _ { \mathrm { CM } } = 2 \sqrt { p ^ 2 c ^ 2 + m ^ 2 c ^ 4 }$ where we can make use of $m c ^ 2 \equiv 0.5 MeV$. That gives $\beta = \frac { \hbar ^ 2 \alpha ^ 2 } { 4 ( p ^ 2 + m ^ 2 c ^ 2 ) }$ That answer could easily be correct. But now problems start. Wikipedia is silent about what unit system is used. One can easily see that Wikipedia uses $c = 1$. A qualified guess would be that Wikipedia also uses $\hbar = 1$. But what about vacuum permittivity $\varepsilon$? Is it eg $1$ or $1 / ( 4 \pi )$? A factor of $4 \pi$ is dimensionless and thus escapes dimensional analysis. If I start from David J Griffiths, Introduction to elementary particles, 2nd rev. version, and if I do a long derivation then I end up with $\beta = \frac { 4 \pi ^ 2 \varepsilon ^ 2 c ^ 2 \hbar ^ 2 \alpha ^ 2 } { E _ { \mathrm { CM } } ^ 2 }$ Since Griffiths uses Gaussian units we have $4 \pi \varepsilon = 1$ so $\beta = \frac { c ^ 2 \hbar ^ 2 \alpha ^ 2 } { 4 E _ { \mathrm { CM } } ^ 2 }$ That is off by a factor 4 from Wikipedia. Since 4 is dimensionless that factor also escapes dimensional analysis. So I end up with the question: who is wrong? Griffiths? Wikipedia? Me? Wikipedia is not an authoritative source and my derivation based on Griffiths could be flawed. So I look for an answer which is independent of both Wikipedia and my own derivations. But what about vacuum permittivity ε? Is it eg 1 or 1/(4π)? A factor of 4π is dimensionless and thus escapes dimensional analysis. Indeed, it should escape the dimentional analysics, but it cannot escape the numerical value. Whatever is used (I guess $\varepsilon=1$), it contributes anyway to dimentionless factors in the cross section. @VladimirKalitvianski Ok, but how can I then know if $\beta = \frac { c ^ 2 \hbar ^ 2 \alpha ^ 2 } { E _ { \mathrm { CM } } }$ or $\beta = \frac { c ^ 2 \hbar ^ 2 \alpha ^ 2 } { 4 E _ { \mathrm { CM } } }$ or both are incorrect? I guess the first expression is right, i.e., without 4 in the denominator, but with $E$ squared. Thanks for pointing out $E$ squared. Also, I made a flaw when copying my results based on Griffiths to this thread. The $\beta$ based on Wikipedia is still $\beta = c ^ 2 \hbar ^ 2 \alpha ^ 2 / E _ { \mathrm { CM } } ^ 2$ but the one based on Griffiths should have been $\beta = c ^ 2 \hbar ^ 2 \alpha ^ 2 / ( 2 E _ { \mathrm { CM } } ^ 2 )$. It turns out both of these $\beta$ are correct. The difference is that Griffiths defines the differential cross section in an unorthodox way: $\frac { d \sigma } { d \Omega } = \left( \frac { \hbar c } { 8 \pi } \right) ^ 2 \frac { S | { \cal M } | ^ 2 } { ( E _ 1 + E _ 2 ) ^ 2 } \frac { | \mathbf { p } _ f | } { | \mathbf { p } _ i | }$ The peculiarity in Griffiths' definition is that it includes $S$ which is $1$ if the two particles in the final state are different and $1 / 2$ if they are identical. That makes the differential cross section of Møller scattering according to Griffiths half as big as the differential cross section according to other sources. 
As an example, Peskin and Schroeder, An Introduction to Quantum Field Theory, 1995, page 108, very explicitly say that they include $S$ in the total cross section but not in the differential one, which makes perfect sense. What Peskin and Schroeder do seems to be the standard. By the way, I have been able to derive $\varepsilon \hbar c = 1$ from one of the equations in Wikipedia. So there are strong reasons to believe that Wikipedia uses $\varepsilon = \hbar = c = 1$ as you say. There is further evidence that Wikipedia is right in http://www-heaf.astro.hiroshima-u.ac.jp/thesis/ogata2001.pdf which states $\frac{d\sigma}{d\Omega} = \frac{r_e^2}{4}\left(\frac{m_e c}{p}\right)^2 \frac{(3+\cos^2\theta)^2}{\sin^4\theta}$ in the ultrarelativistic limit, where $m$ is the rest mass of the electron, $p$ is the momentum of each particle in the CM system and $r_e$ is the classical electron radius. @VladimirKalitvianski Could I persuade you to edit your answer to include that $$\frac{d\sigma}{d\Omega} = \frac{r_e^2}{4}\,\frac{(mc)^2}{p^2+(mc)^2}\,\frac{4((mc)^2+2p^2)^2 + (4p^4 - 3((mc)^2+2p^2)^2)\sin^2\theta + p^4\sin^4\theta}{p^4\sin^4\theta}$$ where the classical electron radius $r_e$ is given by $r_e = \frac{\hbar\alpha}{mc} \approx 2.81794032 \cdot 10^{-15}\ \text{meter}$ That would very directly answer the original question. It seems one cannot mark an answer as correct, but at least I can upvote your answer (if my limited 10 point reputation permits). In any case, thanks for your help on sorting all this out. Vladimir I wonder how come you mention only one digit after the dot in the value of the mass of the electron, while for r_e you seem to mention more than one digit after the dot. Why is that?
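Since the question mentions testing computer programs, a small numeric sketch can serve as a cross-check: it evaluates the final formula quoted just above (with $r_e$, $mc$, $p$ and $\theta$ in SI units) so the output can be compared against an independent implementation. The electron constants and the example kinematics below are values I am supplying, not numbers taken from the thread.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // SI constants (supplied here; only r_e appears explicitly in the thread)
    const double r_e = 2.81794032e-15;    // classical electron radius [m]
    const double mc  = 2.730924e-22;      // electron mass times c [kg m/s]

    // Example kinematics: CM momentum of each electron and scattering angle
    const double pi    = 3.14159265358979323846;
    const double p     = 5.0e-22;         // [kg m/s]
    const double theta = pi / 3.0;        // 60 degrees

    const double s2   = std::sin(theta) * std::sin(theta);
    const double A    = mc * mc + 2.0 * p * p;   // shorthand for (mc)^2 + 2p^2

    // dSigma/dOmega in m^2 per steradian, following the formula quoted above
    const double dsdo = (r_e * r_e / 4.0)
                      * (mc * mc) / (p * p + mc * mc)
                      * (4.0 * A * A
                         + (4.0 * std::pow(p, 4) - 3.0 * A * A) * s2
                         + std::pow(p, 4) * s2 * s2)
                      / (std::pow(p, 4) * s2 * s2);

    std::printf("dSigma/dOmega = %.6e m^2/sr\n", dsdo);
    return 0;
}
```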
2022-12-03 05:48:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8648610711097717, "perplexity": 472.0515468394565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00795.warc.gz"}
https://stats.stackexchange.com/questions/502881/how-do-i-correctly-treat-nested-variables-in-a-regression-given-multicollinearit/502995
# How do I correctly treat nested variables in a regression given multicollinearity of said variables? As per the question, I want to run a regression of variables where those variables are nested within each other and therefore highly correlated. Here is my specific example for context: I study the effects of Extraversion on various outcomes. Theoretically, Extraversion (a personality trait) is itself made up of two lower-level 'personality aspects', being Assertiveness and Enthusiasm. Despite Extraversion being made up of these two lower-level traits, it is still possible for Extraversion to explain additional variance in an Outcome over and above the individual effects of Assertiveness and Enthusiasm. 20 items (questions on a questionnnaire) are typically used to measured all three constructs (10 for Assertiveness, 10 for Enthusiasm, and the full 20 for Extraversion). The variables are therefore highly correlated (usually > 0.70). I would like to know how to correctly run a regression to best figure out what the contribution is of each of these three traits, given that they are necessarily highly correlated. Some made-up data in the form of a correlation matrix to illustrate: #Correlation matrix. MyMatrix <- matrix( c(1.0, 0.7, 0.8, 0.3, 0.7, 1.0, 0.6, 0.4, 0.8, 0.6, 1.0, 0.4, 0.3, 0.4, 0.4, 1.0), nrow=4, ncol=4) rownames(MyMatrix) <- colnames(MyMatrix) <- c("Extraversion", "Assertiveness","Enthusiasm","Outcome") #Assume means and standard deviations as follows: MEAN.Extraversion <- 4.00 MEAN.Assertiveness <- 3.90 MEAN.Enthusiasm <- 4.10 MEAN.Outcome <- 5.00 SD.Extraversion <- 1.01 SD.Assertiveness <- 0.95 SD.Enthusiasm <- 0.99 SD.Outcome <- 2.20 s <- c(SD.Extraversion, SD.Assertiveness, SD.Enthusiasm, SD.Outcome) m <- c(MEAN.Extraversion, MEAN.Assertiveness, MEAN.Enthusiasm, MEAN.Outcome) #Convert to covariance matrix. cov.mat <- diag(s) %*% MyMatrix %*% diag(s) rownames(cov.mat) <- colnames(cov.mat) <- rownames(MyMatrix) names(m) <- rownames(MyMatrix) #Run model. library(lavaan) m1 <- 'Outcome ~ Extraversion + Assertiveness + Enthusiasm' fit <- sem(m1, sample.cov=cov.mat, sample.nobs=300, sample.mean=m, meanstructure=TRUE) summary(fit, standardize=TRUE) • Is extraversion the sum of assertiveness and enthusiasm? If so, the model is not estimable. – Jeremy Miles Dec 31 '20 at 1:38 • Also. you don't seem to have any latent variables, you're just doing regression. – Jeremy Miles Dec 31 '20 at 1:48 • Sometimes extraversion is the average of the underlying items and other times it is the average of the two aspects. Regarding the latent variables, you are right and I have updated my question to remove reference to latent variables. That said, the reason I mentioned latent variables is that all of these variables are latent variables, however are often modelled as if they are not. Outside of measurement issues, I'm not sure how much this fact matters? – aspark2020 Dec 31 '20 at 2:27 • If either of those is true, the model cannot be estimated, you have perfect collinearity. You can't put all three variables in the model. You might consider having the two facets be indicators of an extraversion latent variable. – Jeremy Miles Dec 31 '20 at 2:52 Since the Extraversion score is just the average of the Assertiveness and Enthusiasm scores, each of these variables is a linear function of the other two. Thus, once you already have two of the variables in the model, adding the third gives you non-identifiable effect terms. 
I recommend including only Assertiveness and Enthusiasm in your regression model and excluding Extraversion. If you wish to make inferences about Extraversion, you can do so by looking at the average of the coefficients for the other two variables.
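To spell out the non-identifiability noted above, assuming (as stated in the comments) that the Extraversion score is the average of the Assertiveness and Enthusiasm scores:

```latex
% With E = (A + H)/2  (E = Extraversion, A = Assertiveness, H = Enthusiasm),
% any linear predictor collapses to two estimable coefficients:
\[
\beta_E E + \beta_A A + \beta_H H
  = \left(\beta_A + \tfrac{\beta_E}{2}\right) A
  + \left(\beta_H + \tfrac{\beta_E}{2}\right) H .
\]
% Only the two bracketed sums are identifiable from the data, so the three
% coefficients cannot be separated when all three predictors enter the model.
```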
2021-02-27 21:27:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45867234468460083, "perplexity": 1879.415816182191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00305.warc.gz"}
https://doc.iohub.dev/jarvis/Ym9vazovLy9jXzIvc18yL2ZfMC5tZA/Odometry_data_reading_from_magnetic_encoder.md?show_toc=false
# Odometry data reading from magnetic encoder

Jarvis has two motors with magnetic encoder feedback. Each encoder has two pins that are connected to two interrupt pins on the Arduino board (4 pins in total). Normal GPIO pins cannot be used in this case, since polling them in the main loop would be too slow to capture all the value changes of the encoder. A hardware interrupt routine is more appropriate in this situation. The table below shows the physical wiring between the Arduino Mega interrupt pins and the motor encoder pins:

| Arduino | Encoder |
|---------|-----------------|
| 2 | Right encoder A |
| 3 | Right encoder B |
| 4 | Left encoder A |
| 22 | Left encoder B |

Each motor has two encoder pins connected to two Hall effect sensors, which measure "the voltage difference (the Hall voltage) across an electrical conductor, transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current". This effect is produced when the motor rotates, thanks to the 6-pole magnetic disc attached to the shaft of the motor. The two Hall effect sensors produce two square wave outputs which are 90 degrees out of phase. This is called a quadrature output. The phase difference allows us to know both the magnitude and the direction of the motor's rotation: if the output of encoder pin A is ahead of the output of encoder pin B, the motor is turning forward. If output A is behind B, the motor is turning backward. Pretty simple.

The implementation of the interrupt routine follows exactly this principle; the following example snippet shows the reading of the left odometry data:

    static volatile int left_motor_tick = 0;

    // Quadrature decoding: on every change of encoder A, compare A and B.
    // Which HIGH/LOW pairing counts as "forward" depends on the wiring.
    static void left_encoder_event()
    {
        if (digitalRead(LM_ENCODER_A) == HIGH)
        {
            if (digitalRead(LM_ENCODER_B) == LOW)
            {
                left_motor_tick++;
            }
            else
            {
                left_motor_tick--;
            }
        }
        else
        {
            if (digitalRead(LM_ENCODER_B) == LOW)
            {
                left_motor_tick--;
            }
            else
            {
                left_motor_tick++;
            }
        }
    }

    // In the setup method
    void setup()
    {
        ...
        attachInterrupt(digitalPinToInterrupt(LM_ENCODER_A), left_encoder_event, CHANGE);
        ...
    }

The odometry data is part of the 47-byte COBS data frame which is sent to ROS for high-level control.
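One practical note when consuming these counters: they are modified inside interrupt handlers, so the main loop should copy them with interrupts briefly disabled before packing them into the COBS frame. The snippet below is a minimal sketch of that read, not code from the Jarvis firmware; the frame-building step is only indicated by a comment.

```cpp
// Sketch: safely sampling the ISR-updated tick counters from the main loop.
void loop()
{
    noInterrupts();                      // prevent the encoder ISRs from firing mid-copy
    int left_ticks = left_motor_tick;    // counter declared volatile above
    interrupts();

    // ... pack left_ticks (and the right-motor counter) into the 47-byte COBS frame ...
}
```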
https://math.stackexchange.com/questions/779805/show-that-ab-is-singular-if-a-is-singular
# Show that AB is singular if A is singular Actually I need to show that $\det(AB) = \det(A)\det(B)$ if $A$ is a singular matrix. The determinant of $A$ is $0$ if $A$ is singular, so $\det(AB)$ has to be $0$ as well, but I have problems showing that $AB$ is singular if $A$ is singular. How can I show that? • You've exactly written the reason there. The identity $det(AB) = det(A)det(B)$ is the key – BlueBuck May 3 '14 at 16:47 • But that's what I need to prove. – eager2learn May 3 '14 at 16:47 • some proofs are provided here:proofwiki.org/wiki/Determinant_of_Matrix_Product – Fermat May 3 '14 at 16:53 ## 5 Answers One approach is this: That a matrix $C$ is singular gives us in particular that its null space is non-trivial, that is, for some vector $x\ne0$ we have $Cx=0$. That $C$ is nonsingular, on the other hand, gives us in particular that the column space of $C$ has full rank, that is, for any vector $b$ there is a vector $a$ such that $Ca=b$. Now, suppose $A$ is singular. If $B$ is also singular, then for some $x\ne 0$ we have $Bx=0$, but then $(AB)x=A(Bx)=A0=0$, and we conclude that $AB$ is also singular. If, on the other hand, $B$ is nonsingular, use that $A$ is singular to find $b\ne 0$ such that $Ab=0$. Now, use that $B$ is nonsingular to find $a$ such that $Ba=b$. Clearly $a\ne0$ since $b\ne0$. But now we have that $(AB)a=A(Ba)=Ab=0$, and we conclude (again) that $AB$ is singular. This completes the proof. Notice, by the way, that we also showed that 1) $AB$ is singular if $B$ is the one assumed singular. On the other hand, since $A,B$ being nonsingular gives us that $AB$ is nonsingular, then we also have that 2) if $AB$ is singular, then at least one of $A$ and $B$ must be singular as well. • I didn't see your answer!+1 – user63181 May 3 '14 at 17:10 If $A$ is singular then it isn't injective: there's $y\ne0$ such that $$Ay=0$$ Now • if $B$ is invertible then let $x$ such that $Bx=y$ and then $$ABx=Ay=0$$ and • if $B$ is also singular then there's $z\ne0$ such that $Bz=0$ and then $$ABz=0$$ so we prove that $AB$ isn't injective which's equivalent to $AB$ is singular. • This feature request is definitely needed... – Andrés E. Caicedo May 3 '14 at 17:07 • Nice answer(s), Sami and Andres. +1 to you both. – user1551 May 3 '14 at 17:09 • It's wrong to precipitate and say that its inverse is $B^{-1}A^{-1}$. Rather you should say that its inverse say $C$ and prove that $C= B^{-1}A^{-1}$@BCLC – user63181 May 3 '14 at 17:18 • @BCLC To elaborate on Sami's comment, if $AB$ is invertible, it has an inverse $C$. Therefore $I=(AB)C=A(BC)$ and hence $A$ is invertible with its inverse equal to $BC$. – user1551 May 3 '14 at 17:27 • @BCLC To elaborate on user1551's comment, if $C$ is the inverse of $AB$, then indeed $BC$ is a right inverse of $A$. One then needs an additional argument to conclude that also $(BC)A=I$, from which one can finally successfully conclude that $A$ is indeed invertible. Once we have that both $AB$ and $A$ are invertible, then we can conclude that $B$ is also invertible (but this also needs a proof, of course). Once we have that $A$ and $B$ are invertible, and only then, can we conclude that $C=B^{-1}A^{-1}$. – Andrés E. Caicedo May 3 '14 at 19:03 Hint: if $x^TA=0$ for some nonzero vector $x$, then ... • Either A is a zero matrix or all the rows of A are the same. Sorry this doesn't help me. Can you give another hint? – eager2learn May 3 '14 at 16:57 • @eager2learn With the aforementioned $x$, what is $x^TAB$? – user1551 May 3 '14 at 16:59 • I think it's also 0. 
– eager2learn May 3 '14 at 17:01 • @eager2learn Have you learnt that a matrix $M$ is singular if and only if $x^TM=0$ (or equivalently, $M^Tx=0$) for some nonzero vector $x$? – user1551 May 3 '14 at 17:06 • Well we had defined a matrix A to be singular if rank(A)<n, where we defined the rank as dim(im(f)) where f is the linear map that corresponds to A. So if A is regular then f is injective and so Mx=0 <=> x=0. Then if A is singular it's not injective and there are non-zero vectors x such that Mx=0. So I guess we did indirectly cover that last semester, but I didn't think of this explicitly. So I guess if $x^TA=0$ for some non-zero vector x then it follows that A cannot be injective and thus has to be singular. Is that what you were trying to lead me towards? – eager2learn May 3 '14 at 17:17 Let $A$ be a singular matrix. Suppose by way of contradiction that $AB$, for some matrix $B$, has an inverse. Then, there exists some matrix C such that $(AB)C=C(AB)=I$, where I is the identity matrix. But then, $(AB)C=A(BC)$ by associativity. Then imposing $A(BC)=I$ leads us to $BC=A^{-1}$. Which is a contradiction since A was singular by the problem. Contrapositive: If AB is not singular, then A is not singular. If AB is not singular, then it has an inverse. Its inverse is $B^{-1}A^{-1}$ which implies that B and A are not singular. Expansion of my answer: if AB is not singular, then A is not singular because if AB is not singular, then AB has an inverse. AB's inverse is $B^{-1}A^{-1}$ which implies that B and A are not singular which implies A is not singular. • Why is A not singular then? – eager2learn May 3 '14 at 16:58 • If B and A are not singular then A is not singular. I think? – BCLC May 3 '14 at 16:58 • You wrote if AB is not singular then A is not singular. Why is that the case? You didn't write if A and B are not singular then A is singular. – eager2learn May 3 '14 at 16:59 • This argument is incorrect. To write $B^{-1}A^{-1}$ makes no sense, until you prove that both $A$ and $B$ are indeed invertible, which you did not do. Your argument is very similar to saying that the zero matrix $0$ is invertible because $0^{-1}0=I$. – Andrés E. Caicedo May 3 '14 at 19:00 • "if $A$ does not have an inverse, then we cannot obtain the inverse of $AB$" Well, yes. This is exactly what the question is asking you to prove. So, it is not that your argument is incorrect, but rather that there is no argument. You are just repeating the statement of the question, and claiming that it holds. It may be better to study the sketch we gave you on the other answer. The issue seems to be that you are assuming (as part of the background) too much already, almost as if the relevant statements were axioms, while the question is precisely to verify some of these assumptions. – Andrés E. Caicedo May 4 '14 at 0:24
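To make the null-space arguments above concrete, here is a small worked example (an added illustration, not part of the original thread). Take
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad AB = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}.$$
Here $A$ is singular, with $Ab = 0$ for $b = (0, 1)^T \neq 0$, while $B$ is nonsingular. Following the nonsingular-$B$ case of the first answer, solve $Ba = b$: since $B^{-1} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$, we get $a = B^{-1}b = (1, -1/2)^T \neq 0$, and indeed $(AB)a = A(Ba) = Ab = 0$, so $AB$ is singular.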
http://training.noi.ph/topics/graph_flow
by Vernon (Adapted from the NOI.PH 2017 IOI training.) ## Introduction Many interesting real-life situations can be modelled as a flow network problem. There are also lots of interesting theoretical problems that can be reduced to problems on flow networks. For these reasons, it is one of the most important and widespread topics in computer science. In fact, due to the sheer number of applications, it can be considered a “paradigm” on its own, along with dynamic programming, greedy, etc. Although this topic is explicitly excluded from the IOI syllabus, most IOI competitors are familiar with it. Many flow algorithms use some basic graph algorithm (such as DFS, BFS, Dijkstra, or Bellman-Ford) as a subroutine. Hence, studying these algorithms forces one to practice these and generally improves your skills in solving graph problems. In addition, bipartite matching is included in the IOI syllabus, and although there are ways to solve bipartite matching without using flow networks, studying the flow network reduction deepens your understanding of bipartite matching. It will thus be helpful to study this topic for the IOI. You may first watch these lectures from MIT or these lectures from Stanford to get an intuition for the concepts before reading the text below for details and rigor. The MIT lectures are based on the CLRS formulation of network flows, while the Stanford lectures are roughly based on the Kleinberg and Tardos formulation. The MIT/CLRS material go for rigorous proofs while the Stanford material discuss more algorithms, applications, and generalizations. I personally prefer the Stanford material, but the MIT material is also valuable for exposure to a slightly different approach. The text below tries to be as complete as the Stanford material while being as rigorous as the MIT/CLRS material, but is independent of either. In addition, it also goes into the practical aspects of implementation and solving programming contest problems. It relies on and is meant to be read along with these slides from Princeton. Just follow the text, and it will prompt you to open a link to look at the slides with the specified pages for illustrations. The empirical comparisons and conclusions below are based on this article from TopCoder. There are several factual errors in that article, but I trusted the validity of the experiments. The conclusions below can be wrong if the experiments turn out to be flawed. ## The Maximum Flow Problem and the Minimum Cut Problem ### Basic Definitions Let's say we have a directed graph $G = (V, E)$ with a special structure. There is a special node $s \in V$ called the source, whose in-degree is 0, and another special node $t \in V$ called the sink, whose out-degree is 0. We will further assume that our graph is simple (there are no self-loops and parallel edges between the same nodes), and that there is at least one edge incident to every node. (A practice problem later will force you to think about how to get around some of these assumptions.) Additionally, for each edge $e \in E$, we assign a non-negative capacity $c(e)$. A graph that satisfies all these properties is called a flow network. Take a look at I.3 for a visualization of a flow network. Intuitively, a flow network represents “stuff” (water, electricity, etc.) that can flow from one location to another via pipes. Stuff is produced at a special location called the source, and is consumed at another special location called the sink. 
Each pipe imposes a certain limit on the volume of stuff that can pass through it at any given point in time. A natural question to ask is the following: what is the maximum rate of flow from the source to the sink? More formally, let us define an st-flow (or just a flow) as a function $f: E \mapsto N$ that assigns an integer to each edge, respecting the following two constraints: • Capacity constraint: for each $e \in E$, $0 \leq f(e) \leq c(e)$ • Flow conservation constraint: for each $v \in V - \{s, t\}$, $\sum\limits_{e \text{ in to } v} f(e) = \sum\limits_{e \text{ out of } v} f(e)$ If $f(e) = c(e)$, we say that edge $e$ is saturated with respect to flow $f$. Take a look at I.7 for a visualization of a valid flow defined on the network you just saw. You can check that the capacities are respected and flow is conserved for the edges and the node $v$ highlighted in blue. We define the value of the flow as the sum of the flow values assigned to all edges going out of the source, $val(f) = \sum\limits_{e \text{ out of } s} f(e)$. The maximum flow problem is to find a flow of maximum value. Exercise. What is the value of the flow we just defined? See I.8 for the answer. Exercise. Is this the maximum possible value for this network? If no, what is? See I.9 for the answer. Let's momentarily turn our attention to another problem. Interpret the capacity of each edge as the cost of removing the edge from the graph. Another interesting question would be: what is the least total cost to hijack the network, to disconnect the source from the sink and completely prevent the stuff from flowing from the source to the sink? More formally, let us define an st-cut (or just a cut) as a partition $(A, B)$ of the vertices with $s \in A$ and $t \in B$. The capacity of the cut $cap(A, B)$ is the sum of the capacities of the edges from $A$ to $B$: Take a look at I.4-I.5 to see examples of cuts. The minimum cut problem is to find a cut of minimum capacity. Exercise. What is the minimum capacity cut of our graph? See I.6 for the answer. On first glance, these two problems appear to be unrelated. But in fact, we shall see later that they are essentially the same problem, both solvable using the same algorithms. It is no coincidence that the value of the maximum flow and the capacity of the minimum cut are the same for our given graph. We will solve the max-flow problem first, and then see how to apply the same solution to the min-cut problem. Exercise. Before looking at the algorithm below, think about how you might approach this problem. ### Incremental Improvement: Augmenting Paths (Ford-Fulkerson) A very natural approach to getting the maximum flow would be the following: 1. Start with an empty flow: let $f(e) = 0$ for all $e \in E$. 2. Find an $s \sim t$ path $P$, where for each edge $e$ along $P$, $f(e) < c(e)$. (Find a path of unsaturated edges.) 3. Augment the flow along $P$: for each edge $e$ along $P$, increase $f(e)$ by the bottleneck capacity $\min\limits_{e \in P} c(e) - f(e)$. (Saturate the minimum-remaining-capacity edges (bottleneck edges) in $P$.) 4. Repeat until there are no more such paths. The idea is that we start with an obviously non-optimal answer but which surely satisfies the constraints, and then converge towards the optimum by incremental improvements that still respect the constraints. Take a look at I.13-I.15 for a sample run of this algorithm. Exercise. Does this correctly find the maximum flow? Find a counterexample. See I.16 for an answer. 
Our approach fails because it is too greedy. Ideally, what we want is to have some way to “undo” certain flow increases. Look at I.15 again. If we can somehow increase the flow by one unit across the edge going out of $s$ labeled 6/10, undo pushing one unit of flow across the edge labeled 2/2, and finally redirect that one unit of flow towards the edge labeled 0/4 and then towards the edge going into $t$ labeled 6/10, we still satisfy the capacity and flow conservation constraints, but strictly increase the flow value by one. To reach the optimal answer, we can do this once more, and then another time involving the edge labeled 8/8 instead. In other words, what we really want is to be able to push flow “forward” normally, but also to be able to push flow “backward” along the reverse of edges that already have flow going forward. We still successively augment along $s \sim t$ paths, but we now allow the usage of backward edges as part of the paths. The intuitive reason why this works is as follows. Performing a “backward” push along some edge $e = (u, v)$ amounts to splitting a previously constructed path $P = s \sim t$ into two parts: $P_1 = s \sim u$ and $P_2 = v \sim t$. It similarly splits our new path $P' = s \sim t$ into $P'_1 = s \sim v$ and $P'_2 = u \sim t$. In order for the augmentation along $P'_1$ to be valid, respecting the conservation constraint on $v$, the flow from $u$ to $v$ is undone and redirected elsewhere, in particular to $P'_2$. This allows us to “reconstruct” an augmenting path $P'_1 + P_2$. But since we take away $P_2$ from $P$, we have to ensure that $P_1$ can still connect to the sink to be a valid augmenting path, and that the conservation constraint on $u$ is respected. Redirecting the flow to $P'_2$ achieves this for us, allowing us to “reconstruct” another augmenting path $P_1 + P'_2$. The concept of a residual graph gives us a clean way to keep track of these forward and backward pushes. Given a flow network $G = (V, E)$ with edge capacities $c$ and a flow $f$, we define the residual capacity $c_f(e)$ of some $e = (u, v)$ with respect to $f$ as follows: $$c_f(e) = \begin{cases} c(e) - f(e) & e = (u, v) \in E\\ f(e) & e^R = (v, u) \in E \end{cases}$$ An edge $e$ is a residual edge with respect to $f$ iff $c_f(e)$ is defined. Finally, we define the residual graph $G_f = (V, E_f)$ of $G$ with respect to $f$ as the graph with the same node set, and whose edge set is the set of residual edges $e \in E_f$ iff $c_f(e)$ is defined. Intuitively, the residual graph consists of edges which we can still use for augmentation: forward edges $e$ with “leftover” capacity $c(e) - f(e)$, and backward edges $e$ through which we can “undo” $f(e)$ units of flow that have previously been pushed forward in the opposite direction $e^R$. See I.17 for an example. We now have a simple algorithm, invented by Ford and Fulkerson in 1955, for finding the maximum flow. 1. Start with an empty flow: let $f(e) = 0$ for all $e \in E$. 2. Find an $s \sim t$ path $P$ in the residual graph $G_f$, where for each edge $e$ along $P$, $c_f(e) > 0$. We call this an augmenting path. 3. Let $b = \min\limits_{e \in P} c_f(e)$ be the bottleneck capacity of $P$. 4. Augment the flow along $P$: for each edge $e$ along $P$, if $e$ is a forward edge ($e \in E$), increase $f(e)$ by $b$, otherwise decrease $f(e^R)$ by $b$ (as $e$ is a backward edge). 5. Repeat until there are no more such paths. See I.23-I.25 for a demo. Is this algorithm correct? To prove it, we need to prove three things: 1. 
Augmentation never violates the capacity and conservation constraints. 2. The algorithm always terminates. 3. At termination, the value of the flow is maximum.

Statements 1 and 2 are quite easy to prove. Statement 3 is more subtle, and leads us into proving the equivalence of max-flow and min-cut, and that we also now have an algorithm for solving the min-cut problem. But first, try proving statements 1 and 2 and try implementing the algorithm.

Problem. Given a flow network $G$, prove that if $f$ is a flow, then the function $f' = Augment(f, P)$ obtained by augmenting the flow $f$ along an augmenting path $P$ in the residual graph $G_f$ is also a flow. In particular, verify that the capacity and conservation constraints are respected. (Hint: Since $f$ is changed only for the edges along $P$, you only need to verify that the constraints are respected for these edges. Consider forward and backward edges separately.)

Problem. Prove that the flow value strictly increases at every augmentation.

Problem. Let $C = \sum\limits_{e \text{ out of } s} c(e)$. Prove that the Ford-Fulkerson algorithm can be implemented to run in $O(EC)$ time. (Hint: Recall that we assumed that there is at least one edge incident to every node.)

Exercise. Try implementing your own version of Ford-Fulkerson first before comparing it with the implementation below. Test it on this problem: UVa 820 - Internet Bandwidth. Don't forget to print a blank line after each test case.

Here is a simple implementation of the Ford-Fulkerson algorithm. Either DFS or BFS can be used to find augmenting paths. This implementation uses DFS, chosen arbitrarily.

```cpp
#include <bits/stdc++.h>
#define MAX_N 100
#define INF 1'000'000 // a VERY useful C++14 feature
using namespace std;

int n, m, s, t;
unordered_set<int> adj[MAX_N+1]; // adjacency set, storing both directions of every edge
int c[MAX_N+1][MAX_N+1];
int f[MAX_N+1][MAX_N+1];
int p[MAX_N+1]; // "parent" in the DFS tree, needed for retrieving the augmenting path

bool dfs(int u) {
    if(u == t) return true;
    for(int v : adj[u]) {
        // p[v] == -1 implies not discovered, c[u][v] - f[u][v] is the residual capacity
        if(p[v] == -1 && c[u][v] - f[u][v] > 0) {
            p[v] = u;
            if(dfs(v)) return true;
        }
    }
    return false;
}

bool find_aug_path() {
    memset(p, -1, sizeof p);
    p[s] = 0; // dummy to mark the source as discovered
    return dfs(s);
}

int main() {
    memset(c, 0, sizeof c);
    memset(f, 0, sizeof f);
    // assume input is of the following format:
    // $n$ (number of vertices) $s$ (source) $t$ (sink) $m$ (number of edges)
    // $u_1$ $v_1$ $capacity_1$
    // $u_2$ $v_2$ $capacity_2$
    // ...
    // $u_m$ $v_m$ $capacity_m$
    cin >> n >> s >> t >> m;
    for(int i = 0; i < m; i++) {
        int u, v, capacity;
        cin >> u >> v >> capacity;
        adj[u].insert(v);
        adj[v].insert(u); // so that backward edges are included in the DFS
        // parallel edges are handled by just combining them into a single edge,
        // whose capacity equals the total of the capacities of the parallel edges
        c[u][v] += capacity;
        // c[v][u] += capacity; // for undirected graphs
    }
    int max_flow_value = 0;
    while(find_aug_path()) {
        int b = INF;
        for(int v = t, u = p[v]; v != s; v = u, u = p[v]) b = min(b, c[u][v] - f[u][v]);
        for(int v = t, u = p[v]; v != s; v = u, u = p[v]) f[u][v] += b, f[v][u] -= b;
        max_flow_value += b;
    }
    cout << max_flow_value << endl;
    return 0;
}
```

Here it is again, with the comments removed, just to highlight how short and simple the algorithm really is.

```cpp
#include <bits/stdc++.h>
#define MAX_N 100
#define INF 1'000'000
using namespace std;

int n, m, s, t;
unordered_set<int> adj[MAX_N+1];
int c[MAX_N+1][MAX_N+1];
int f[MAX_N+1][MAX_N+1];
int p[MAX_N+1];

bool dfs(int u) {
    if(u == t) return true;
    for(int v : adj[u]) {
        if(p[v] == -1 && c[u][v] - f[u][v] > 0) {
            p[v] = u;
            if(dfs(v)) return true;
        }
    }
    return false;
}

bool find_aug_path() {
    memset(p, -1, sizeof p);
    p[s] = 0;
    return dfs(s);
}

int main() {
    memset(c, 0, sizeof c);
    memset(f, 0, sizeof f);
    cin >> n >> s >> t >> m;
    for(int i = 0; i < m; i++) {
        int u, v, capacity;
        cin >> u >> v >> capacity;
        adj[u].insert(v);
        adj[v].insert(u);
        c[u][v] += capacity;
    }
    int max_flow_value = 0;
    while(find_aug_path()) {
        int b = INF;
        for(int v = t, u = p[v]; v != s; v = u, u = p[v]) b = min(b, c[u][v] - f[u][v]);
        for(int v = t, u = p[v]; v != s; v = u, u = p[v]) f[u][v] += b, f[v][u] -= b;
        max_flow_value += b;
    }
    cout << max_flow_value << endl;
    return 0;
}
```

Let's notice a few things about this implementation. First, notice that we do not have to explicitly construct $G_f$, as just by keeping track of $c$ and $f$, we can easily infer $c_f$ for path finding. Second, notice that the way we define the residual capacity is slightly different here. We simply use $c(e) - f(e)$ and do not discriminate the forward and backward directions. We recover the original definition by defining $c(e)$ to be $0$ and by allowing $f(e)$ to be negative on backward edges. The augmentation procedure is also slightly modified to always increment the forward direction and to always decrement the backward direction. We do this to simplify the implementation and also to be able to handle anti-parallel edges (edges between the same nodes but in opposite directions). Take a moment to convince yourself that this works, and that the previous definition does not work for anti-parallel edges but this one does. There are other ways to deal with parallel and anti-parallel edges but this is the one I find the simplest, though slightly non-intuitive. Also notice that we use an adjacency set instead of an adjacency list here, to avoid storing duplicate neighbors due to parallel edges. Finally, this implementation is not the most memory-efficient possible one, but as we will see later, the running times of all practical maximum flow algorithms are $\Omega(n^3)$, where $n$ is the number of nodes in the graph. This means that max-flow approaches to a problem are practical in terms of time iff using $O(n^2)$ memory is practical. Hence, it does not matter that we use $O(n^2)$ memory here. It makes the implementation simpler and requires less time to run (which is more important for max-flow-related problems). In cases where the memory limit is really tight (e.g. max-flow is only part of the problem, and the other parts need the memory), it is fairly trivial to change this implementation to use a linear amount of memory.

### The Max-Flow Min-Cut Theorem

We now prove that the Ford-Fulkerson algorithm yields the maximum flow. As a side effect, we also prove the equivalence of max-flow and min-cut. The idea is to find a tight upper bound on what the value of the max-flow can be, and to show that the Ford-Fulkerson algorithm indeed reaches this bound. One such bound is an obvious one: the value of a flow is always less than or equal to the sum of the capacities of the edges going out of the source, $val(f) \leq \sum\limits_{e \text{ out of } s} c(e)$. It is not tight enough to be helpful for proving anything, but the intuition behind this bound is helpful and can be generalized.
Rather than considering just the sum of the capacities going out of the source, let's generalize to any “moat” around the source, and consider the sum of the capacities going out of this “moat” (more formally, a cut). It makes intuitive sense that the value of a flow must be smaller than this sum, and we state this as Lemma 2. To prove it formally, we need the following lemma first. Lemma 1. (Flow Conservation Across Cuts) Let $f$ be any flow and let $(A, B)$ be any cut. Then the net flow across $(A, B)$ equals the value of $f$. $$\sum_{e \text{ out of } A} f(e) - \sum_{e \text{ in to } A} f(e) = val(f)$$ See I.28-I.30 for examples of what this lemma is saying. Exercise. Prove Lemma 1. (Hint: Use the flow conservation constraint.) See I.31 for the answer. Problem. As a simple application of Lemma 1, prove that the value of the flow is equivalent to (and thus may also be alternatively defined as) the sum of the flows of the edges going into the sink: $val(f) = \sum\limits_{e \text{ out of } s} f(e) = \sum\limits_{e \text{ in to } t} f(e)$. Using Lemma 1, we can now prove a stronger bound on the value of the flow. Lemma 2. (Weak Duality Between Flows and Cuts) Let $f$ be any flow and let $(A, B)$ be any cut. Then the value of the flow is less than or equal to the capacity of the cut. $$val(f) \leq cap(A, B)$$ Let's think carefully about what Lemma 2 is saying. It is actually saying something quite strong: the value of any flow is always less than or equal to the capacity of any cut. In particular, this means that the max-flow value is less than the min-cut capacity. If we can somehow produce a flow $f$ and a cut $(A, B)$ where $val(f) = cap(A, B)$, then we know that $f$ is a max-flow, and $(A, B)$ is a min-cut. It turns out that the Ford-Fulkerson algorithm indeed produces a flow with this property. If we prove this, we both prove that the Ford-Fulkerson algorithm correctly finds the maximum flow and that max-flow is equivalent to min-cut (and by extension, that the Ford-Fulkerson algorithm also allows us to find the minimum cut). Lemma 3. (No Augmenting Paths Implies Existence of Cut Equivalent to Flow) Let $f$ be a flow such that there are no $s \sim t$ paths in the residual graph $G_f$. Then there is a cut $(A, B)$ where $val(f) = cap(A, B)$. Problem. Let $A$ be the set of nodes reachable from the source using residual edges in $G_f$ above, and let $B = V - A$. Prove that $(A, B)$ is a cut (i.e. that they are disjoint, that $s \in A$, and that $t \in B$). Problem. Consider an edge $e = (u, v)$ where $u \in A$ and $v \in B$. Prove that $f(e) = c(e)$. Problem. Consider an edge $e = (u, v)$ where $v \in A$ and $u \in B$. Prove that $f(e) = 0$. The above two statements imply that all edges out of $A$ are completely saturated with flow, while all edges in to $A$ are completely empty. Problem. Use the above facts, together with Lemma 1, to prove Lemma 3. Lemma 2 and Lemma 3 easily imply the following corollary. Corollary 4. (No Augmenting Paths Implies Maximum Flow, Minimum Cut) Let $f$ be a flow such that there are no $s \sim t$ paths in the residual graph $G_f$. The value of $f$ is maximum over all possible flows in $G$. The capacity of the cut $(A, B)$ whose existence is guaranteed by Lemma 3 and whose capacity is equal to the value of $f$ is minimum over all possible cuts in $G$. Since the Ford-Fulkerson algorithm only terminates when there are no more $s \sim t$ paths in the residual graph, its correctness easily follows from Corollary 4. Theorem 5. 
(Correctness of Ford-Fulkerson) The flow produced by the Ford-Fulkerson algorithm is a maximum flow. The Ford-Fulkerson algorithm guarantees that in every flow network, there is a flow $f$ and a cut $(A, B)$ where $val(f) = cap(A, B)$, which immediately implies the following famous theorem. Theorem 6. (Max-Flow Min-Cut Theorem) In every flow network, the maximum value of a flow is equal to the minimum capacity of a cut. This beautiful relationship between flows and cuts is an example of the more general mathematical principle of duality}. See I.33-I.35 for a slightly different proof. Exercise. After applying the Ford-Fulkerson algorithm to find the maximum flow, how would you produce the actual minimum-capacity cut (partition) $(A, B)$? ### Finding Augmenting Paths Smartly: Shortest Augmenting Paths (Edmonds-Karp) We have seen a very simple algorithm for solving the max-flow min-cut problem. Unfortunately, we have only proven that it runs in $O(EC)$. Exercise. Prove another bound on the running time of Ford-Fulkerson: $O(EF)$, where $F$ is the output maximum flow value. Both of these bounds can be bad for certain instances of the problem where the edge capacities are large. In fact, we technically do not yet have a polynomial-time algorithm, as time complexity is measured in the number of bits of input, and $C$ (likewise $F$) is exponential in $\lg C$ (likewise $\lg F$), which is the number of bits required to represent the edge capacities. (Don't fret if you don't quite understand why time complexity is measured this way. This is a fine technical point.) Exercise. Come up with an instance of the max-flow problem that causes the Ford-Fulkerson algorithm to actually require $\Omega (EC)$ amount of time. See I.38 for the answer. An interesting but completely useless side note: the Ford-Fulkerson algorithm is not even guaranteed to terminate if the capacities are irrational numbers! To improve the Ford-Fulkerson algorithm, we need to have a good way of finding augmenting paths. Intuitively, it makes sense that we need both an efficient path-finding algorithm, and one which leads to the fewest possible iterations of augmentation. The first condition leads us to consider finding augmenting paths with the fewest number of edges, the shortest augmenting paths. Fortunately, this is very simple to achieve. Just use BFS to find augmenting paths. The idea is, unlike with DFS, where the path to the sink can be long, with BFS, we can stop early when we discover the sink, and maybe that leads to a faster algorithm. Surprisingly, it is the second of the two conditions above (fewest possible iterations of augmentation) which we actually fulfill. The difference between DFS and BFS turns out to not matter too much for any single particular iteration (and hence there is no real need to stop the BFS early), but makes a difference globally, when we consider the running time of all the iterations together. This was invented by Edmonds and Karp in 1972. The improvement itself is not hard to invent. It is the proof that it actually works and makes the running time strictly polynomial that is difficult and which these two guys got credit for. If you randomly picked DFS instead of BFS on your first attempt to implement the Ford-Fulkerson algorithm, now is your chance to redo and solve UVa 820 - Internet Bandwidth, before comparing with the implementation below. Here is an implementation of the Edmonds-Karp algorithm. Notice that the only thing that changes from above is the usage of BFS instead of DFS. 
Everything else (residual capacities, finding the bottleneck, updating the flow) stays the same.

```cpp
#include <bits/stdc++.h>
#define MAX_N 100
#define INF 1'000'000
using namespace std;

int n, m, s, t;
unordered_set<int> adj[MAX_N+1];
int c[MAX_N+1][MAX_N+1];
int f[MAX_N+1][MAX_N+1];
int p[MAX_N+1];

bool bfs() {
    queue<int> q;
    q.push(s);
    while(!q.empty()) {
        int u = q.front(); q.pop();
        for(int v : adj[u]) {
            if(p[v] == -1 && c[u][v] - f[u][v] > 0) {
                p[v] = u;
                q.push(v);
            }
        }
    }
    return p[t] != -1;
}

bool find_aug_path() {
    memset(p, -1, sizeof p);
    p[s] = 0;
    return bfs();
}

int main() {
    memset(c, 0, sizeof c);
    memset(f, 0, sizeof f);
    cin >> n >> s >> t >> m;
    for(int i = 0; i < m; i++) {
        int u, v, capacity;
        cin >> u >> v >> capacity;
        adj[u].insert(v);
        adj[v].insert(u);
        c[u][v] += capacity;
    }
    int max_flow_value = 0;
    while(find_aug_path()) {
        int b = INF;
        for(int v = t, u = p[v]; v != s; v = u, u = p[v]) b = min(b, c[u][v] - f[u][v]);
        for(int v = t, u = p[v]; v != s; v = u, u = p[v]) f[u][v] += b, f[v][u] -= b;
        max_flow_value += b;
    }
    cout << max_flow_value << endl;
    return 0;
}
```

Why does this simple change make a big difference? Let's now try to analyze the running time of the shortest augmenting paths algorithm. First, we need to prove some lemmas.

Lemma 7. (Monotonically Non-Decreasing Distances) Throughout the shortest augmenting paths algorithm, the distance from the source to any node in the residual graph never decreases from one iteration to the next.

You can convince yourself of this by running the algorithm on a few graphs and printing out the paths found by the algorithm in each iteration, but it's nice to see a (75%) formal proof.

Proof. Consider the residual graphs $G_f$ and $G_{f'}$ associated with flows $f$ and $f'$ before and after applying an augmentation through augmenting path $P$. The bottleneck edges in $P$ are present in $G_f$ but absent from $G_{f'}$ (as for each bottleneck edge $e \in P$, $e$ is saturated if it is a forward residual edge, or $e^R$ is emptied if $e$ is a backward residual edge). In addition, new edges which are anti-parallel to the bottleneck edges in $G_f$ are created in $G_{f'}$. Let's assume there is only one bottleneck edge $(u, v) \in P$ and compare the distances of $u$ and $v$ in $G_f$ to their distances in $G_{f'}$. Denote the distance of a node $u$ in a particular residual graph $G_f$ as $d_f(u)$. Since $(u, v)$ is an edge in $P$, $d_f(v) = d_f(u) + 1$. What can we say about the distances of $u$ and $v$ in $G_{f'}$? Since $(u, v)$ is a bottleneck edge, it is absent from $G_{f'}$. Note that the distance to $v$ can never decrease from one iteration to the next by removing edges pointing into $v$, and hence $d_{f'}(v) \geq d_f(v)$. What about the distance to $u$? Since $(u, v)$ is a bottleneck edge, it is replaced by an anti-parallel edge $(v, u)$ in $G_{f'}$. Is it possible that the distance to $u$ decreases because of a new edge pointing into it? The answer is no. To see why, suppose that the distance to $u$ does decrease from one iteration to the next; that is
$$d_f(u) > d_{f'}(u)$$
If the distance to $u$ decreases, it can only possibly decrease by using the edge $(v, u)$. Thus $d_{f'}(u) = d_{f'}(v) + 1$, and
$$d_{f'}(u) > d_{f'}(v)$$
Taking these two inequalities together, we have
$$d_f(u) > d_{f'}(v)$$
We have just argued that the distance to $v$ can never decrease. Hence
$$d_{f'}(v) \geq d_f(v)$$
Again, taking the two previous inequalities together, we have
$$d_f(u) > d_f(v)$$
But this contradicts the fact that $d_f(v) = d_f(u) + 1$. Therefore, the distance to $u$ cannot decrease from one iteration to the next.
We can repeatedly apply the same argument for the case when there are many bottleneck edges. Just consider edges in $P$ in increasing order of their nodes' distance from the source and proceed by induction. Claim. Prove this part more rigorously. From this, we can conclude that $d_{f'}(v) \geq d_f(v)$ for all nodes $v$ and for all augmentation steps $f' = Augment(f, P)$. In particular, the distance from the source to the sink never decreases from one iteration to the next.

Corollary 8. (Monotonically Non-Decreasing Augmenting Path Lengths) Throughout the shortest augmenting paths algorithm, the length of an augmenting path in the residual graph never decreases from one iteration to the next. That is, for all augmentation steps $f' = Augment(f, P)$,
$$d_{f'}(t) \geq d_f(t)$$

Armed with Lemma 7, we can now prove the following.

Lemma 9. (Using Reverse Edges Increases Augmenting Path Length) Suppose that at some iteration $i$ of the algorithm, $(u, v)$ is a bottleneck edge in the augmenting path $P$. At some later iteration $i' > i$, the residual edge $(v, u)$ may be in the residual graph. At that point, if the augmenting path $P'$ includes $(v, u)$, then the length of $P'$ is strictly greater than the length of $P$.

Again, you can convince yourself by observing the augmenting paths found by the algorithm on several different graphs, but let's see a (75%) formal proof.

Proof. There may be many bottleneck edges in $P$, but as before, we can first assume there is only one and later generalize by induction. Call this bottleneck edge $(u, v)$. Let $f$ and $f'$ denote the flows (before applying the augmentation) at the earlier and the later iteration respectively. Note that if $(v, u)$ does not exist in $G_{f'}$ then this discussion is moot. So let's assume that it exists. If the shortest path $P'$ goes through $(v, u)$, then
$$d_{f'}(u) = d_{f'}(v) + 1 > d_{f'}(v)$$
We know from Lemma 7 that
$$d_{f'}(v) \geq d_f(v)$$
Since $(u, v)$ is an edge in $P$,
$$d_f(v) = d_f(u) + 1 > d_f(u)$$
Putting these three inequalities together, we conclude that $d_{f'}(u) > d_f(u)$. Note that this is a stronger statement than Lemma 7 implies, since here we have a strict inequality. From here, it is not hard to conclude that the distances to all nodes $w$ in the path from $u$ to $t$ are strictly larger in $G_{f'}$ than in $G_f$. More briefly, $d_{f'}(w) > d_f(w)$. In particular, $d_{f'}(t) > d_f(t)$. All of this assumes we included the reverse edge $(v, u)$ in $P'$. How many times can we avoid using the reverse of a bottleneck edge before we are forced to use one to find an augmenting path? We have $O(E)$ possible bottleneck edges to exhaust before we are forced to use the reverse of any one of them. Therefore, the length of the shortest augmenting path must strictly increase after $O(E)$ iterations.

Corollary 10. (Bound on How Long the Augmenting Path Length Remains Constant) The length of the shortest augmenting path increases after at most $O(E)$ iterations.

Using the above facts, it is not that hard to prove the following theorem.

Theorem 11. (Efficiency of Shortest Augmenting Paths) The shortest augmenting paths algorithm solves the maximum flow problem in $O(E^2V)$ time.

Exercise. Prove Theorem 11. (Hint: How many times can the length of the shortest augmenting path increase?) See I.53 for the answer.

Problem.
In the proof of Lemma 9 above, we did not care whether or not the bottleneck edge $(u, v)$ reappears in the residual graph between iterations $i$ and $i'$, since we know that if it does reappear in iteration $i^\star < i'$, then $(v, u)$ had to be a bottleneck edge of some augmenting path for some other intermediate iteration $i^* < i^\star$. In any case, the augmenting path at iteration $i'$ is still longer than the augmenting path at iteration $i$, since $d_{i'}(t) \geq d_{i^\star}(t) \geq d_{i^*}(t) > d_i(t)$. In addition though, we also know that $d_{i^\star}(u) > d_i(u)$. This means that whenever $(u, v)$ appears as a bottleneck edge in the residual graph, the distance to $u$ strictly increases. Using bounds on the number of times the distance to a node can increase, give an alternative proof of the efficiency of the shortest augmenting paths algorithm. See I.48-I.53 for another, slightly different proof. Ford and Fulkerson did not really specify what method must be used to find augmenting paths. We can think of the Ford-Fulkerson algorithm as not really an algorithm, but more of a template, where the actual method for finding the paths can be plugged in to create a full-fledged algorithm. Ford and Fulkerson's contribution was simply to establish the paradigm of “successively find augmenting paths,” and left it to future generations of computer scientists to extend and refine this paradigm. The Edmonds-Karp improvement plugs in the “Shortest Augmenting Paths” method into this template. Although Edmonds-Karp is significantly better than vanilla Ford-Fulkerson, it is still quite bad, requiring $\Omega(V^5)$ in dense graphs. We need faster improvements. This and the rest of the algorithms in this section, except for the last, are merely different variations on how to find the augmenting paths, with different time complexities, but the basic idea is the same. (Hence, correctness of each of these easily follows from the correctness of Ford-Fulkerson.) In practice, however, the most efficient max-flow algorithms today use a completely different approach (cue dramatic pondering on the nature of scientific progress): the pre-flow push-relabel approach, which we will see in the last part of this section. But first, let's develop our intuition using simple algorithms before diving into the more complicated approach. ### Finding Augmenting Paths Smartly: Fattest Augmenting Paths (Edmonds-Karp) In the same paper where Edmonds and Karp introduce their algorithm above, they also describe another intuitive way to improve the Ford-Fulkerson algorithm: take augmenting paths with the largest bottleneck capacity, the fattest augmenting paths. This makes sense because increasing the flow by as much as possible per iteration leads to lessening the number of iterations of augmentation required. We can do this using some simple modification of Prim's/Dijkstra's algorithm. It can be shown that this method requires $O(E \lg (EC))$ augmentations in total, and therefore $O(E^2 \lg V \lg (EC))$ total time, though we do not prove it here. This looks like an improvement over the shortest augmenting paths method. However, on dense graphs, the logarithmic terms and the constant factor overhead of using a priority queue for Prim's/Dijkstra's become significant. Doing Prim's/Dijkstra's without a priority queue turns out to not help either even with dense graphs. 
Compared with the shortest augmenting paths method, for most problems, the slight improvement in the sparse graph case only is not worth the extra implementation effort. ### Finding Augmenting Paths Smartly: Capacity-Scaling (Gabow) A slightly different idea for improving the Ford-Fulkerson algorithm was proposed by Gabow in 1985: maintain a scaling parameter $\Delta$ and consider only residual edges whose capacities are at least $\Delta$. This $\Delta$ is initialized to be the largest power of two smaller than the maximum capacity of any edge going out of the source. A phase of several augmentations using this fixed $\Delta$ is performed, until no more augmentations can be made, and then $\Delta$ is divided by $2$. This process is repeated until $\Delta$ reaches $0$. See I.42 for pseudocode. This algorithm runs in $O(E^2 \lg C)$ time. We will skip the proof. You can look at I.44-I.45 for it. We will also skip the implementation. It is not too hard to try it on your own. In practice, this algorithm is significantly better than the shortest augmenting paths algorithm for sparse graphs, but only marginally better for dense graphs. Be forewarned though, that an implementation of capacity-scaling using DFS performs significantly more poorly than one that uses BFS. ### Finding Augmenting Paths Smartly: Level Graph Blocking Flows (Dinitz) The previous two improvements we have seen are theoretically interesting, but they are not significantly better than the shortest augmenting paths algorithm to be worth using. This one is though. Dinitz invented it in 1970, and proved independently of Edmonds and Karp that the max-flow problem can be solved in polynomial time. Interestingly, this algorithm is more commonly known today as “Dinic's” algorithm, because the guy who gave the initial lectures about this algorithm kept mispronouncing Dinitz' name. The idea behind the algorithm is not that difficult. Like Edmonds and Karp's algorithm, Dinic's algorithm will find the shortest augmenting paths, but it will find all augmenting paths of a fixed length in one go. We previously discussed two natural strategies for improving the running time of augmenting path algorithms: find paths efficiently, and reducing the number of iterations of augmentation. They are not mutually exclusive. Dinic's algorithm does both. By simultaneously augmenting along all shortest paths with the same length, Dinic's algorithm will require only $O(V)$ phases of augmentation. (Why?) Using the idea of a level graphs and blocking flows, Dinic's algorithm can find and augment along all paths with the same length in $O(VE)$. This makes the total running time $O(V^2E)$, which is a significant improvement over Edmonds and Karp's $O(VE^2)$ for dense graphs, and which in practice happens to also work significantly better than Edmonds-Karp in general for graphs of different densities. Let's make this intuition more formal. First, we need the concept of a level graph. The level graph of a given graph $G = (V, E)$ is the subgraph containing only edges that can possibly be part of a shortest path from the source $s$ to the sink $t$. Specifically, denote $d(v)$ as the distance of a node $v$ from $s$, that is, the number of edges in the shortest path from $s$ to $v$. The level graph $L(G) = (V, E_L)$ contains only those edges $(u, v) \in E$ where $d(v) = d(u) + 1$. See I.49 for an example. Note that for some residual graph $G_f$, a shortest augmenting path only contains edges in $L(G_f)$. The level graph is closely related to the BFS tree. 
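As a small concrete illustration (an added example, since the figure in I.49 is not reproduced here): suppose the residual graph has edges $s \to a$, $s \to b$, $a \to b$, $a \to t$ and $b \to t$. The BFS distances from $s$ are $d(s) = 0$, $d(a) = d(b) = 1$ and $d(t) = 2$. The level graph keeps $s \to a$, $s \to b$, $a \to t$ and $b \to t$, but discards $a \to b$, because $d(b) \neq d(a) + 1$; no shortest $s \sim t$ path can ever use that edge.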
Next, let's introduce the idea of a blocking flow. A flow $f$ is a blocking flow for flow network $G$ if there are no $s \sim t$ paths in the subgraph obtained by removing saturated edges from $G$. Exercise. Prove or disprove: every maximum flow is a blocking flow. Exercise. Prove or disprove: every blocking flow is a maximum flow. Exercise. Prove or disprove: If $f$ is a blocking flow for $G$, then there are no augmenting paths in $G_f$. Stated another way, a blocking flow is just a flow which prevents augmentation using only forward residual edges. Notice that our first algorithm was simply: find and augment along $s \sim t$ paths until the current flow is a blocking flow. Each blocking flow can represent the flow produced by a bunch of augmenting paths. Dinic's algorithm will repeatedly find blocking flows and update the global flow using these blocking flows instead of individual augmenting paths. Let's make this notion of “augmenting a flow with another flow” more formal. Let $f$ and $b$ be two flows on $G$. Define the augmentation of $f$ by $b$ (or more simply, just the sum of $f$ and $b$) as flow produced by combining the two flows for each edge: $f' = f + b$ iff $f'(e) = f(e) + b(e)$ for all $e \in G$. At a very high level, Dinic's algorithm can be described as follows. Denote the flow at iteration $i$ as $f_i$. Let $f_0$ initially be an empty flow. Perform $O(V)$ phases of augmentation. In each phase, do the following: 1. Construct the level graph $L(G_{f_i})$ of the residual graph $G_{f_i}$. 2. Compute a blocking flow $b_i$ of $L(G_{f_i})$. 3. Update the flow by augmenting it with the blocking flow: let $f_{i+1} = f_i + b_i$. This description does not look intuitive at all. Weren't we trying to find all augmenting paths with the same length all in one phase? Why these notions of level graph and blocking flow? The reason why “find all augmenting paths with the same length” is the same as “find a blocking flow in the level graph” is made clear by the following lemma. Lemma 12. (Augmentation by Level Graph Blocking Flow Increases Augmenting Path Length) Let $f$ be a flow and $b$ be a blocking flow in $L(G_f)$. The distance from the source to the sink is strictly greater in $G_{f+b}$ than in $G_f$: $$d_{f+b}(t) > d_f(t)$$ Problem. Prove Lemma 12. (Hint: Consider Lemma 9.) If you understand the proofs for the efficiency of Edmonds-Karp algorithm and the concepts above, it is actually not impossible to complete with Dinic's algorithm on your own. All you need is to find a way to perform each phase of augmentation in $O(VE)$ time. Exercise. Attempt to complete Dinic's algorithm on your own. (Hints: For a single phase, how much time is needed to construct the level graph? At most how many individual augmenting paths can make up a blocking flow of the level graph? Using the level graph, can we find one such augmenting path in $O(V)$? What if we allow the level graph to be modified every time we augment along a path? In particular, what if we can delete nodes and edges from the level graph?) Did you figure it out? It is easy to perform steps 1 and 3 of each phase both in $O(E)$ time. Step 2 is trickier. Simply performing a DFS/BFS to find individual augmenting paths in the level graph to compute the blocking flow still requires $O(E)$ per path. By Corollary 10, this makes each phase require $O(E^2)$ time. This is really just Edmonds-Karp stated in an unnecessarily fancier way. However, if we can somehow find individual augmenting paths in the level graph in $O(V)$ time, then we are done. 
DFS happens to help us in this case. Let's assume that we get lucky, and picking the first outgoing edge in every DFS call leads us to the sink. Then, we can find one augmenting path (plus update the blocking flow and delete bottleneck edges from the level graph) in $O(V)$. The problem is, we can be unlucky in the DFS, and reach a dead end, say $v$, that has no path to the sink, causing us to backtrack and to require $O(E)$ time to find one augmenting path. In this case, however, we are sure that no augmenting paths will ever pass through $v$ until the next phase, so we can delete $v$ (and all edges incident to it) from the level graph before backtracking. Since there are only $V$ nodes in the graph, this unlucky case will only happen $O(V)$ times. We're done. See I.56-I.69 for illustrations, pseudocode, and a more detailed proof.

Exercise. Before looking at the implementation of Dinic's algorithm below, re-solve UVa 820 - Internet Bandwidth, this time using your own implementation of Dinic's algorithm. Compare the actual running times of Edmonds-Karp's and Dinic's algorithms for this problem.

Some care is needed to ensure that deleting a node or edge from the level graph can actually be done in constant time, to ensure the overall running time of the algorithm is $O(V^2E)$. Here is a clean implementation of Dinic's algorithm.

```cpp
#include <bits/stdc++.h>
#define MAX_N 100
#define INF 1'000'000
using namespace std;

int n, m, s, t;
unordered_set<int> adj[MAX_N+1];       // original adjacency (both directions)
unordered_set<int> L_adj[MAX_N+1];     // explicit level graph
unordered_set<int> L_adj_rev[MAX_N+1]; // to avoid extra linear factor for node deletion
int c[MAX_N+1][MAX_N+1];
int f[MAX_N+1][MAX_N+1];
int d[MAX_N+1]; // distance for level graph
int p[MAX_N+1]; // parent for blocking flow

bool make_level_graph() {
    for(int u = 0; u <= MAX_N; u++) L_adj[u].clear(), L_adj_rev[u].clear();
    memset(d, -1, sizeof d);
    d[s] = 0;
    queue<int> q;
    q.push(s);
    while(!q.empty()) {
        int u = q.front(); q.pop();
        for(int v : adj[u]) {
            if(c[u][v] - f[u][v] > 0) {
                if(d[v] == -1) {
                    d[v] = d[u] + 1;
                    q.push(v);
                }
                if(d[v] == d[u] + 1) {
                    L_adj[u].insert(v);
                    L_adj_rev[v].insert(u);
                }
            }
        }
    }
    return d[t] != -1;
}

bool dfs(int u) {
    if(u == t) return true;
    // always take the current first remaining level-graph neighbor; a failed
    // recursive call deletes that neighbor from L_adj[u], so the loop makes progress
    while(!L_adj[u].empty()) {
        int v = *L_adj[u].begin();
        if(dfs(v)) {
            p[v] = u;
            return true;
        }
    }
    // node $u$ has no path to the sink, delete it from level graph
    for(int w : L_adj_rev[u]) L_adj[w].erase(u);
    return false;
}

bool find_aug_path() {
    memset(p, -1, sizeof p);
    p[s] = 0;
    return dfs(s);
}

int main() {
    memset(c, 0, sizeof c);
    memset(f, 0, sizeof f);
    cin >> n >> s >> t >> m;
    for(int i = 0; i < m; i++) {
        int u, v, capacity;
        cin >> u >> v >> capacity;
        adj[u].insert(v);
        adj[v].insert(u);
        c[u][v] += capacity;
    }
    int max_flow_value = 0;
    while(make_level_graph()) {
        while(find_aug_path()) {
            int b = INF;
            for(int v = t, u = p[v]; v != s; v = u, u = p[v]) b = min(b, c[u][v] - f[u][v]);
            for(int v = t, u = p[v]; v != s; v = u, u = p[v]) {
                if(c[u][v] - f[u][v] == b) {
                    // delete bottleneck edges from the level graph
                    L_adj[u].erase(v);
                }
                f[u][v] += b, f[v][u] -= b;
            }
            max_flow_value += b;
        }
    }
    cout << max_flow_value << endl;
    return 0;
}
```

Unfortunately, if you try using this implementation to solve UVa 820 - Internet Bandwidth, you will get TLE. Explicitly maintaining a level graph significantly degrades the running time. Instead, let's try to retrieve the level graph implicitly using only the distance information for each node. Implementing this properly is not trivial, and a buggy implementation can very easily make the running time degenerate to $O(VE^2)$. The idea for the implementation is, when we run DFS, we use the original adjacency list to get the neighbors of a node. However, to ensure that a neighbor $v$ is actually a neighbor of $u$ in the current level graph, we need to check three things:
1. The distances are correct, namely $d_v = d_u + 1$.
2. We haven't yet deleted the edge $e = (u, v)$ from the level graph in a previous augmentation step. In other words, $c(e) - f(e) > 0$.
3. We haven't yet deleted node $v$ from the level graph. We can delete a node by marking its distance to be some dummy value like -1, so that the first condition subsumes this one.

We ignore $v$ if it fails to satisfy any of these three properties. In addition to checking these conditions, we have to ensure that we don't repeatedly visit some edge or node that no longer exists in the level graph. Otherwise, all iterations of DFS will degenerate to $O(E)$, causing the entire algorithm to degenerate to $O(VE^2)$. Observe that once we discover a neighbor $v$ that we should ignore, we should continue to ignore it for the rest of the augmentation phase. We can therefore maintain for each node $u$ a pointer to its first neighbor that still satisfies the three conditions above. Initially, this pointer points to its first neighbor in the original adjacency list. Starting from node $u$, if we are in a lucky case, its neighbor $v$ satisfies the three conditions above, and the DFS from its neighbor $v$ succeeds. Otherwise, we are in an unlucky case. In this case, the neighbor $v$ will never be a valid neighbor of $u$ in the level graph until the end of the current augmentation phase, so we move the pointer of $u$ to its next neighbor. This effectively deletes $(u, v)$ from the level graph and helps us avoid repeated visits. Carefully read the code below for the details, and convince yourself that this implementation is $O(V^2E)$.

```cpp
#include <bits/stdc++.h>
#define MAX_N 100
#define INF 1'000'000
using namespace std;

int n, m, s, t;
vector<int> adj[MAX_N+1]; // revert to adj list to make it easy to keep track of ignored neighbors
int adj_ptr[MAX_N+1];     // the index of the first neighbor of a node which is not yet ignored
int c[MAX_N+1][MAX_N+1];
int f[MAX_N+1][MAX_N+1];
int d[MAX_N+1]; // distance for level graph
int p[MAX_N+1]; // parent for blocking flow

bool make_level_graph() {
    memset(d, -1, sizeof d);
    d[s] = 0;
    queue<int> q;
    q.push(s);
    while(!q.empty()) {
        int u = q.front(); q.pop();
        for(int v : adj[u]) {
            if(c[u][v] - f[u][v] > 0 && d[v] == -1) {
                d[v] = d[u] + 1;
                q.push(v);
            }
        }
    }
    return d[t] != -1;
}

bool dfs(int u) {
    if(u == t) return true;
    for(int &i = adj_ptr[u]; i < (int) adj[u].size(); i++) {
        int v = adj[u][i];
        if(d[v] == d[u] + 1 && c[u][v] - f[u][v] > 0 && dfs(v)) {
            // lucky case: we immediately return, and adj_ptr[u] remains untouched
            p[v] = u;
            return true;
        }
        // Unlucky case: Either edge $(u, v)$ or node $v$ doesn't exist in the level graph.
        // Moving to the next neighbor increments adj_ptr[u] along with $i$,
        // because we assigned $i$ by reference.
        // This effectively removes $v$ from the level graph neighbor list of $u$
        // until the end of the current phase.
    }
    // node $u$ has no path to the sink, "delete" it from level graph
    d[u] = -1;
    return false;
}

bool find_aug_path() {
    memset(p, -1, sizeof p);
    p[s] = 0;
    return dfs(s);
}

int main() {
    memset(c, 0, sizeof c);
    memset(f, 0, sizeof f);
    cin >> n >> s >> t >> m;
    for(int i = 0; i < m; i++) {
        int u, v, capacity;
        cin >> u >> v >> capacity;
        // adj list will have duplicates, but this doesn't hurt running time too much
        // BFS for making the level graph will still be $O(E)$
        adj[u].push_back(v);
        adj[v].push_back(u);
        c[u][v] += capacity;
    }
    int max_flow_value = 0;
    while(make_level_graph()) {
        // with each new phase, the first non-ignored neighbor is
        // its first neighbor in the original adjacency list
        memset(adj_ptr, 0, sizeof adj_ptr);
        while(find_aug_path()) {
            int b = INF;
            for(int v = t, u = p[v]; v != s; v = u, u = p[v]) b = min(b, c[u][v] - f[u][v]);
            for(int v = t, u = p[v]; v != s; v = u, u = p[v]) f[u][v] += b, f[v][u] -= b;
            max_flow_value += b;
        }
    }
    cout << max_flow_value << endl;
    return 0;
}
```

Interestingly, using a data structure called a link-cut tree (invented by Sleator and Tarjan in 1982), the time required to find a blocking flow in the level graph can be reduced from $O(VE)$ to $O(E \lg V)$, making the total time $O(VE \lg V)$. With some additional techniques and data structures, each blocking flow can be found in $O(E \lg (V^2/E))$, making the total time $O(VE \lg (V^2/E))$. These are quite close to the best bounds for the max-flow problem known today. However, because the data structures are quite complicated and the constant factors in the running time are quite large, this is not used in practice. The best bounds are $O(VE)$, due to an algorithm made by Orlin in 2013. (Note how recent it is! Research on network flows is still very active.) But that algorithm is extremely complicated and impractical today. Let's now turn our attention to an algorithm of moderate difficulty to understand, and which works very well in practice.

### Push-Relabel Approach to the Maximum Flow Problem

A radically different approach to the max flow problem was introduced by Goldberg and Tarjan in 1988, called the push-relabel approach. Similar to the Ford-Fulkerson algorithm, the basic skeleton of the approach is not an algorithm itself. There is a part which needs to be specified in full, and the overall algorithm can easily be improved by simple tweaks to this part. Since then, a number of different approaches with strictly better asymptotic complexities have been invented, but they have not been proven to be more efficient in practice than push-relabel algorithms. Hence, push-relabel algorithms are still the gold standard for maximum flow. Similar to our discussion of the augmenting paths method, we will first describe and implement the simplest version, and add the improvements later.

### Practice Problems

Before moving on to the next section, I recommend practicing what you just learned with the following problems first (submission is not required for these problems): Try to use all three algorithms (shortest augmenting paths, blocking flow, and pre-flow push-relabel) to solve each problem.

## Bipartite Matching and Hall's Marriage Theorem

### Reduction to Maximum Flow

At first glance, bipartite matching and maximum flow appear to be completely different problems. However, there is a very simple reduction from the former to the latter.
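The reduction itself is not spelled out in the surviving text, so here is a hedged sketch of the standard construction (a description added here, not the author's own write-up): add a source $s$ with a capacity-1 edge to every left-side node, a sink $t$ with a capacity-1 edge from every right-side node, and a capacity-1 edge for every allowed left-right pair; the value of a maximum $s \sim t$ flow then equals the size of a maximum matching, and the matched pairs are exactly the left-right edges carrying one unit of flow. The snippet below builds this network on top of the Edmonds-Karp implementation shown earlier (it reuses its globals `n`, `s`, `t`, `adj`, `c`); the names `L`, `R` and `pairs` are placeholders introduced only for illustration.

```cpp
// Sketch: construct the matching network for the earlier max-flow code.
// Left-side nodes are numbered 1..L, right-side nodes L+1..L+R.
int L, R;                    // sizes of the two sides (assumed given)
vector<pair<int,int>> pairs; // allowed pairs (l, r) with 1 <= l <= L and 1 <= r <= R

void build_matching_network() {
    n = L + R + 2;           // total number of nodes
    s = L + R + 1;           // source
    t = L + R + 2;           // sink
    for(int l = 1; l <= L; l++) {
        adj[s].insert(l); adj[l].insert(s);
        c[s][l] = 1;         // source -> left node, capacity 1
    }
    for(int r = 1; r <= R; r++) {
        adj[L + r].insert(t); adj[t].insert(L + r);
        c[L + r][t] = 1;     // right node -> sink, capacity 1
    }
    for(auto &pr : pairs) {
        int l = pr.first, r = pr.second;
        adj[l].insert(L + r); adj[L + r].insert(l);
        c[l][L + r] = 1;     // left -> right, capacity 1
    }
}
```

Because every edge has unit capacity, each augmenting path increases the matching by one, so even the crude $O(EF)$ bound from the earlier exercise already gives $O(VE)$ here; the Hopcroft-Karp subsection below improves on this.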
### Analysis of Maximum Flow Algorithms on Unit-Capacity Simple Networks

(This section is under construction)

### Alternating Chains (Kuhn) and Berge's Lemma

(This section is under construction)

### Multiple Alternating Chains (Hopcroft-Karp)

(This section is under construction)

### Practice Problems

Before moving on to the next section, I recommend practicing what you just learned with the following problems first (submission is not required for these problems):

## Disjoint Paths and Menger's Theorem

(This section is under construction)

## Special Cases of NP Complete Problems Reduced to Bipartite Matching

(This section is under construction)

## Minimum Cost Flows

(This section is under construction)
https://www.snapsolve.com/solutions/Whichof-the-following-is-in-direct-proportion-A-One-side-of-a-cuboid-and-its-vol-1672378031836162
## Question (Class 8, Maths)

Which of the following is in direct proportion? ( )
A. One side of a cuboid and its volume.
B. Speed of a vehicle and the distance travelled in a fixed time interval.
C. Change in weight and height among individuals.
D. Number of pipes to fill a tank and the time required to fill the same tank.

Answer: B

## Solution

We know that distance $$=$$ speed $$\times$$ time. For a fixed time interval, distance $$\propto$$ speed, so the distance travelled is directly proportional to the speed. Hence option $$(B)$$ is correct. The remaining options are not in direct proportion; option (D), for instance, is an inverse proportion, since more pipes fill the tank in less time.
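To see the direct proportion concretely (the numbers below are illustrative and not part of the original problem): in a fixed interval of $$2$$ hours, a vehicle moving at $$40\ \text{km/h}$$ covers $$80\ \text{km}$$, while at $$80\ \text{km/h}$$ it covers $$160\ \text{km}$$. Doubling the speed doubles the distance, and the ratio stays constant:

$$\frac{80\ \text{km}}{40\ \text{km/h}}=\frac{160\ \text{km}}{80\ \text{km/h}}=2\ \text{h}$$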
https://gmatclub.com/forum/if-x-is-positive-what-is-the-value-of-x-265914.html
# If x is positive, what is the value of x^(1/2)?

vhsneha (Intern), 18 May 2018:

If x is positive, what is the value of $$\sqrt{x}$$?

(1) $$\sqrt[3]{x}=2$$
(2) $$x^2=64$$

Comment: The official answer is D. However, since the question stem doesn't state anything about the sign of x^(1/2) (only that x is positive), I am not convinced that there is a unique answer, since x^(1/2) could be $$\pm 2\sqrt{2}$$.

PS: I tried using the math formula buttons but they didn't work for me; apologies for the formatting.

(Last edited by Bunuel, who renamed the topic and edited the question.)

arosman (All Day Test Prep), 18 May 2018:

You can't have the square root of a negative number. Imaginary numbers are way beyond the scope of the GMAT. If a question asks for $$\sqrt{x}$$, you can assume x is positive or zero.

vhsneha (Intern), 18 May 2018:

I agree, and I am considering x to be positive (8). However, the question asks for the square root of x. It doesn't say that the square root of x is positive, hence it can be $$\pm 2\sqrt{2}$$. Am I missing something?

Manager, 18 May 2018:

On the GMAT, we only consider the positive value of a square root. Example: x = $$\sqrt{4}$$ gives x = 2, whereas $$x^2 = 4$$ gives x = +2 or -2.

Now coming to the question, as we are asked about $$\sqrt{x}$$:

St1: $$x^{1/3}=2$$, so x = $$2^3$$ and $$\sqrt{x} = \sqrt{2^3}$$. We can definitely get the value. Sufficient.

St2: $$x^2=64$$, so x = +8 or -8. Since we cannot take the square root of a negative number, x = 8 and $$\sqrt{x} = \sqrt{8}$$. We can definitely get the value. Sufficient.

Manager, 18 May 2018:

When the GMAT gives you a square root symbol, it's referring to one specific value: the positive square root. From statement 1: x = 2^3 = 8. Sufficient. From statement 2: x^2 = 64, so |x| = 8 and x can be +8 or -8. The question stem says x is positive, so only x = 8 works. Sufficient. Answer: D.

Bunuel (Math Expert), 19 May 2018:

If x is positive, what is the value of $$\sqrt{x}$$?

(1) $$\sqrt[3]{x}=2$$. Take the third power: x = 8, so $$\sqrt{x}=\sqrt{8}$$. Sufficient.

(2) $$x^2=64$$, so x = 8 or x = -8. Since we are told that x is positive, x = 8 and $$\sqrt{x}=\sqrt{8}$$. Sufficient.

On the original poster's concern: we are told that x is positive, so $$\sqrt{x}$$ cannot be $$-2\sqrt{2}$$. The square root symbol cannot give a negative result: $$\sqrt{4}=2$$, NOT +2 and -2. (In contrast, the equation x^2 = 4 has TWO solutions, x = 2 and x = -2.)

Also, even roots of negative numbers are not defined for the GMAT ($$\sqrt[\text{even}]{\text{negative}}$$ is undefined), so you don't need complex numbers. The GMAT deals only with real numbers: integers (-3, -2, -1, 0, 1, 2, 3, ...), fractions/decimals (3/2, 4/3, 0.7, 17.5, ...) and irrational numbers ($$\sqrt{3}$$, $$\sqrt{2}$$, $$\pi$$, ...).
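For reference (this note is an addition for clarity, not a post from the thread), the value that both statements pin down can be simplified:

$$\sqrt{8}=\sqrt{4\cdot 2}=2\sqrt{2}\approx 2.83$$

Because the radical symbol denotes only the nonnegative root, this single value is the answer under either statement, which is why the answer is D.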
https://www.key2physics.org/uniform-circular-motion
Uniform Circular Motion

A particle or an object is said to be in circular motion if it follows a path around a circle or a circular arc. If it also has constant speed, it is in uniform motion; when both of these conditions are met, the object is in uniform circular motion. The velocity and the acceleration of the object have constant magnitude, but their directions change continuously. The acceleration always points towards the center of the circle and is called centripetal acceleration; the velocity is tangent to the circle at the point where the object is.

Following the same logic used in linear motion (the velocity vector turns through one full revolution as the object goes once around the circle), we can find the equation for centripetal acceleration:

$$a= \frac {v^2}{R}$$

In one revolution the object travels the entire circumference, so the time needed for one trip around the circle is

$$T=\frac {2\pi R}{v}$$

Every accelerating body has a net force acting on it; in this case it is the centripetal force. Using Newton's second law,

$$F=ma$$

Replacing $$a$$ we get:

$$F=\frac {mv^2}{R}$$

Example 1

A military plane is flying with a constant speed of 200 m/s. It then enters a loop with a radius of 400 m. What is the value of the centripetal acceleration of the plane?

Given: v = 200 m/s, R = 400 m, a = ?

Solution:

$$a=\frac {v^2}{R}=\frac {(200\;m/s)^2}{400\;m}= 100\; m/s^2$$

This value is equal to 100/9.81 ≈ 10.2 g. This is a huge acceleration; a pilot exposed to it would quickly lose consciousness, which is why the special g-suit was invented to keep pilots safe.

Example 2

The International Space Station flies 520 kilometers above Earth and needs 90 minutes to make one trip around Earth. Assuming the path is circular, calculate:

a. The radius of the circle
b. The total distance traveled in one day
c. The velocity of the station
d. The acceleration of the station
e. The centripetal force acting on an astronaut that has a mass of 80 kg

Solution:

a. The station circles at a height of 520 km above Earth, and the center of this circle is at Earth's center. The radius of Earth is $$6.37 \times 10^6\; m$$, so the radius of the circle is

$$R=\text{radius of earth} + \text{height above earth}$$

$$R=6.37\times10^6\;m+5.20\times10^5\;m=6.89\times10^6\;m$$

b. To calculate the total distance traveled in one day, first find the number of trips in one day:

$$24\;h \times {60\;min\over 1\;h}= 1440\;minutes$$

$$1440/90=16$$

Thus, the station completes 16 trips during one day. The distance traveled during one trip is the circumference of its circular path:

$$d=2\pi R = 2\pi( 6.89\times10^6\;m)= 43,291,146\;m$$

Multiplying by the number of trips gives the total distance:

$$D=16d=16 \times 43,291,146\;m= 692,658,336\;m$$

c. The velocity of the station follows from $$v=\frac {d}{T}$$, where $$d$$ is the distance traveled during one trip and $$T$$ is the time needed for that trip.

$$T=90\;min \times {60\;s \over 1\;min}=5400\;s$$

$$v={43,291,146\;m \over 5400\;s}= 8,016.9\;m/s$$

d. The acceleration of the station is

$$a=\frac{v^2}{R}=\frac {(8016.9\;m/s)^2}{6.89\times 10^6\;m} = 9.33\;m/s^2$$

e. With m = 80 kg, the force is

$$F=ma=(80\;kg)(9.33\;m/s^2)=746.4\;N$$
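If you would like to verify the arithmetic in Example 2, a few lines of code reproduce the numbers above (up to rounding). This is only a sanity check written for this page; the constants are the ones assumed in the example, such as Earth's radius of $$6.37\times10^6\;m$$.

```
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    double R = 6.37e6 + 5.20e5;   // orbit radius: Earth's radius plus 520 km altitude (m)
    double T = 90.0 * 60.0;       // orbital period: 90 minutes in seconds
    double d = 2.0 * pi * R;      // distance covered in one trip (circumference, m)
    double v = d / T;             // orbital speed (m/s)
    double a = v * v / R;         // centripetal acceleration (m/s^2)
    double F = 80.0 * a;          // centripetal force on an 80 kg astronaut (N)
    printf("R = %.2e m\n", R);
    printf("one trip = %.0f m, 16 trips = %.0f m\n", d, 16.0 * d);
    printf("v = %.1f m/s, a = %.2f m/s^2, F = %.1f N\n", v, a, F);
    return 0;
}
```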
https://openmdao.github.io/dymos/features/phases/segments.html
# Segments

All phases in Dymos are decomposed into one or more segments in time. These segments serve the following purposes:

• Gauss-Lobatto collocation and the Radau Pseudospectral method model each state variable as a polynomial in nondimensional time within each segment.
• Each control is modeled as a polynomial in nondimensional time within each segment.

The order of the state polynomial within each segment is given by the phase argument transcription_order. In Dymos the minimum supported transcription order is 3. State-time histories within a segment are modeled as Lagrange polynomials. Continuity in state value may be enforced via linear constraints at the segment boundaries or by specifying a compressed transcription, whereby the state value at a segment boundary is provided as a single value. The default compressed transcription yields an optimization problem with fewer variables, but in some situations using uncompressed transcription can result in more robust convergence.
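For readers unfamiliar with Lagrange interpolation, the representation referred to above can be written in standard notation (this is the generic textbook form, not an equation taken from the Dymos documentation). On a segment with nodes $$\tau_0, \dots, \tau_n$$ in nondimensional time, a state $$x$$ is approximated as

$$x(\tau) \approx \sum_{i=0}^{n} x_i \, \ell_i(\tau), \qquad \ell_i(\tau) = \prod_{j \neq i} \frac{\tau - \tau_j}{\tau_i - \tau_j},$$

so a transcription_order of 3 corresponds to cubic state polynomials within each segment.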
http://cpr-quantph.blogspot.com/2013/07/13076541-ahmad-nawaz.html
## Thursday, July 25, 2013

In its normal form, the prisoners' dilemma (PD) is represented by a payoff matrix showing the players' strategies and payoffs. To obtain the distinguishing trait and strategic form of PD, certain constraints are imposed on the elements of its payoff matrix. We quantize PD by a generalized quantization scheme to analyze its strategic behavior in the quantum domain. The game starts with a general entangled state of the form $\left|\psi\right\rangle = \cos\frac{\xi}{2}\left|00\right\rangle + i\sin\frac{\xi}{2}\left|11\right\rangle$ and the measurement for payoffs is performed in entangled and product bases. We show that for both measurements there exist respective cutoff values of the entanglement of the initial quantum state up to which the strategic form of the game remains intact. Beyond these cutoffs the quantized PD behaves like the chicken game up to another cutoff value. For the measurement in the entangled basis the dilemma is resolved for $\sin\xi>\frac{1}{7}$ with $Q\otimes Q$ as a NE, but the quantized game behaves like PD when $\sin\xi>\frac{1}{3}$; whereas in the range $\frac{1}{7}<\sin\xi<\frac{1}{3}$ it behaves like the chicken game (CG) with $Q\otimes Q$ as a NE. For the measurement in the product basis the quantized PD behaves like classical PD for $\sin^{2}\frac{\xi}{2}<\frac{1}{3}$ with $D\otimes D$ as a NE. In the region $\frac{1}{3}<\sin^{2}\frac{\xi}{2}<\frac{3}{7}$ the quantized PD behaves like classical CG with $C\otimes D$ and $D\otimes C$ as NE.
https://socratic.org/questions/58e7c6be7c01490e3eb3cd1d
# Question #3cd1d

You can't convert ${\text{g/cm}}^{3}$ directly to $\text{g/mol}$; if that is really what you mean, more information (such as the substance's molar mass) must be provided.

If you meant to convert to $\text{g/mL}$, there's a simple conversion factor, $1\ \text{cm}^3 = 1\ \text{mL}$, so

$X\ \text{g/cm}^3 \times \frac{1\ \text{cm}^3}{1\ \text{mL}} = X\ \text{g/mL}$
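As an illustration of the extra information that would be needed (the numbers below are standard reference values for aluminium, not something given in the question): with a density of $2.70\ \text{g/cm}^3$ and a molar mass of $26.98\ \text{g/mol}$, the two can only be combined into a molar volume rather than converted into one another:

$\frac{26.98\ \text{g/mol}}{2.70\ \text{g/cm}^3} \approx 9.99\ \text{cm}^3\text{/mol}$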
https://excellup.com/seventh_math/7_math_chapter_8_2.aspx
# Comparing Quantities

## Exercise 8.2

Question 1: Convert the given fractional numbers to per cents.

(a) 1/8
Answer: 1/8 xx 100 = 12.5%

(b) 5/4
Answer: 5/4 xx 100 = 125%

(c) (3)/(40)
Answer: (3)/(40) xx 100 = 7.5%

(d) 2/7
Answer: 2/7 xx 100 = 28.57%

Question 2: Convert the given decimal fractions to per cents.

(a) 0.65
Answer: 0.65 xx 100 = 65%

(b) 2.1
Answer: 2.1 × 100 = 210%

(c) 0.02
Answer: 0.02 × 100 = 2%

(d) 12.35
Answer: 12.35 xx 100 = 1235%

Question 3: Estimate what part of the figures is coloured and hence find the per cent which is coloured.

Answer: (i) 25% (ii) 60% (iii) 37.5%

Question 4: Find:

(a) 15% of 250
Answer: (250 xx 15)/(100) = 37.5

(b) 1% of 1 hour
Answer: Let us first convert 1 hour into seconds: 1 hr = 60 xx 60 = 3600 second. Then 3600 xx 1% = (3600 xx 1)/(100) = 36 second

(c) 20% of Rs. 2500
Answer: (2500 xx 20)/(100) = Rs. 500

(d) 75% of 1 kg
Answer: Let us first convert 1 kg into g: 1 kg = 1000 g. Then (1000 xx 75)/(100) = 750 g

Question 5: Find the whole quantity if

(a) 5% of it is 600.
Answer: Let us assume the whole quantity = x
x xx (5)/(100) = 600
Or, x = (600 xx 100)/(5) = 12000

(b) 12% of it is Rs. 1080.
Answer: Let us assume the whole quantity = x
x xx (12)/(100) = 1080
Or, x = (1080 xx 100)/(12) = Rs. 9000

(c) 40% of it is 500 km.
Answer: Let us assume the whole quantity = x
x xx (40)/(100) = 500
Or, x = (500 xx 100)/(40) = 1250 km

(d) 70% of it is 14 minutes.
Answer: Let us assume the whole quantity = x
x xx (70)/(100) = 14
Or, x = (14 xx 100)/(70) = 20 min

(e) 8% of it is 40 litres.
Answer: Let us assume the whole quantity = x
x xx (8)/(100) = 40
Or, x = (40 xx 100)/(8) = 500 litres

Question 6: Convert given per cents to decimal fractions and also to fractions in simplest forms:

(a) 25%
25% = (25)/(100) = 0.25 = 1/4

(b) 150%
150% = (150)/(100) = 1.5 = 3/2 = 1 1/2

(c) 20%
20% = (20)/(100) = 0.2 = 1/5

(d) 5%
5% = (5)/(100) = 0.05 = (1)/(20)

Question 7: In a city, 30% are females, 40% are males and the remaining are children. What per cent are children?

Answer: Total = 100%, females = 30%, males = 40%, children = ?
Or, 30% + 40% + children = 100%
Or, 70% + children = 100%
Or, children = 100% - 70% = 30%

Question 8: Out of 15,000 voters in a constituency, 60% voted. Find the percentage of voters who did not vote. Can you now find how many actually did not vote?

Answer: Total = 100%, voted = 60%, did not vote = ?
Or, 60% + did not vote = 100%
Or, did not vote = 100% - 60% = 40%
Number of people who did not vote:
= 15000 xx 40% = 15000 xx (40)/(100) = 6000

Question 9: Meeta saves Rs. 400 from her salary. If this is 10% of her salary, what is her salary?

Answer: Meeta's salary can be calculated as follows:
(400 xx 100)/(10) = Rs. 4000
Alternate Method: When the saving is Rs. 10, the salary is Rs. 100. Hence, when the saving is Rs. 1, the salary is Rs. 100/10. Hence, when the saving is Rs. 400, the salary is
(400 xx 100)/(10) = Rs. 4000

Question 10: A local cricket team played 20 matches in one season. It won 25% of them. How many matches did they win?

Answer: The number of matches won can be calculated as follows:
20 xx 25% = 20 xx (25)/(100) = 5