url: stringlengths 14 to 2.42k · text: stringlengths 100 to 1.02M · date: stringlengths 19 to 19 · metadata: stringlengths 1.06k to 1.1k
https://www.scienceopen.com/document?vid=a367b895-e7d0-4c73-948f-6476b949ac86
# Primary Failure of Arteriovenous Fistulae in Auto-Immune Disease S. Karger AG ### Abstract Background/Aim: Chronic haemodialysis depends on an arteriovenous fistula. Primary failure of vascular access is a common problem which is mainly related to thrombosis. As ambulatory surgery is common, it is mandatory to identify patients with a high thrombophilic risk to allow better prevention (anticoagulation) and direct re-intervention after thrombosis. The purpose of this study was to determine thrombophilic risk factors for primary access failure in order to identify patients at risk before the operation. Methods: We performed a retrospective study on 62 chronic haemodialysis patients who received permanent vascular access. We evaluated established risk factors for chronic access failure as well as the number of earlier shunt operations in these patients. Results: The patients predominantly suffered from auto-immune diseases. The frequency of a successful first vascular access was above average (92.5%). We identified four major risk factors for primary access failure: number of previous vascular access thromboses (p < 0.01; R = 0.96), pre-existing thrombophilic risk factors (p < 0.01), pre-operative fibrinogen (p < 0.02), and vasculitis (p < 0.01). Conclusions: We identified four risk factors which allowed an individual risk evaluation. Among the factors investigated, the activity of the auto-immune disease was the most striking. Our data suggest that a vascular access should not be created during an active period of vasculitis. ### Most cited references (4) ### Cost analysis of ongoing care of patients with end-stage renal disease: the impact of dialysis modality and dialysis access. (2002) Care of patients with end-stage renal disease (ESRD) is important and resource intense. To enable ESRD programs to develop strategies for more cost-efficient care, an accurate estimate of the cost of caring for patients with ESRD is needed. The objective of our study is to develop an updated and accurate itemized description of costs and resources required to treat patients with ESRD on dialysis therapy and contrast differences in resources required for various dialysis modalities. One hundred sixty-six patients who had been on dialysis therapy for longer than 6 months and agreed to enrollment were followed up prospectively for 1 year. Detailed information on baseline patient characteristics, including comorbidity, was collected. Costs considered included those related to outpatient dialysis care, inpatient care, outpatient nondialysis care, and physician claims. We also estimated separately the cost of maintaining the dialysis access. Overall annual costs of care for in-center, satellite, and home/self-care hemodialysis and peritoneal dialysis were US $51,252 (95% confidence interval [CI], $47,680 to $54,824), $42,057 (95% CI, $39,523 to $44,592), $29,961 (95% CI, $21,252 to $38,670), and $26,959 (95% CI, $23,500 to $30,416), respectively (P < 0.001). After adjustment for the effect of other important predictors of cost, such as comorbidity, these differences persisted.
Among patients treated with hemodialysis, the cost of vascular access-related care was lower by more than fivefold for patients who began the study period with a functioning native arteriovenous fistula compared with those treated with a permanent catheter or synthetic graft (P < 0.001). To maximize the efficiency with which care is provided to patients with ESRD, dialysis programs should encourage the use of home/self-care hemodialysis and peritoneal dialysis. Copyright 2002 by the National Kidney Foundation, Inc. Bookmark • Record: found • Abstract: found ### Prevalence and risk factors of carotid plaque in women with systemic lupus erythematosus. (1998) To determine the prevalence of carotid atherosclerosis and associated risk factors in women with systemic lupus erythematosus (SLE). Carotid plaque and intima-media wall thickness (IMT) were measured by B-mode ultrasound in women with SLE. Risk factors associated with carotid plaque and IMT were determined at the time of the ultrasound scan and included traditional cardiovascular risk factors, SLE-specific variables, and inflammation markers. The 175 women with SLE were predominantly white (87%), with a mean age of 44.9 years (SD 11.5). Twenty-six women (15%) had a previous arterial event (10 coronary [myocardial infarction or angina], 11 cerebrovascular [stroke or transient ischemic attack], and 5 both). The mean +/- SD IMT was 0.71 +/- 0.14 mm, and 70 women (40%) had focal plaque. Variables significantly associated with focal plaque (P < 0.05) included age, duration of lupus, systolic, diastolic, and pulse pressure, body mass index, menopausal status, levels of total and low-density lipoprotein (LDL) cholesterol, fibrinogen and C-reactive protein levels, SLE-related disease damage according to the Systemic Lupus International Collaborating Clinics (SLICC) damage index (modified to exclude cardiovascular parameters), and disease activity as determined by the Systemic Lupus Activity Measure. Women with longer duration of prednisone use and a higher cumulative dose of prednisone as well as those with prior coronary events were more likely to have plaque. In logistic regression models, independent determinants of plaque (P < 0.05) were older age, higher systolic blood pressure, higher levels of LDL cholesterol, prolonged treatment with prednisone, and a previous coronary event. Older age, a previous coronary event, and elevated systolic blood pressure were independently associated with increased severity of plaque (P < 0.01). Older age, elevated pulse pressure, a previous coronary event, and a higher SLICC disease damage score were independently related to increased IMT (P < 0.05). B-mode ultrasound provides a useful noninvasive technique to assess atherosclerosis in women with SLE who are at high risk for cardiovascular disease. Potentially modifiable risk factors were found to be associated with the vascular disease detected using this method. Bookmark • Record: found • Abstract: found • Article: found ### Predicting Hemodialysis Access Failure with Color Flow Doppler Ultrasound (1998) Color flow doppler ultrasound examination of the hemodialysis access was conducted in 2,792 hemodialysis patients to evaluate its value in predicting hemodialysis access failure. After baseline assessment of vascular access function with clinical and laboratory tests including color flow doppler evaluation these patients were followed for a minimal of 6 months or until graft failure occurred (defined as surgery or angioplasty intervention, or graft loss). 
The patient demographics and vascular accesses were typical of a standard hemodialysis patient population. On the day of the color flow doppler examination systolic and diastolic blood pressure, hematocrit, urea reduction ratio, dialysis blood flow, venous line pressure at a dialysis blood flow of 250 ml/min, and access recirculation rate were measured. At the conclusion of the study 23.5% of the patients had access failure. Case mix predictors for access failure were determined using the Cox Model. Case mix predictors of access failure were race, non-white was higher than white (p < 0.005), younger accesses had a higher risk than older accesses (p < 0.025), accesses with prior thrombosis had a higher risk of failure (p = 0.042), polytetrafluoroethylene (PTFE) grafts had a higher risk than native vein fistulae (p < 0.05), loop PTFE grafts had a higher risk than straight PTFE grafts (p < 0.025), and upper arm accesses had a higher risk than forearm accesses (p = 0.033). Most significant, however, was decreased access blood flow as measured by color flow doppler (p < 0.0001). The relative risk of graft failure increased 40% when the blood flow in the graft decreased to less than 500 ml/min and the relative risk doubled when the blood flow was less than 300 ml/min. This study has shown that color flow doppler evaluation, quantifying blood flow in a prosthetic graft, can identify those grafts at risk for failure. In contrast, color doppler volume flow in native AV fistulae could not predict fistula survival. This technique is noninvasive, painless, portable, and reproducible. We believe that preemptory repair of an anatomical abnormality in vascular access grafts with decreased blood flow may decrease patient inconvenience, associated morbidity, and associated costs. Bookmark ### Author and article information ###### Journal KBR Kidney Blood Press Res 10.1159/issn.1420-4096 Kidney and Blood Pressure Research S. Karger AG 1420-4096 1423-0143 2003 2003 19 November 2003 : 26 : 5-6 : 362-367 ###### Affiliations Departments of aNephrology and bSurgery, University Hospital Essen, Essen, and cKlinikum rechts der Isar der Technischen Universität München, München, Germany ###### Article 73943 Kidney Blood Press Res 2003;26:362–367 10.1159/000073943 14610341 © 2003 S. Karger AG, Basel ###### Page count Figures: 4, Tables: 1, References: 24, Pages: 6 ###### Product Self URI (application/pdf): https://www.karger.com/Article/Pdf/73943 ###### Categories Original Paper Cardiovascular Medicine, Nephrology
2020-10-20 22:22:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32185491919517517, "perplexity": 14934.609656014625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874340.10/warc/CC-MAIN-20201020221156-20201021011156-00045.warc.gz"}
https://ndl.iitkgp.ac.in/document/MDl5cHdNUUlnd0lnZHNoQXlvOG5lRTRvaEw3RlVkTkx6UWtKQjEySXRNcz0
### Are high energy heavy ion collisions similar to a little bang, or just a very nice firework? (2008). Access Restriction Open Author Shuryak, E. V. Source CiteSeerX Content type Text File Format PDF Age Range above 22 year Education Level UG and PG ♦ Career/Technical Study Publisher Date 2008-01-01
2021-07-23 19:55:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8269674777984619, "perplexity": 7838.680735390105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150000.59/warc/CC-MAIN-20210723175111-20210723205111-00586.warc.gz"}
https://www.aakash.ac.in/book-solutions/hc-verma-solutions/class-11-physics/chapter-12-simple-harmonic-motion
# HC Verma for Class 11 Physics Chapter 12: Simple Harmonic Motion Chapter 12 Simple Harmonic Motion derives the formulas used in harmonic motion. A motion where the restoring force is directly proportional to the displacement of a body (from its mean position) is known as Simple Harmonic Motion. The direction in which this restoring force is applied is always towards the mean position. The term harmonic or periodic motion implies that the body repeats its motion after regular intervals. Thus, a body is said to be oscillating if it is moving to and fro on the same path. Consider a particle that is executing simple harmonic motion. Its acceleration is given by the formula a(t) = -ω² x(t). Here, ω represents the angular frequency of the particle. The amplitude is the maximum displacement observed on either side from the centre of oscillation. Furthermore, a few terms associated with simple harmonic motion, like time period and amplitude, have been discussed. Simply put, the frequency can be defined as the inverse of the time period. Speaking physically, frequency is indicative of the number of oscillations occurring per unit time. Frequency is measured in hertz, i.e., cycles per second. Angular harmonic motion is also a part of harmonic motion. The angular oscillations are called angular simple harmonic motion if there is a position of the body where the resultant torque on the body is zero. This position is the mean position, where θ = 0. When the body is displaced through an angle from the mean position, a restoring torque acts to bring it back. The time period of SHM can be expressed as T = 2π√(m/k), and we know f = 1/T. Thus, the frequency of a simple harmonic oscillator is given by f = (1/2π)√(k/m). An interchange between kinetic and potential energy takes place when a particle is in harmonic motion. The total energy of the particle participating in simple harmonic motion is constant and is independent of the instantaneous displacement.
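A quick numerical check of these relations (a rough sketch in Python; the mass, spring constant and amplitude values below are made-up illustrative numbers, not from the HC Verma text):

```python
import math

m = 0.5    # mass in kg (illustrative value)
k = 200.0  # spring constant in N/m (illustrative value)
A = 0.05   # amplitude in m (illustrative value)

omega = math.sqrt(k / m)            # angular frequency in rad/s
T = 2 * math.pi * math.sqrt(m / k)  # time period, T = 2*pi*sqrt(m/k)
f = 1 / T                           # frequency, f = 1/T = (1/(2*pi))*sqrt(k/m)

def x(t):
    """Displacement for SHM started from the positive extreme."""
    return A * math.cos(omega * t)

def a(t):
    """Acceleration a(t) = -omega**2 * x(t)."""
    return -omega**2 * x(t)

print(f"T = {T:.4f} s, f = {f:.4f} Hz, a(0) = {a(0.0):.4f} m/s^2")
```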
2023-03-29 22:54:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385394811630249, "perplexity": 305.96399543360275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00389.warc.gz"}
https://math.stackexchange.com/questions/3266728/easy-argument-for-non-equivalent-categories
# Easy argument for non-equivalent categories I have to give a talk about some categorical things in a student seminar soon. As this is an introductory talk I cannot assume much knowledge and need very basic arguments. For example I want to present that the category of sets is not self-dual, i.e. there is no equivalence of categories $$F : Sets \rightarrow Sets^{opp}$$. I will define an equivalence of categories as a fully faithful functor which is also essentially surjective, as this is easier to explain in a short amount of time than natural transformations. Could you give me a simple argument? One argument I found online (actually here on Stack Exchange) is that $$Sets$$ is a distributive category and $$Sets^{opp}$$ is not, but this is too difficult to explain. Btw: does anyone know a more concrete category that is equivalent to $$Sets^{opp}$$? I guess this one should be easy enough. Suppose $$F:\textbf{Set} \rightarrow \textbf{Set}^{\text{op}}$$ is an equivalence of categories. Recall that $$\text{Hom}_{\textbf{Set}}(M,N) = \emptyset$$ iff $$M$$ is non-empty and $$N$$ is empty. Now let $$M$$ be a non-empty set. We get $$\emptyset = \text{Hom}_{\textbf{Set}}(M,\emptyset) \cong_{\textbf{Set}} \text{Hom}_{\textbf{Set}^{\text{op}}}(F(M),F(\emptyset)) = \text{Hom}_{\textbf{Set}}(F(\emptyset),F(M)),$$ so that $$F(M) = \emptyset$$ and $$F(\emptyset) \neq \emptyset$$. Thus $$F$$ is not essentially surjective, in contradiction to our assumption. Yes, the category $$\textbf{Caba}$$ of complete atomic boolean algebras. This is sometimes referred to as the Lindenbaum-Tarski duality and the equivalence is given by the power set functor. • $F$ is not essentially surjective because no set of cardinality different than $|F(\emptyset)|$ and 0 is of the form $F(X)$? – Josh Jun 18 '19 at 18:34 • Yes, exactly. You got it. – TMO Jun 18 '19 at 18:38 • Thanks a lot for the example, but it probably also takes too long to explain the details of your Cabas. – Josh Jun 18 '19 at 18:43 $$\textbf{Set}$$ has an initial object (empty set) and a terminal object (any one-element set). There is a morphism from any initial object to any terminal object but no morphism from any terminal object to any initial object. Dually, $$\textbf{Set}^{\textrm{op}}$$ has an initial object and a terminal object. There is a morphism from any terminal object to any initial object but no morphism from any initial object to any terminal object. So $$\textbf{Set}$$ and $$\textbf{Set}^{\textrm{op}}$$ cannot be equivalent categories. Now when you talk about "a more concrete category" do you mean a concrete category? That Wikipedia page shows how $$\textbf{Set}^{\textrm{op}}$$ is a concrete category. • No, I just wanted an example of a category that is equivalent to $Sets^{opp}$ to show how we can understand $Sets^{opp}$ in a more concrete way without just reversing arrows. – Josh Jun 18 '19 at 18:36 • Your argument is essentially the same as the one of @ThorWittich right? Just for singletons? – Josh Jun 18 '19 at 18:44
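To see the hom-set fact used in the first argument concretely for finite sets (a small illustrative sketch, not from the thread): the set of functions A → B has |B|^|A| elements, so Hom(A, ∅) is empty exactly when A is non-empty, while Hom(∅, B) always contains the single empty function.

```python
from itertools import product

def hom(A, B):
    """All functions A -> B between finite sets, each encoded as a dict."""
    A, B = list(A), list(B)
    if not A:
        return [dict()]  # the unique empty function
    return [dict(zip(A, values)) for values in product(B, repeat=len(A))]

print(len(hom({1, 2}, set())))      # 0: no map from a non-empty set to the empty set
print(len(hom(set(), {1, 2})))      # 1: the empty function
print(len(hom({1, 2}, {1, 2, 3})))  # 9 = 3**2 maps in total
```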
2021-03-08 19:10:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581550717353821, "perplexity": 199.87378203920036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385389.83/warc/CC-MAIN-20210308174330-20210308204330-00380.warc.gz"}
https://proxieslive.com/tag/smallest/
## What is the smallest time/space complexity class that is known to contain complexity class $\mathsf{SPARSE}$ Is it known whether the complexity class of all sparse languages is contained within e.g. $$\mathsf{EXP}$$ or $$\mathsf{EXPSPACE}$$? Or what is the smallest time or space complexity class that contains complexity class $$\mathsf{SPARSE}$$? ## Find the smallest group of numbers with sum bigger than $X$ Given a list of numbers $$S$$ where $$0 < s_i < 100$$, find the smallest group of numbers with sum bigger than $$X$$. Each number can be used multiple times. Ex: for $$S = [3,4.1], X = 10$$ the solution is $$[3, 3, 4.1]$$ Is it a known problem? What will be the best way of solving it? For now, my best solution is to randomly pick numbers and repeat the process multiple times. ## Finding Smallest Frontier for Graphs of bounded "width" Let $$G$$ be a graph and $$X=x_1,x_2,…,x_n$$ be a permutation/ordering of the vertex set of $$G$$. We then let $$S_i = \{x_j:j\le i\}$$, and $$F_i$$ be the number of vertices $$v\in S_i$$ that are adjacent to some vertex $$u(v) \not\in S_i$$. We finally define $$F$$ to be a list of values $$F_i$$ sorted from largest to smallest. e.g. if $$F_1=2,F_2=1,F_3=6, F_4=2$$ we'd have $$F = 6,2,2,1$$ (we caution that in reality $$F_{i+1}-F_i\le 1$$ so the sequence featured in the example could not occur) In general, finding $$X$$ such that $$F$$ is lexicographically minimal is a task which I'd assume is NP-Hard. However, let $$\mathcal{G}_{k,t}$$ denote the family of graphs $$G$$ such that the vertex set of $$G$$ is partitioned into $$t$$ parts $$V_1,\dots,V_t$$ such that $$|V_i| \le k$$ for all $$i$$, and $$|a-b|\ge 2$$ implies there is no edge $$(u,v)$$ in $$G$$ where $$u\in V_a$$ and $$v\in V_b$$. For fixed $$k$$, and given $$G\in \mathcal{G}_{k,t}$$, is there an algorithm that finds $$X$$ such that $$F$$ is lexicographically minimal, whose worst case run time is polynomial in $$t$$? ## Smallest subarray problem Say you have an array of integers like [1, 2, 3, 4, 5, 6], the problem is to find the smallest way to break up the array into sub-arrays where each sub-array satisfies the following requirement: • sub-array.first_integer and sub-array.last_integer must have a common divisor that is not 1. So for [1, 2, 3, 4, 5, 6] the answer would be 2, because you can break it up like [1] and [2, 3, 4, 5, 6], where 2 and 6 have a common divisor of 2 (which is > 1 so it meets the requirement). You can assume the array can be huge but the numbers are not too big. Is there a way to do this in n or n*log(n) time? I think with dp and caching n^2 is possible but not sure how to do it faster. ## Fibonacci Heap smallest possible grandchildren Suppose a node of a Fibonacci heap has 52 children. What is the smallest possible number of grandchildren it can have? ## Finding the point with smallest x-ordinate between two given y-ordinates Given a set of points P=p1,p2,..pn in R2 where pi=(xi,yi), find the point with smallest x-ordinate having y-ordinates between y1 and y2, where y1 and y2 are given as inputs. I can compare the point with other points, which gives me an O(n) time algorithm. Can this be improved any further? ## Finding the smallest number that scales a set of irrational numbers to integers Say we have a set $$S$$ of $$n$$ irrational numbers $$\left\{a_1, …, a_n\right\}$$. Are there any known algorithms that can determine a scaling factor $$s \in \mathcal{R}$$ such that $$s * a_i \in \mathcal{N} \;\forall a_i \in S$$, assuming that such a factor exists? 
If multiple exist, how about the smallest one? Moreover, I wonder, under what input conditions could one assume that an algorithm for this problem can't (or can) return a valid scaling factor? If no known algorithms to this problem exist, are there any known classes of "scaling algorithms" that may solve a similar problem? ## Write the smallest positive number that can be represented by the floating point system Using a normalised floating point representation with an 8-bit mantissa and a 4-bit exponent, both stored using two's complement. (a) Write the smallest positive number that can be represented by the floating point system in the boxes below. The result is: Mantissa 0.1000000 and exponent 1000. I do not see how this can be; could someone please explain? ## Finding the smallest integer such that a given condition holds with "binary search" Setup. Suppose we are given a function $$f:\mathbb N\to\{\text{False},\text{True}\}$$ such that $$f(n)=\text{True}\implies f(n+1)=\text{True}$$ and such that $$f(n)=\text{True}$$ for some $$n$$ large enough. In natural language. The function $$f$$ imposes a condition on the natural numbers which is fulfilled once $$n$$ is large enough. My question. How can I, for any given $$f$$, find the smallest $$n$$ such that $$f(n)=\text{True}$$? A first idea would be to start with $$n=1$$ and to increment $$n$$ by one until $$f(n)$$ is True. However, this is fairly slow. So I designed a sort of "binary search" for this task. How can I translate this to Mathematica? Here is the code in Python:

```python
def bin_search(cond):
    n = 1
    while not cond(n):
        n *= 2
    lower_bound = n // 2
    upper_bound = n
    middle = (lower_bound + upper_bound) // 2
    while upper_bound - lower_bound > 1:
        if cond(middle):
            upper_bound = middle
        else:
            lower_bound = middle
        middle = (lower_bound + upper_bound) // 2
    return upper_bound
```

For example, one such condition would be $$f(n)=[H_n\geq 10],$$ where $$H_n=\sum_{i=1}^n \frac 1i$$ is the $$n$$th harmonic number. ## What are the smallest and biggest negative floating point numbers in IEEE 754 32 bit? I am stuck with a question that asks for the smallest and biggest negative floating point numbers in IEEE 754 32-bit (their representation and decimal numerical value from which one can approximate the precision of the number). So -0, NaN and Infinity do not belong to negative rational numbers. I have stumbled upon -3.403 x 10^38 and 2^-126. I came close to the first one actually. I tried to do some calculations but got kind of lost in the process as floating point representation is counter-intuitive for me, especially when calculating negative numbers. Can someone help me to clarify my thought process for the calculations so that I can find the numbers?
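Returning to the "smallest integer such that a condition holds" question above: the posted bin_search can be driven directly by the harmonic-number condition mentioned there. A minimal usage sketch in plain Python (not the Mathematica translation the poster asked for):

```python
def harmonic_at_least(threshold):
    """Monotone condition f(n) = [H_n >= threshold], with H_n the n-th harmonic number."""
    def cond(n):
        return sum(1.0 / i for i in range(1, n + 1)) >= threshold
    return cond

# Exponential search for an upper bound, then bisection, exactly as bin_search does above.
print(bin_search(harmonic_at_least(10)))  # should print 12367, the smallest n with H_n >= 10
```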
2021-05-15 14:54:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 59, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6557824611663818, "perplexity": 293.70012846442927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991370.50/warc/CC-MAIN-20210515131024-20210515161024-00231.warc.gz"}
https://bathmash.github.io/HELM/20_2_laplce_transfrm_n_inverse-web/20_2_laplce_transfrm_n_inverse-webse1.html
### 1 The Laplace transform

If $f(t)$ is a causal function then the Laplace transform of $f(t)$ is written $\mathcal{L}\{f(t)\}$ and defined by:

$\mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\, dt.$

Clearly, once the integral is performed and the limits substituted the resulting expression will involve the $s$ parameter alone since the dependence upon $t$ is removed in the integration process. This resulting expression in $s$ is denoted by $F(s)$; its precise form is dependent upon the form taken by $f(t)$. We now refine Key Point 1 (page 4).

##### Key Point 3 The Laplace Transform of a Causal Function

$\mathcal{L}\{f(t)u(t)\} \equiv \int_0^\infty e^{-st} f(t) u(t)\, dt \equiv F(s)$

To begin, we determine the Laplace transform of some simple causal functions. For example, if we consider the ramp function $f(t) = t\,u(t)$ with graph (Figure 11) we find:

$\mathcal{L}\{t\, u(t)\} = \int_0^\infty e^{-st}\, t\, dt$

Now we have the difficulty of substituting in the limits of integration. The only problem arises with the upper limit ($t = \infty$). We shall always assume that the parameter $s$ is so chosen that no contribution ever arises from the upper limit ($t = \infty$). In this particular case we need only demand that $s$ is real and positive. Using this 'rule of thumb':

$\mathcal{L}\{t\, u(t)\} = [0 - 0] - \left[0 - \frac{1}{(-s)^2}\right] = \frac{1}{s^2}$

Thus, if $f(t) = t\,u(t)$ then $F(s) = 1/s^2$. A similar, but more tedious, calculation yields the result that if $f(t) = t^n u(t)$ in which $n$ is a positive integer then:

$\mathcal{L}\{t^n u(t)\} = \frac{n!}{s^{n+1}}$

[We remember $n! \equiv n(n-1)(n-2)\dots(3)(2)(1)$.]

Task: Find the Laplace transform of the step function $u(t)$. Begin by obtaining the Laplace integral. You should obtain $\int_0^\infty e^{-st}\, dt$, since in the range of integration $t > 0$ and so $u(t) = 1$, leading to

$\mathcal{L}\{u(t)\} = \int_0^\infty e^{-st} u(t)\, dt = \int_0^\infty e^{-st}\, dt$

Now complete the integration. You should have obtained

$\mathcal{L}\{u(t)\} = \int_0^\infty e^{-st}\, dt = \left[\frac{e^{-st}}{-s}\right]_0^\infty = 0 - \left[\frac{1}{-s}\right] = \frac{1}{s}$

where, again, we have assumed the contribution from the upper limit is zero.

As a second example, we consider the decaying exponential $f(t) = e^{-at} u(t)$ where $a$ is a positive constant. This function has the graph shown in Figure 12. In this case,

$\mathcal{L}\{e^{-at} u(t)\} = \int_0^\infty e^{-st} e^{-at}\, dt = \int_0^\infty e^{-(s+a)t}\, dt = \frac{1}{s+a}$

Therefore, if $f(t) = e^{-at} u(t)$ then $F(s) = \frac{1}{s+a}$.

Following this approach we can develop a table of Laplace transforms which records, for each causal function $f(t)$ listed, its corresponding transform function $F(s)$. Table 1 gives a limited table of transforms.

Table 1: Table of Laplace Transforms

| Rule | Causal function | Laplace transform |
|------|-----------------|-------------------|
| 1 | $f(t)$ | $F(s)$ |
| 2 | $u(t)$ | $\frac{1}{s}$ |
| 3 | $t^n u(t)$ | $\frac{n!}{s^{n+1}}$ |
| 4 | $e^{-at} u(t)$ | $\frac{1}{s+a}$ |
| 5 | $\sin at \cdot u(t)$ | $\frac{a}{s^2+a^2}$ |
| 6 | $\cos at \cdot u(t)$ | $\frac{s}{s^2+a^2}$ |
| 7 | $e^{-at}\sin bt \cdot u(t)$ | $\frac{b}{(s+a)^2+b^2}$ |
| 8 | $e^{-at}\cos bt \cdot u(t)$ | $\frac{s+a}{(s+a)^2+b^2}$ |

Note: For convenience, this table is repeated at the end of the Workbook.

#### 1.1 The linearity property of the Laplace transformation

If $f(t)$ and $g(t)$ are causal functions and $c_1$, $c_2$ are constants then

$\mathcal{L}\{c_1 f(t) + c_2 g(t)\} = \int_0^\infty e^{-st}[c_1 f(t) + c_2 g(t)]\, dt = c_1 \int_0^\infty e^{-st} f(t)\, dt + c_2 \int_0^\infty e^{-st} g(t)\, dt = c_1 \mathcal{L}\{f(t)\} + c_2 \mathcal{L}\{g(t)\}$

##### Key Point 4 Linearity Property of the Laplace Transform

$\mathcal{L}\{c_1 f(t) + c_2 g(t)\} = c_1 \mathcal{L}\{f(t)\} + c_2 \mathcal{L}\{g(t)\}$

That is, the Laplace transform of a linear sum of causal functions is a linear sum of Laplace transforms. For example,

$\mathcal{L}\{2\cos t \cdot u(t) - 3t^2 u(t)\} = 2\mathcal{L}\{\cos t \cdot u(t)\} - 3\mathcal{L}\{t^2 u(t)\} = 2\left(\frac{s}{s^2+1}\right) - 3\left(\frac{2}{s^3}\right)$

Task: Obtain the Laplace transform of the hyperbolic function $\sinh at$. Begin by expressing $\sinh at$ in terms of exponential functions: $\sinh at = \frac{1}{2}(e^{at} - e^{-at})$. Now use the linearity property (Key Point 4) to obtain the Laplace transform of the causal function $\sinh at \cdot u(t)$. You should obtain $a/(s^2 - a^2)$ since

$\mathcal{L}\{\sinh at \cdot u(t)\} = \tfrac{1}{2}\mathcal{L}\{e^{at}u(t)\} - \tfrac{1}{2}\mathcal{L}\{e^{-at}u(t)\} = \tfrac{1}{2}\left(\frac{1}{s-a}\right) - \tfrac{1}{2}\left(\frac{1}{s+a}\right) = \frac{a}{s^2-a^2}$

Task: Obtain the Laplace transform of the hyperbolic function $\cosh at$. You should obtain $\frac{s}{s^2-a^2}$ since

$\mathcal{L}\{\cosh at \cdot u(t)\} = \tfrac{1}{2}\left(\frac{1}{s-a}\right) + \tfrac{1}{2}\left(\frac{1}{s+a}\right) = \frac{s}{s^2-a^2}$

Task: Find the Laplace transform of the delayed step-function $u(t-a)$, $a > 0$. Write the delayed step-function here in terms of an integral. You should obtain $\mathcal{L}\{u(t-a)\} = \int_a^\infty e^{-st}\, dt$ (note the lower limit is $a$) since:

$\mathcal{L}\{u(t-a)\} = \int_0^\infty e^{-st} u(t-a)\, dt = \int_0^a e^{-st} u(t-a)\, dt + \int_a^\infty e^{-st} u(t-a)\, dt$

In the first integral $0 < t < a$ and so $(t-a) < 0$, therefore $u(t-a) = 0$. In the second integral $a < t < \infty$ and so $(t-a) > 0$, therefore $u(t-a) = 1$. Hence

$\mathcal{L}\{u(t-a)\} = 0 + \int_a^\infty e^{-st}\, dt.$

Now complete the integration:

$\mathcal{L}\{u(t-a)\} = \int_a^\infty e^{-st}\, dt = \left[\frac{e^{-st}}{-s}\right]_a^\infty = \frac{e^{-sa}}{s}$

##### Exercise

Determine the Laplace transform of the following functions.

1. $e^{-3t} u(t)$
2. $u(t-3)$
3. $e^{-t}\sin 3t \cdot u(t)$
4. $(5\cos 3t - 6t^3)\cdot u(t)$

Answers:

1. $\frac{1}{s+3}$
2. $\frac{e^{-3s}}{s}$
3. $\frac{3}{(s+1)^2+9}$
4. $\frac{5s}{s^2+9} - \frac{36}{s^4}$
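The table entries are easy to spot-check with a computer algebra system; a small sketch using SymPy (assuming it is available; the integral runs from 0 to infinity, so the $u(t)$ factor is implicit, and the exact printed form of each result may differ slightly):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Rule 3 with n = 2: L{t^2 u(t)} = 2!/s^3
print(sp.laplace_transform(t**2, t, s, noconds=True))          # expect 2/s**3

# Rule 4: L{e^(-a t) u(t)} = 1/(s + a)
print(sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True))  # expect 1/(a + s)

# The sinh Task: L{sinh(a t) u(t)} = a/(s^2 - a^2)
print(sp.simplify(sp.laplace_transform(sp.sinh(a*t), t, s, noconds=True)))
```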
2022-11-29 02:19:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 85, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9707814455032349, "perplexity": 372.9041074444466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00296.warc.gz"}
https://forum.wilmott.com/viewtopic.php?f=10&t=101092&start=15
SERVING THE QUANTITATIVE FINANCE COMMUNITY • 1 • 2 ppauper Topic Author Posts: 70239 Joined: November 15th, 2001, 1:29 pm Re: quadruple precision complex erfc code (qerfc) in fortran Was erfc() good enough or was qerfc() necessary? BTW how did they come up with those numbers? I needed quad precision and the code linked by outrun had the constants for the quad case which I could cut and paste into the fortran code I never did find out where the numbers came from, but there's structure in there, q1=3^2 q0 q2=5^2 q0 q3=7^2 q0 and so on outrun Posts: 4573 Joined: April 29th, 2016, 1:40 pm Re: quadruple precision complex erfc code (qerfc) in fortran It looks like a high precision version of "Rational Chebyshev approximations for the error function" as implemented in Cernlib C300 ..or maybe the root source is http://www.kurims.kyoto-u.ac.jp/~ooura/ Posts: 23951 Joined: September 20th, 2002, 8:30 pm Re: quadruple precision complex erfc code (qerfc) in fortran Do you also need to increase the number of terms to get full accuracy in quad precision? ppauper Topic Author Posts: 70239 Joined: November 15th, 2001, 1:29 pm Re: quadruple precision complex erfc code (qerfc) in fortran The test case I'm running has an answer in the literature of 1.50448. With the double precision error function and a 300x300 matrix, I'm getting 1.50377 and with the quad precision erfc and a 400x400 matrix, I get 1.50406 the remaining error  is because I'm truncating an infinite sum, and when I look at the coefficients at the end it's a fairly flat spectrum ppauper Topic Author Posts: 70239 Joined: November 15th, 2001, 1:29 pm Re: quadruple precision complex erfc code (qerfc) in fortran Do you also need to increase the number of terms to get full accuracy in quad precision? the number of terms in the error function routine has greatly increased, it's gone from 6 terms to 26 terms so will run a lot slower Posts: 23951 Joined: September 20th, 2002, 8:30 pm Re: quadruple precision complex erfc code (qerfc) in fortran Thanks! outrun Posts: 4573 Joined: April 29th, 2016, 1:40 pm Re: quadruple precision complex erfc code (qerfc) in fortran ..that 1.504 only has 3 digits accuracy and hasn't improved that much. The new quad erfc has 32 digits accuracy, so there is a big gap between the two.. Is the linear algebra part of the computation unstable and the main source of error? Are you e.g. solving a system of equations or inverting a matrix? Cuchulainn Posts: 60835 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: Re: quadruple precision complex erfc code (qerfc) in fortran Do you also need to increase the number of terms to get full accuracy in quad precision? the number of terms in the error function routine has greatly increased, it's gone from 6 terms to 26 terms so will run a lot slower If speed is an issue.. If I understand properly, you are summing a series. In that case OpenMP could be used .. it involves nothing more than a single line to achieve parallel speedup. I know OpenMP in C (my Fortran is a distant memory) and is easy to use. https://www.dartmouth.edu/~rc/classes/i ... lause.html Seems OMP is OK in quad http://forum.openmp.org/forum/viewtopic.php?f=3&t=1741 // Sanity check; Kahan summation maybe https://en.wikipedia.org/wiki/Kahan_summation_algorithm http://www.datasimfinancial.com http://www.datasim.nl Approach your problem from the right end and begin with the answers. Then one day, perhaps you will find the final question.. R. 
van Gulik ppauper Topic Author Posts: 70239 Joined: November 15th, 2001, 1:29 pm Re: quadruple precision complex erfc code (qerfc) in fortran ..that 1.504 only has 3 digits accuracy and hasn't improved that much. The new quad erfc has 32 digits accuracy, so there is a big gap between the two.. Is the linear algebra part of the computation unstable and the main source of error? Are you e.g. solving a system of equations or inverting a matrix? I'm solving a system $AY=B$ by row reduction rather than inverting $A$ the main source of error is I have an infinite sum which I am truncating I have $\sum_{n=-\infty}^{\infty}f_{n}(x)y_{n}=b(x)$ where the $f_{n}(x)$ are (known) functions and the $y_{n}$ are (unknown) coefficients. This equation is true for all $x$ in some range I truncate the series and evaluate at the $2N+1$ gridpoints $x_{m}$ $\sum_{n=-N}^{N}f_{n}(x_{m})y_{n}=b(x_{m})$ and write $A_{mn}=f_{n}(x_{m})$ and $B_{m}=b(x_{m})$ gives $\sum_{n=-N}^{N}A_{mn}y_{n}=B_{m}$ The larger $N$ is, the smaller the error, but there's a very flat spectrum: when I solve the truncated system and plot $y_{n}$ against $n$, the slope is very gradual Last edited by ppauper on January 31st, 2018, 3:30 pm, edited 1 time in total. ppauper Topic Author Posts: 70239 Joined: November 15th, 2001, 1:29 pm Re: quadruple precision complex erfc code (qerfc) in fortran cuch: don't you need a parallel machine (beowulf cluster) to run MPI (or at least to take advantage of the parallelization)? Cuchulainn Posts: 60835 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: Re: quadruple precision complex erfc code (qerfc) in fortran cuch: don't you need a parallel machine (beowulf cluster) to run MPI (or at least to take advantage of the parallelization)? AFAIR MPI uses a network of computers and works via message passing. We did some Monte Carlo stuff on MPI a while back. You can do MPI on a single machine  But MPI is the world of the Fortran titans in research. These days, multicore and manycore computers with shared memory are more common and easier to use e.g. using OpenMP using Fortran or C. Even laptops have 4 cores these days. What is the algorithmic pattern you wish to parallelise? (BTW Phelim Boyle is the inventor of MC and he was doing a PhD in relativity in our maths dept during my undergrad days.) N.B. the file is really a ps file. So you have to rename back to a .ps file and open in Ghostscript. Attachments boyleparallel2_this_is_a_ps_file.pdf http://www.datasimfinancial.com http://www.datasim.nl Approach your problem from the right end and begin with the answers. Then one day, perhaps you will find the final question.. R. van Gulik ppauper Topic Author Posts: 70239 Joined: November 15th, 2001, 1:29 pm Re: quadruple precision complex erfc code (qerfc) in fortran I've used a beowulf cluster in the past but don't have access to it now. You had to pay for time. It's a bunch of computers hooked up together, but the user logs on to the front end  and it only seems like you're using 1 computer. In the current code,  $\sum_{n=-N}^{N}f_{n}(x_{m})y_{n}=b(x_{m})$ I have 400 of  $b(x_{m})$ and $400^2$ of $f_{n}(x_{m})$ to evaluate which would scream on a parallel machine then there's the row operations to solve the linear system and again they could be parallelized. Cuchulainn Posts: 60835 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: Re: quadruple precision complex erfc code (qerfc) in fortran I've used a beowulf cluster in the past but don't have access to it now. You had to pay for time. 
It's a bunch of computers hooked up together, but the user logs on to the front end  and it only seems like you're using 1 computer. In the current code,  $\sum_{n=-N}^{N}f_{n}(x_{m})y_{n}=b(x_{m})$ I have 400 of  $b(x_{m})$ and $400^2$ of $f_{n}(x_{m})$ to evaluate which would scream on a parallel machine then there's the row operations to solve the linear system and again they could be parallelized. This looks like some kind of inverse transform or something..(quantised version of a Fredholm integral equation of the first kind?) It looks like 1. Compute the matrix A and vector b 2. Solve AY = b I reckon that step 1 could be parallelised in some way (each row independently of the others). Is that the rationale for using MPI? OpenMP maybe? You have a double loop and you can parallelise each iteration of the outer loop. Reduction variables might be needed. Here is an example of matrix multiplication in C++ (Fortran is similar). void omp_ParallelNestedMatrixMultiply(const NestedMatrix& m1, const NestedMatrix& m2, NestedMatrix& m3) { // Matrix multiplication // Assume 'compatibility' for multiplication of two matrices; OK since // matrices are square. double temp; #pragma omp parallel for for (long i = 0; i < m3.size(); ++i) { for (long j = 0; j < m3.size(); j++) { temp = 0.0; for (long k = 0; k < m1.size(); k++) { temp += m1[i][k] * m2[k][j]; } m3[i][j] = temp; } }; } http://www.datasimfinancial.com http://www.datasim.nl Approach your problem from the right end and begin with the answers. Then one day, perhaps you will find the final question.. R. van Gulik
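For prototyping the same pipeline outside Fortran, an arbitrary-precision library is one option. Below is a rough sketch with Python's mpmath (not the poster's code; the basis functions f_n and right-hand side b are placeholders chosen only so the system is well conditioned): it evaluates a complex erfc at quad-like precision and solves the truncated collocation system by LU factorisation.

```python
from mpmath import mp, mpf, mpc, erfc, exp, pi, matrix, lu_solve

mp.dps = 34  # roughly IEEE quadruple precision (~34 significant digits)

# Arbitrary-precision complex erfc -- the role played by qerfc in the Fortran code.
print(erfc(mpc(1.3, -0.7)))

# Truncated collocation system  sum_{n=-N}^{N} f_n(x_m) y_n = b(x_m).
N = 20
M = 2 * N + 1
xs = [mpf(2 * m) / M for m in range(-N, N + 1)]   # 2N+1 collocation points

def f(n, x):
    return exp(mpc(0, 1) * pi * n * x)            # placeholder Fourier basis, illustrative only

def b(x):
    return mpf(1) / (1 + x**2)                    # placeholder right-hand side

A = matrix(M, M)
B = matrix(M, 1)
for i, x in enumerate(xs):
    B[i] = b(x)
    for j, n in enumerate(range(-N, N + 1)):
        A[i, j] = f(n, x)

Y = lu_solve(A, B)  # row reduction of AY = B, as in the original approach
print(Y[N])         # the coefficient y_0
```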
2020-01-27 17:03:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5216943025588989, "perplexity": 2728.1760738360704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251700988.64/warc/CC-MAIN-20200127143516-20200127173516-00109.warc.gz"}
https://www.physicsforums.com/threads/when-are-negative-bases-raised-to-rational-powers-undefined.901793/
Homework Help: When are Negative Bases Raised to Rational Powers Undefined? 1. Jan 27, 2017 Saturnine Zero 1. The problem statement, all variables and given/known data I'm trying to understand negative bases raised to rational powers, when calculating principle roots for real numbers. I'm not worried about complex solutions numbers at this stage. I just can't find a concise explanation I can understand anywhere. I'm self learning as an adult so I don't have a teacher to ask. 2. Relevant equations When, in general is a negative base raised to a rational power undefined for real numbers? 3. The attempt at a solution $(-x)^{\frac {odd}{odd}}$ I have this as being a real number but reversing the sign of x $(-x)^{\frac {even}{odd}}$ I have this as being a real number but reversing the sign of x $(-x)^{\frac {odd}{even}}$ I have this as being undefined $(-x)^{\frac {even}{even}}$ I have this as being undefined But I am still confused. For instance the following example $(-3)^{\frac 2 4}$ I'm not sure how to think about it. $(-3)^{\frac 2 4}$ is this $(-3^2)^{\frac 1 4}$ which would be $9^{\frac 1 4}$ which would have a real root? Or would it be $(-3^{\frac 1 4})^{2}$ and since you can't take the 4th root of (-3) you can't square it so it is undefined? $(-3)^{\frac 3 4}$ I think I understand as it's either $(-3^3)^{\frac 1 4}$ which is trying to take an even root of an odd number, so undefined. Or it's $(-3^{\frac 1 4})^{3}$ which is trying to take an even root of an odd number and then can't be raised to the 3rd power, so is undefined. Am I on the right track or am I way off? edit: fixed the latex 2. Jan 28, 2017 Logical Dog In the real domain, even roots of negative numbers do not exist. When a negative number is raised to an even number it becomes positive positive. As far as I can see, there is only one case that the root wont be defined. I know not to put the standard rules for exponents when doing negative numbers, but not why they dont work Last edited: Jan 28, 2017 3. Jan 28, 2017 Stephen Tashi We wish to preserve the idea that $\frac{a}{b} = \frac{2a}{2b} = \frac{a/2}{b/2}$. We don't want any special restrictions to be placed on that arithmetic. For example, we don't want a restriction that says "$\frac{a}{b} = \frac{2a}{2b}$ except when the fraction appears in an exponent". If we preserve the concept that those differently written fractions are equal then we must say $(-x)^{ \frac{a}{b} } = (-x)^{\frac{2a}{2b}} = (-x)^{\frac{a/2}{b/2}}$ So you can't treat the situations $(-x)^{\frac{even}{even}},(- x)^{\frac{even}{odd}}, (-x)^{\frac{odd}{even}}, (-x)^{\frac{odd}{odd}}$ as different cases. For example , if we were to say that $(-3)^{\frac{1}{3}}$ is defined then we would have to apply that same definition to $(-3)^{\frac{2}{6}}$. You've illustrated the difficulty of defining the situation $(-x)^{\frac{even}{even}}$ unambiguously. All the cases can be turned into that case by multiplying the both the numerator and the denominator of the exponent by 2. Reducing the fraction in your example gives: $(-3)^{\frac{2}{4}} = (-3)^{\frac{1}{2}}$ and the latter expression is undefined in the arithmetic of real numbers. By the way, if you wish to write an equation involving a negative value, you don't need to represent the value as "$(-x)$", since a plain "$x$" can take on values like $x = -7$. 4. Jan 28, 2017 Stephen Tashi To that, I'll add the observation that textbooks aren't consistent in how they handle the rules of real number arithmetic. 
If you ask an expert a technical question about exponentiation of negative numbers in real number arithmetic, it's amusing how often the expert will begin to talk about the complex number system. The rules and definitions for the complex numbers are standardized and the easiest course for an expert is to take those rules and try to see which of them don't fall apart when applied only to real numbers. Most texts would agree that an expression like $(-3)^{\frac{1}{3}}$ is another notation for a root $\sqrt[3]{-3}$ and that the cube root of $-3$ exists in the real number system. However they don't define $(-3)$ to a fractional exponent when the numerator of the fraction isn't $1$. With such a convention, the reason that $(-3)^{\frac{1}{3}}$ is not equal to $(-3)^{\frac{2}{6}}$ is that the former expression is a notation for a number and the latter expression is undefined. 5. Jan 28, 2017 Saturnine Zero I think I have a better idea now. If I understand correctly, what needs to happen is that the fractional exponent needs to be expressed or understood in simplest terms first. The exponent should never be a ratio of two even numbers because a factor of 2 can always be factored out. There is only two possibilities then, a co-prime ratio with an even denominator/root which is undefined for a negative bases, or a co-prime ratio with an odd denominator/root which is defined in the real numbers. Is that an accurate assessment? 6. Jan 28, 2017 Saturnine Zero This is exactly what has happened in my book. And why I've been so puzzled! I'm looking forward to getting a handle on it using complex numbers. 7. Jan 28, 2017 Stephen Tashi You don't fully understand yet. We are dealing with a problem of definition. The question is "How is the notation $x^{\frac{a}{b}}$ defined in the arithmetic of the real numbers?". Definitions are a matter of tradition, not a matter of "true" or "false". We are asking question about how a notation is defined, not about a what a number is equal to - because until a notation is defined , it doesn't represent a particular number. I'm saying that the tradition for defining the notation $x^{\frac{a}{b}}$ in the arithmetic of the real numbers is not consistent from textbook to textbook. What do your text materials say? If you have two different books, they might say two different things. I think you are proposing the idea that we can define the notation $x^{\frac {a}{b}}$ when $a$ and $b$ are non-negative integers to mean: Reduce $\frac{a}{b}$ to its lowest terms. Let the reduced fraction be $\frac{p}{q}$. Find the $q$-th root of $x^p$ if it exists in the real number system. That is a possible definition for the notation. But then we have to worry about how this notation fits-in with other notation. For example, is it consistent with $( x^ {\frac{p}{q}})^2 = (x^{\frac{p}{q}})( x^{\frac{p}{q}}) = x^{(\frac{p}{q} + \frac{p}{q})} = x^{\frac{2p}{q}}$ ? If that notation holds then we would have $( (-3)^{\frac{1}{2}} )^2 = (-3)^{\frac{2}{2}}$ but the notation on left hand side does not define a number in the real number system and the notation on the right hand does represent a real number. It's not a simple matter to define a system of notation dealing exclusively with real numbers that is self-consistent and implements all the familiar algebraic manipulations that we want to do. It's not surprising that some elementary algebra books "throw up their hands" and just declare that $x^{\frac{a}{b}}$ is undefined when $x < 0$. 
They may implement the exception that $x^{\frac{1}{q}}$ is notation for $\sqrt[q]{x}$ and point out that this is an exception to the rule, or they may start using that notation and forget to point out that they are making an exception! The complex numbers are simpler in many ways, but added complexities appear - e.g. "branch points" of functions. Last edited: Jan 28, 2017 8. Jan 28, 2017 Saturnine Zero Thanks so much for your help Stephen I think that clears it up. I think it's a case of a book giving particular definitions of the notation and convention to get past a certain point with the assumed knowledge of that level which is "good enough" until such times a deeper explanation can be given using complex numbers. I guess this happens with a lot of topics, a student is given enough information to progress but there are always some subtleties lurking in the background which can only be understood after a certain point of future topics. I think this is a good example of the issue at hand and help me get my head around the fact it's an issue of defining notation.
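The convention the thread settles on (reduce the exponent to lowest terms first, then let the parity of the denominator decide) is easy to encode. A sketch of that convention in Python (one possible definition, as Stephen Tashi stresses, not the only one):

```python
from fractions import Fraction

def real_rational_power(x, p, q):
    """x**(p/q) over the reals, reducing p/q to lowest terms first.
    Returns None where this convention leaves the expression undefined."""
    e = Fraction(p, q)                 # Fraction reduces to lowest terms automatically
    p, q = e.numerator, e.denominator
    if x < 0 and q % 2 == 0:
        return None                    # even root of a negative base: undefined in R
    root = abs(x) ** (1.0 / q)         # principal real q-th root of |x|
    if x < 0:
        root = -root                   # q is odd here, so the real root is negative
    return root ** p

print(real_rational_power(-3, 2, 4))   # None: 2/4 reduces to 1/2, even denominator
print(real_rational_power(-3, 1, 3))   # about -1.442, the real cube root of -3
print(real_rational_power(-8, 2, 3))   # about 4.0, since (-8)**(2/3) = (cbrt(-8))**2
```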
2018-05-23 19:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.833993673324585, "perplexity": 255.01616743865273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865702.43/warc/CC-MAIN-20180523180641-20180523200641-00214.warc.gz"}
http://gatkforums.broadinstitute.org/gatk/discussion/1188/haploid-genomes
# Haploid genomes Member Posts: 19 edited July 2012 Dear GATK team, I know that in the past GATK was not suitable for haploid genomes. I wanted to ask if this possibly changed since then - and whether it is possible to use GATK for haploid genomes. Thanks a lot, Gilgi Post edited by Carneiro on Tagged: The ug in gatk2 can call haploid sequence natively now. You just set ploidy to 1. -- Mark A. DePristo, Ph.D. Co-Director, Medical and Population Genetics Broad Institute of MIT and Harvard • Member Posts: 19 Thanks! This is great news! I'll try to work with it. • Member Posts: 19 Hi, but after downloading I saw that the version doesn't seem to be 2 but: version 1.6-596-g3b9929c Is this the correct version? Thanks, Gilgi The 2.0 release should be coming out later today. Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT • Member Posts: 19 When trying to use the UnifiedGenotyper with --sample_ploidy 1 I get an error: MESSAGE: Incorrect genotype calculation model chosen. Only [POOLSNP|POOLINDEL|POOLBOTH] supported with this walker if sample ploidy != 2 What does this mean? My data isn't pool, I have individual (barcoded) haploid sequenced strains. --genotype_likelihoods_model POOLBOTH But then I get: MESSAGE: Incorrect AF Calculation model. Only POOL model supported if sample ploidy != 2 I tried to look for the answer in the guide - without success. • Member Posts: 19 Thanks a lot!!! • Member Posts: 1 I've used the -pnrm POOL option but still getting the same error as gilgi. So I have java -Xmx30g -jar /usr/local/gatk2/GenomeAnalysisTK.jar -T UnifiedGenotyper -R ref -I bam -I bam -pnrm POOL -polidy 1 -o vcf Any help? Ali We have changed the arguments so that they more accurately reflect what they are doing: So you'll want e.g. -pnrm GeneralPloidySNP Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT • Member Posts: 14 With v2.0-39 I had been using -ploidy 1 -pnrm POOL -glm POOLSNP. 
Can you confirm that the call for SNPs/indels in haploid genomes as of v2.1 would now be -ploidy 1 -pnrm EXACT -glm GeneralPloidySNP? Is the -pnrm POOL option now defunct?

You just need -ploidy 1. "-pnrm EXACT" will work but there's no other option. "-glm GeneralPloidySNP" will not work - you need either SNP, INDEL or BOTH.

• Member Posts: 14 Thanks for your reply. Under what circumstance should -glm GeneralPloidySNP/GeneralPloidyINDEL be used?

• Member Posts: 3 I am experiencing a problem with the ploidy 1 option. Having used the GATK2 UnifiedGenotyper with the params --sample_ploidy 1 --genotype_likelihoods_model BOTH -rf BadCigar I get the following line in a VCF file (see sample in bold):

Staphylococcus 1553115 . A G 24454.01 . AC=13;AF=0.813;AN=16;BaseQRankSum=1.072;DP=1040;Dels=0.00;FS=32.822;HaplotypeScore=3.3463;MLEAC=13;MLEAF=0.813;MQ=40.20;MQ0=47;MQRankSum=-10.543;QD=32.13;ReadPosRankSum=-1.148;SB=-9.076e+03 GT:AD:DP:GQ:MLPSAC:MLPSAF:PL 1:0,29:29:99:1:1.00:1015,0 1:0,62:62:99:1:1.00:2053,0 1:0,106:106:99:1:1.00:3210,0 1:0,102:102:99:1:1.00:3305,0 1:0,88:88:99:1:1.00:2750,0 1:0,41:41:99:1:1.00:1324,0 1:0,76:76:99:1:1.00:2448,0 1:0,39:39:99:1:1.00:1303,0 0:64,40:104:99:0:0.00:0,1334 1:0,41:41:99:1:1.00:1373,0 1:0,49:49:99:1:1.00:1668,0 0:72,50:122:99:0:0.00:0,1258 1:0,59:59:99:1:1.00:1852,0 1:0,38:38:99:1:1.00:1192,0 1:0,31:31:99:1:1.00:961,0 0:53,0:53:99:0:0.00:0,1633

The sample in bold is called as WT (genotype 0) with a high GQ despite there being 72 reads of genotype 0 and 50 of genotype 1. Examining the BAM file suggests that this is a mapping error in a repetitive phage region. If I set ploidy to 2, the equivalent line in the resulting VCF file is:

Staphylococcus 1553115 . A G 24788.02 . AC=28;AF=0.875;AN=32;BaseQRankSum=0.947;DP=1040;Dels=0.00;FS=32.822;HaplotypeScore=3.3463;InbreedingCoeff=0.4286;MLEAC=28;MLEAF=0.875;MQ=40.20;MQ0=47;MQRankSum=-10.096;QD=25.11;ReadPosRankSum=-1.177;SB=-9.871e+03 GT:AD:DP:GQ:PL 1/1:0,29:29:81:986,81,0 1/1:0,62:62:99:1895,156,0 1/1:0,106:106:99:2992,247,0 1/1:0,102:102:99:3169,268,0 1/1:0,88:88:99:2452,193,0 1/1:0,41:41:99:1243,99,0 1/1:0,76:76:99:2283,193,0 1/1:0,39:39:99:1233,105,0 0/1:64,40:104:99:886,0,1706 1/1:0,41:41:99:1298,108,0 1/1:0,49:49:99:1581,129,0 0/1:72,50:122:99:1235,0,2126 1/1:0,59:59:99:1649,132,0 1/1:0,38:38:87:1065,87,0 1/1:0,31:31:69:821,69,0 0/0:53,0:53:99:0,138,1588

As can be seen from the bold text, the same position is now called as a heterozygote, which, based on the number of reads mapping, would be likely except for the fact that this is a bacterial haploid genome. Previously I would have discarded this since the heterozygous call indicates mis-mapping, as the BAM file confirms. I had been hoping to use the sample_ploidy option set to 1 for bacterial genomes, but this result concerns me. I could obviously filter based on AD, but I wonder why the sample was given a high GQ when the ploidy is set to 1 and the AD suggests the call of genotype 0 should be doubted. Any suggestions on what is going on here?? Many thanks Anthony

The code is actually doing what it's designed to do - when you're using -ploidy 1, there are only 2 possible genotype assignments, and the assignment "0" is by far the most likely one even if 40% of your reads have another base. In the default diploid case, the most likely genotype is the 0/1 one, which is exactly what you're getting.
Even in the haploid case, there's considerable evidence that favors the "0" genotype (plus the population prior), so you'll get a high value of GQ anyway - your PL values of 0,1258 indicate that, statistically, it's 10^125 likelier that your data came from a reference site than from an alt site based on all the available reads. • Member Posts: 14 edited January 2013 In the current documentation (v2.3-9) for the Unified Genotyper there is a caveat stating "We only handle diploid genotypes". Has something changed or can -ploidy still be safely set to 1?
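To see concretely how a PL pair like 0,1258 translates into that kind of confidence, here is a small illustrative Python sketch (my own addition, not GATK code; pl_to_confidence is a hypothetical helper that simply applies the standard Phred scaling, PL = -10*log10(likelihood), normalized so the best genotype has PL = 0):

```python
import math

def pl_to_confidence(pl_ref, pl_alt):
    """Turn a pair of Phred-scaled genotype likelihoods (PL) into a
    likelihood ratio and an approximate genotype quality (GQ)."""
    lik_ref = 10 ** (-pl_ref / 10)     # relative likelihood of the "0" genotype
    lik_alt = 10 ** (-pl_alt / 10)     # relative likelihood of the "1" genotype
    ratio = lik_ref / lik_alt          # how much more likely "0" is than "1"
    gq = abs(pl_alt - pl_ref)          # GQ is capped at 99 in VCF output
    return ratio, min(gq, 99)

# The sample discussed above: PL = 0,1258
ratio, gq = pl_to_confidence(0, 1258)
print(f"ref vs alt likelihood ratio ~ 10^{math.log10(ratio):.0f}, reported GQ = {gq}")
```

Running this prints a ratio on the order of 10^126 and a GQ of 99, which matches both the explanation above and the GQ value seen in the VCF line.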
2017-03-24 04:15:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26400092244148254, "perplexity": 13534.897150720772}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187690.11/warc/CC-MAIN-20170322212947-00224-ip-10-233-31-227.ec2.internal.warc.gz"}
https://codeforces.com/topic/75843/en1
Maximum number of pairwise decrements possible in three numbers
Revision en1, by liveoverflow, 2020-03-29 00:21:51

Given are 3 non-negative integers a, b, c. In a single operation, we subtract 1 from two of the integers, but only if neither of them becomes negative. We have to find the maximum number of operations that can be performed until no further operation is possible.

Constraints: 1 <= a, b, c <= 10^18, 1 <= test-cases <= 10^5

Examples:
(1,2,3) -> (1,1,2) -> (1,0,1) -> (0,0,0), ans is 3
(1,1,8) -> (1,0,7) -> (0,0,6), ans is 2

Any approach or proof will be highly helpful. I have actually written code that works as far as I know, but I don't know if it's completely correct. Thanks

~~~~~
#include <bits/stdc++.h>
using namespace std;
#define fastio ios_base::sync_with_stdio(0); cin.tie(0)
#define LL long long
int main() {
    fastio;
    int t;
    cin >> t;
    while (t--) {
        LL a[3];
        cin >> a[0] >> a[1] >> a[2];
        sort(a, a + 3);
        if (a[0] + a[1] >= a[2]) {
            // the two smaller values together cover the largest: floor(sum / 2) operations
            LL ans = a[2] + (a[0] + a[1] - a[2]) / 2;
            cout << ans;
        } else {
            // the largest value dominates: every operation must use it
            LL ans = a[1] + min(a[0], a[2] - a[1]);
            cout << ans;
        }
        cout << "\n";
    }
}
~~~~~
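One way to gain confidence in the formula (my own check, not part of the original post) is to compare it against an exhaustive search on small inputs. The helper below re-expresses the same case split as the C++ above in Python; the else branch simplifies to x + y, which is what the C++ expression evaluates to in that case:

```python
from functools import lru_cache
from itertools import combinations

def formula(a, b, c):
    # Same case split as the C++ solution above.
    x, y, z = sorted((a, b, c))
    if x + y >= z:
        return (x + y + z) // 2
    return x + y          # equivalent to y + min(x, z - y) when x + y < z

@lru_cache(maxsize=None)
def brute(state):
    # Try every pair of positive entries; take the best continuation.
    best = 0
    for i, j in combinations(range(3), 2):
        if state[i] > 0 and state[j] > 0:
            nxt = list(state)
            nxt[i] -= 1
            nxt[j] -= 1
            best = max(best, 1 + brute(tuple(nxt)))
    return best

for a in range(8):
    for b in range(8):
        for c in range(8):
            assert formula(a, b, c) == brute((a, b, c)), (a, b, c)
print("formula matches brute force for all a, b, c < 8")
```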
2020-07-10 00:50:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6506420373916626, "perplexity": 7839.49137958393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902377.71/warc/CC-MAIN-20200709224746-20200710014746-00131.warc.gz"}
https://gmatclub.com/forum/if-x-and-y-are-integers-and-2-x-1-2-y-2-5-then-x-y-can-take-how-306777.html
# If x and y are integers and 2*x^(1/2) + y^2 < 5, then x*y can take how

Math Expert Joined: 02 Sep 2009 Posts: 64111
30 Sep 2019, 04:50
Difficulty: 95% (hard). Question Stats: 34% (02:31) correct, 66% (02:15) wrong, based on 64 sessions.

If x and y are integers and $$2\sqrt{x} + y^2 < 5$$, then x*y can take how many different values?
(A) 4 (B) 5 (C) 6 (D) 7 (E) 8

Math Expert Joined: 02 Aug 2009 Posts: 8587
30 Sep 2019, 05:47
$$2\sqrt{x} + y^2 < 5$$
x has to be non-negative...
1) When x = 0, $$0+y^2<5$$... irrespective of the allowed value of y, xy will be 0.
2) When x = 1, $$2*1+y^2<5.....y^2<3$$, so y = +1, -1 or 0... xy will be 0, 1 and -1.
3) When x = 4, $$4+y^2<5$$, so y will be 0... xy will be 0.
4) When x is 2 or 3, similarly, y can be 1 or -1... possible values of xy = 2, -2, 3 and -3.
Different values of xy = 0, 1, -1, 2, -2, 3, -3... so 7 values.
D

SVP Joined: 24 Nov 2016 Posts: 1549 Location: United States
29 Mar 2020, 14:18
$$2\sqrt{x}+y^2<5…y=(negative,zero,positive)$$
$$\sqrt{anything.in.gmat}≥0: x=non.negative.integer$$
$$y=0:2\sqrt{x}+y^2<5…2\sqrt{x}<5…\sqrt{x}<2.5$$
$$x=\{0,1,...,6\}…xy=x*0=\{0\}…(always 0)$$
$$y=1:2\sqrt{x}+y^2<5…2\sqrt{x}<4…\sqrt{x}<2$$
$$x=\{0,1,2,3\}…xy=x*1=\{0,1,2,3\}$$
$$y=-1:2\sqrt{x}+y^2<5…2\sqrt{x}<4…\sqrt{x}<2$$
$$x=\{0,1,2,3\}…xy=x*(-1)=\{0,-1,-2,-3\}$$
xy={-1,-2,-3,0,1,2,3}=7
Ans (D)
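A quick brute-force enumeration (my own check, not from the thread) confirms the count of 7 distinct products:

```python
# Enumerate integer pairs (x, y) satisfying 2*sqrt(x) + y^2 < 5 and collect x*y.
import math

products = set()
for x in range(0, 10):          # 2*sqrt(x) < 5 already fails for x >= 7
    for y in range(-3, 4):      # y^2 < 5 already fails for |y| >= 3
        if 2 * math.sqrt(x) + y * y < 5:
            products.add(x * y)

print(sorted(products))  # [-3, -2, -1, 0, 1, 2, 3]
print(len(products))     # 7
```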
2020-05-25 18:48:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8023179173469543, "perplexity": 3675.708935494147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389309.17/warc/CC-MAIN-20200525161346-20200525191346-00335.warc.gz"}
https://electronics.stackexchange.com/questions/299421/not-gate-with-3-leds-attached-after-npn-transistor
# Not gate with 3 LEDs attached after NPN transistor

I want to build a circuit that turns 3 LEDs (L1,L2,L3) on if the button isn't pressed and turns a different set of 3 LEDs (L4,L5,L6) on when the button is pressed. Only one of the sets of LEDs should be lit at a time, so pressing or releasing the button should turn off the currently lit set. I built and assembled this schematic: (schematic image omitted) When I don't press the button, L1,L2,L3 light up properly. When I press the button, L4,L5,L6 light up, but L1,L2,L3 don't turn off. How can I go about getting L1,L2,L3 to turn off when the button is pressed?

The simplest way is to use a 1PDT (single-pole, double-throw) push button. Otherwise you probably need another transistor $Q2$ that is normally biased on and is turned off when the original one $Q1$ turns on. simulate this circuit – Schematic created using CircuitLab You CAN tie $Q1$ to $R2$ via a diode $D7$, but then $R2$ needs to be 1/2W and $Q1$ needs to dump a lot more current than you need. simulate this circuit

• That power-waster version will have all LEDs turned on at the same time when the button is not pressed. – 12Lappie Apr 13 '17 at 19:22
• @lappie oops..good point... That's what I get for rushing.. Fixed. – Trevor_G Apr 13 '17 at 19:24
• Wow, big detailed schematics, good stuff...you can tell you've recently started working from home again, Trevor :-) :-) – TonyM Apr 13 '17 at 19:42
• @TonyM.. OH YES... Working!... I knew I was supposed to be doing something else... LOL ;-) – Trevor_G Apr 13 '17 at 19:44
• You're contributing good stuff to society's up and coming new generation, you can't get a more valuable use of time than this :-) (don't try telling the client that when they check in on things) – TonyM Apr 13 '17 at 20:02
2020-01-23 04:36:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20542342960834503, "perplexity": 2081.9739154371164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608295.52/warc/CC-MAIN-20200123041345-20200123070345-00488.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/8971
## Files in this item
- ImpactofEarthquakesontheCentralUSA.pdf (78Mb) - Full Report (PDF)
- ImpactofEarthquakesontheCentralUSA - Main Body.pdf (3Mb) - Main Body of Report (PDF)
- ImpactofEarthqu ... lUSA - Appendices Only.pdf (75Mb) - Appendices Only (PDF)

## Description
Title: Impact of Earthquakes on the Central USA
Author(s): Elnashai, Amr S.; Cleveland, Lisa J.; Jefferson, Theresa; Harrald, John
Contributor(s): Spencer, Billie F., Jr.; Masud, Arif; Pineda, Omar; Suarez, Rob; Chang, Liang; Unen, Can; Gençtürk, Bora; Frankie, Thomas; Lee, Jong Sung; Barbuto, Daniel; Challand, Sarah; Vlna, Jessica; Mekala, Sindhura; Alrawi, Nasiba; Harrald, John; Fiedrich, Frank; Johannes, Tay; Madhukar, Ashutosh; Mexted-Freeman, Clinton; Sener, Sebnem; CUSEC; IEM; Army Corps of Engineers; Bauer, Robert; Basuch, Douglass; Chesla, Kirk; Escalona, Eduardo
Subject(s): FEMA Phase I; Mid-America Earthquake (MAE) Center; Catastrophic Earthquake Response Planning
Abstract: The region of potential impact due to earthquake activity in the New Madrid Seismic Zone (NMSZ) comprises eight states: Alabama, Arkansas, Illinois, Indiana, Kentucky, Mississippi, Missouri and Tennessee. Moreover, the Wabash Valley Seismic Zone (WVSZ) in southern Illinois and southeast Indiana and the East Tennessee Seismic Zone in eastern Tennessee and northeastern Alabama constitute a significant risk of moderate-to-severe earthquakes throughout the central region of the USA. The investigation summarized in this report includes earthquake impact assessment scenarios completed using HAZUS-MH MR2 for several potential earthquake scenarios affecting the aforementioned eight-state region. The NMSZ includes eight scenarios - one for each state - whilst the WVSZ scenario in Indiana and the ETSZ scenario in Alabama complete the suite of ten total scenarios. These ten scenarios are designed to provide scientifically credible, worst-case damage and loss estimates for the purposes of emergency planning, response and recovery. The earthquake impact assessments presented in this report employ an analysis methodology comprising three major components, namely hazard, inventory and fragility (or vulnerability). The hazard characterizes not only the shaking of the ground but also the consequential transient and permanent deformation of the ground due to strong ground shaking. The inventory comprises all assets in a specified region, including the built environment and population data. Fragility or vulnerability functions relate the severity of shaking to the likelihood of reaching or exceeding damage states (light, moderate, extensive and near-collapse, for example). Social impact models are also included in the current assessment methodology and employ infrastructure damage results to estimate the effects on populations subjected to the earthquake. Whereas the modeling software used (HAZUS-MH MR2, FEMA-NIBS, 2006) provides default values for all of the above, most of these default values were replaced by components of traceable provenance and higher reliability than the default data, as described below. The hazard employed in this investigation includes ground shaking for three seismic zones and various events within those zones. The NMSZ consists of three fault segments: the northeast segment, the reelfoot thrust or central segment, and the southwest segment. Each segment is modeled with a deterministic, magnitude 7.7 (Mw7.7) earthquake caused by a rupture over the entire length of the segment.
The employed magnitude was provided by the US Geological Survey (USGS). The NMSZ represents the first of three hazard events utilized in this report. Two further deterministic events are also included, namely a magnitude Mw7.1 earthquake in the Wabash Valley Seismic Zone (WVSZ) and a magnitude Mw5.9 earthquake in the East Tennessee Seismic Zone (ETSZ). Permanent ground deformation is characterized by a liquefaction susceptibility map that provides data for part of the eight states. Full liquefaction susceptibility maps for the entire region are still under development and will be utilized in subsequent phases of the current project. Inventory is enhanced through the use of the Homeland Security Infrastructure Program (HSIP) 2007 Gold Dataset (NGA Office of America, 2007). This dataset contains various types of critical infrastructure that are key inventory components for earthquake impact assessment. Transportation and utility facility inventories are improved while regional natural gas and oil pipelines are added to the inventory, alongside some high potential loss facility inventories. Additional essential facilities data were used for the State of Illinois via another impact assessment project at the Mid-America Earthquake Center, funded by FEMA and the Illinois Emergency Management Agency. Existing HAZUS-MH MR2 fragility functions are utilized in this study and default values are used to determine damage likelihoods for all infrastructure components. The results indicate that the State of Tennessee incurs the highest level of damage and social impacts. Over 250,000 buildings are moderately or more severely damaged, over 260,000 people are displaced and well over 60,000 casualties (injuries and fatalities) are expected. Total direct economic losses surpass $56 billion. The State of Missouri also incurs substantial damage and loss, though estimates are less than those in Tennessee. Well over 80,000 buildings are damaged, leaving more than 120,000 people displaced and causing over 15,000 casualties. Total direct economic losses in Missouri reach nearly $40 billion. Kentucky and Illinois also incur significant losses, with total direct economic losses reaching approximately $45 and $35 billion, respectively. The State of Arkansas incurs nearly $19 billion in direct economic loss while the State of Mississippi incurs $9.5 billion in direct economic losses. States such as Indiana and Alabama experience limited damage and loss from NMSZ events, with approximately $1.5 and $1.0 billion, respectively. Noting that experience confirms that the indirect economic loss due to business interruption and loss of market share, amongst other factors, is at least as high if not much higher than the direct economic losses, the total economic impact of a series of NMSZ earthquakes is likely to constitute by far the highest economic loss due to a natural disaster in the USA. The contents of this report provide the various assumptions used to arrive at the impact estimates, detailed background to the above figures, and a breakdown of the figures per sector at the county and state levels. The main body of the report gives state-level impact assessments, whilst the Appendices give earthquake impact modeling results at the county level. The results are designed to provide emergency managers and agencies with information required to establish response plans based on likely impacts of plausible earthquakes in the central USA.
Issue Date: 2008-09-03 Series/Report: MAE Center Report 08-02 Genre: Technical Report Type: Text Language: English URI: http://hdl.handle.net/2142/8971 Publication Status: unpublished Peer Reviewed: not peer reviewed Sponsor: Army W9132T-06-02 Date Available in IDEALS: 2008-09-03 
2015-08-31 19:58:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3961561322212219, "perplexity": 10421.86586240558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066586.13/warc/CC-MAIN-20150827025426-00077-ip-10-171-96-226.ec2.internal.warc.gz"}
https://statistics.berkeley.edu/tech-reports/698
# Ubiquity of synonymity: almost all large binary trees are not uniquely identified by their spectra or their immanantal polynomials

Report Number 698
Authors: Frederick A. Matsen and Steven N. Evans

Abstract: There are several common ways to encode a tree as a matrix, such as the adjacency matrix, the Laplacian matrix (that is, the infinitesimal generator of the natural random walk), and the matrix of pairwise distances between leaves. Such representations involve a specific labeling of the vertices or at least the leaves, and so it is natural to attempt to identify trees by some feature of the associated matrices that is invariant under relabeling. An obvious candidate is the spectrum of eigenvalues (or, equivalently, the characteristic polynomial). We show for any of these choices of matrix that the fraction of binary trees with a unique spectrum goes to zero as the number of leaves goes to infinity. We investigate the rate of convergence of the above fraction to zero using numerical methods. For the adjacency and Laplacian matrices, we show that the a priori more informative immanantal polynomials have no greater power to distinguish between trees.
2021-05-09 13:07:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147451519966125, "perplexity": 312.2638907102071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00003.warc.gz"}
https://www.numerade.com/questions/for-the-reaction-mathrmcaco_3s-rightleftharpoons-mathrmcaosmathrmco_2g-calculate-the-equilibrium-p_m/
## Video Transcript

We're given this reaction, CaCO3(s) ⇌ CaO(s) + CO2(g), and we want to determine the equilibrium partial pressure of carbon dioxide gas at 25 degrees Celsius. We can begin by writing out the expression for the equilibrium constant Kp. When we write that expression, solid species do not participate in it. If we had aqueous species we would be looking for a value of Kc, since we would have concentration values, but the only non-solid in this equation is the carbon dioxide gas, so we use Kp, since gas concentrations are represented with partial pressure values. To write out the Kp expression, we take the partial pressure of each gas on the product side of the reaction, raise it to the power of its stoichiometric coefficient, multiply those quantities together for all the gases in the products, and then divide by the corresponding product for all the gases in the reactants. Since there is just one gas on the product side, CO2, with a stoichiometric coefficient of one, the equilibrium expression Kp is simply equal to the partial pressure of CO2.

Next, there is an equation that gives the equilibrium constant from the change in Gibbs free energy of the reaction, so if we solve for that equilibrium constant, we also know the partial pressure of CO2 under these conditions. So we need to solve for the change in Gibbs free energy of the reaction. We first find the total standard Gibbs free energy of formation of the products and then subtract the corresponding total for the reactants. Starting with the product side, we have one mole of CaO and one mole of CO2, each multiplied by its standard Gibbs free energy of formation, which we can look up in the appendix, so that the units of moles cancel out to yield total units of energy for the products. For the reactants, we have one mole of CaCO3; multiplying by its standard Gibbs free energy of formation gives the total for the reactants. Taking the product total minus the reactant total gives the final answer for the ΔG of reaction.

Now we can plug that into the equation we are using to solve for the partial pressure of CO2: Kp equals e to the power of negative ΔG over RT. The ΔG we just solved for is 130.9 kJ/mol, so we multiply it by 1000 to get joules per mole, letting the units of the gas constant cancel, divide by R = 8.314 J/(mol·K), and multiply R by the temperature, which we're told is 298 Kelvin. Plugging that all in, and remembering from the relationship above that the equilibrium constant equals the partial pressure of CO2, gives us what we're ultimately trying to find.
So we can see that the partial pressure of CO2 at equilibrium at 25 degrees Celsius for this system is equal to about 1.13 times 10 to the negative 23rd power atmospheres.
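As a numeric check of the arithmetic described above (using the ΔG and temperature quoted in the transcript), a short Python calculation reproduces the same order of magnitude:

```python
import math

delta_g = 130.9e3   # J/mol, the reaction's standard Gibbs free energy change quoted above
R = 8.314           # J/(mol*K), gas constant
T = 298             # K, the stated temperature (25 degrees Celsius)

Kp = math.exp(-delta_g / (R * T))
print(f"Kp = P(CO2) ~ {Kp:.2e} atm")   # ~1.1e-23 atm, matching the transcript
```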
2020-08-11 09:42:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114762902259827, "perplexity": 391.6188977075082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738746.41/warc/CC-MAIN-20200811090050-20200811120050-00503.warc.gz"}
https://minireference.com/blog/2012/12/
### New landing page If you visit minireference.com you will now see a new design which conforms to the standard “book product webpage” format. I am very pleased with the result, which was an attempt to mimic other good book product pages. The design process took me about three weeks. Most of the time was spent on the copy editing. The ability to “put stuff on the page” you have with html + css is much more powerful than LaTeX. And with webfonts becoming the norm now, one can make very beautiful sites very quickly. Check it out: minireference.com ### The web we still have The facebookification of the Internet brings with it a stupidification of the content that people produce and share. The old web was about blog posts (long, thought-out pieces of writing) which automatically form links to each other (through trackback) so that a conversation can emerge without the need for a centralized service. Trackbacks are awesome! For example, I can make this post appear on quora if I embed some javascript (their embed code) which will ping the quora server: We need to cherish this kind of distributed technology, because it is the way out of the walled gardens. They are the living proof that you can have social without central. LDA, BTW, is short for Latent Dirichlet Allocation which is a powerful way to classify documents according to the topics they contain. ### Strang lectures on linear algebra Professor Gilbert Strang’s video lectures on Linear Algebra have been recommended to me several times. I am very impressed with the first lecture. He presents all the important problems and concepts of LA in the first lecture and in a completely as-a-matter-of-fact way. The lecture presents the problem of solving n equations in n unknowns in three different ways: the row picture, the column picture and the matrix picture. In the row picture, each equation represents a line in the xy plane. When “solving” these equations simultaneously, we are looking for the point (x,y) which lies on both lines. In the case of the two lines he has on the board (2x-y=0 and -x+2y=3) the solution is the point x=1, y=2. The second way to look at the system of equations is to think of the column of x coefficients as a vector and to think of the column of y coefficients as another vector. In the column picture, solving the system of equations requires us to find the linear combination of the columns (i.e., $x$ times the first column plus $y$ times the second column) that gives us the vector on the right-hand side. If students start off with this picture, they will be much less mystified (as I was) by the time they start to learn about the column space of matrices. As a side benefit of this initial brush with linear algebra in the “column picture”, Prof. Strang is also able to present an intuitive picture for the formula for the product between a matrix and a vector. He says “Ax is the combination of the columns of A.”  This way of explaining the matrix product is much more intuitive than the standard dot-product-of-row-times-column approach. Who has seen them dot products? What? Why? WTF? I will definitely include the “column picture” in the introductory chapter on linear algebra in the book. In fact, I have been wondering for some time how I can explain what the matrix product Ax means. I want to talk about A as the linear transformation TA so that I can talk about the parallels between $x$, $f:R \to R$, $f^{-1}$ and $\vec{v}$, $A$, $A^{-1}$. Now I know how to fix the intro section! Clearly you are the master of the subject.
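To make the column picture concrete with the same system from the lecture (2x - y = 0 and -x + 2y = 3), here is a small NumPy sketch of my own (not from the lecture or the book):

```python
import numpy as np

A = np.array([[2, -1],
              [-1, 2]])
b = np.array([0, 3])

x = np.linalg.solve(A, b)      # row picture: the intersection point of the two lines
print(x)                       # [1. 2.]

# Column picture: A @ x is x[0] times the first column plus x[1] times the second.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
print(np.allclose(combo, b))   # True
```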
It is funny that what started as a procrastination activity (watching a youtube video to which I just wanted to link) led to an elegant solution to a long-standing problem which was blocking my writing. Sometimes watching can be productive 😉  Thank you Prof. Strang! ### Target revenue I did a little calculation regarding what kind of sales figures I would need to make it to the 100k income range (which is my current standard for “success” in a technical field). If I can make deals with 100 universities, and ship 100 copies of the book to each of them, then I am done: I think it is totally doable with the MATH and PHYSICS title alone within the next couple of years. So fuck the job world. I am doing my own thing! ### Showing off with python 2:57AM on a Monday. I have to be up at 8AM. The faster I get the job done the more sleep I get. Sounds like the kind of thing to motivate a person. TASK: Parse an access.log file and produce a page-visit trace for each visitor. Ex: 11.22.33.90 on Monday at 3pm (Montreal, Firefox 4, on Mac OS X): /contents (stayed for 3 secs) /derivatives (stayed for 2m20sec) /contents (6 secs) /derivative_rules (1min) /derivative_formulas (2min) end I had already found some access.log parsing code, and set up a processing pipeline from last time I wanted to work on this. Here is what we have so far. 3:45AM. Here is the plan. All the log entries are in a list called entries, which I will now sort and split by IP (see the sketch at the end of this section). 4:15AM. Done. Though I have to clean up the output some more. ### Available on lulu.com We are proud to announce that the Concise MATH & PHYSICS Minireference is now available on the lulu.com book store. After five years of low intensity work and two years of high intensity work, the book has reached a sufficient quality in the writing, content and narrative flow so that we are ready to show it to the world. Freshman-level math and physics in 300 pages. We at Minireference Co. are here to fix the textbook industry. ### December launch I have been promoting and selling the book for the past two weeks at McGill and I have received a lot of good feedback from students. There is no point in thinking about business ideas — you have to go out and talk to clients. In just two weeks, I now have a title (thanks to my friend Adriano), a product line (I made a mechanics only version too) and a good idea of which pitches work and which do not. ### Product NO BULLSHIT guide to MATH & PHYSICS. In just 300 pages, this book covers Precalculus, Mechanics, Calculus I (derivatives) and Calculus II (integrals). All the material is explained in a clear conversational tone. 100% Math and Physics, No filler. We sold out today. Let's see what happens tomorrow.
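Here is the sketch referred to in the "Showing off with python" section above: a minimal, hypothetical version of the sort-and-split-by-IP step. The field names ("ip", "time", "path") are assumptions about what the parsed entries look like, not the actual format used in the blog's pipeline.

```python
from collections import defaultdict

def visit_traces(entries):
    """Group parsed log entries by IP, order each visitor's hits by time,
    and report time-on-page as the gap until that visitor's next hit."""
    by_ip = defaultdict(list)
    for e in entries:                  # assumed shape: {"ip": str, "time": datetime, "path": str}
        by_ip[e["ip"]].append(e)

    traces = {}
    for ip, hits in by_ip.items():
        hits.sort(key=lambda e: e["time"])
        trace = []
        for cur, nxt in zip(hits, hits[1:]):
            stayed = (nxt["time"] - cur["time"]).total_seconds()
            trace.append((cur["path"], stayed))
        trace.append((hits[-1]["path"], None))   # duration of the last page is unknown
        traces[ip] = trace
    return traces
```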
2021-10-26 08:12:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41672301292419434, "perplexity": 1107.593708277895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00099.warc.gz"}
https://math.stackexchange.com/questions/3437975/prove-that-x3-is-continuous/3437998
Prove that $|x^3|$ is continuous. I want to do it with the epsilon-delta definition. So for $$\forall x\in D(f)$$ and $$\forall \epsilon >0$$ $$\exists \delta>0$$ $$\forall y$$ such that $$|y-x|<\delta$$ $$\implies$$ $$|f(y)-f(x)|<\epsilon$$. Let $$|y-x|<\delta$$, and $$|y^3-x^3|=|y-x||y^2+yx+x^2|\leq|y-x||y+x|^2$$ $$\implies \delta|y-x|^2 =\epsilon$$, and here I'm stuck. • $|x^2+xy+y^2| \le |x+y|^2$ is not true in general – David Peterson Nov 16 '19 at 13:39 • Hint: the composition of continuous functions is continuous. Your function is the composition of which two functions? – Alexander Geldhof Nov 16 '19 at 13:45 • $x^2$ and $|x|$? – Elekhey Nov 16 '19 at 13:49 • Hint: You define $f(x)=x^3$ and $g(x)=|x|$, which are continuous. Then $g(f(x))=$? – Alex Pozo Nov 16 '19 at 13:49 • Ohhh I get it. Thank you! – Elekhey Nov 16 '19 at 13:51 We know that $$g(x) = x^{3}$$ is continuous. Let us show that the function $$f(x) = |x|$$ is continuous at an arbitrary point $$a \in \mathbb{R}$$. To do this, let $$\delta = \epsilon$$ and suppose $$|x-a| \le \delta$$. Then, because of the triangle inequality, we have: $$||x|-|a||\le |x-a|\le \epsilon$$ which proves $$f$$ is continuous at $$a$$. Now, we know that the composite of continuous functions is continuous, so if $$h(x) = |x^{3}|$$ then: $$h(x) = (f\circ g)(x)$$
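For completeness, one standard way to repair the direct estimate where the asker got stuck (a textbook bound, not taken from the thread): first restrict $$|y-x|<1,$$ so that $$|y|\le |x|+1$$ and hence $$|y^2+yx+x^2|\le |y|^2+|y||x|+|x|^2\le 3(|x|+1)^2.$$ Then $$|y^3-x^3|\le 3(|x|+1)^2\,|y-x|<\epsilon \quad\text{whenever}\quad |y-x|<\delta=\min\!\left(1,\ \frac{\epsilon}{3(|x|+1)^2}\right),$$ and the absolute value is handled exactly as in the answer, via $$\big||y^3|-|x^3|\big|\le |y^3-x^3|.$$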
2020-06-01 13:40:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8996855616569519, "perplexity": 162.4762261003653}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347417746.33/warc/CC-MAIN-20200601113849-20200601143849-00547.warc.gz"}
http://mathhelpforum.com/trigonometry/204579-terminal-side-print.html
# terminal side
• October 3rd 2012, 10:57 AM RiderMind Hello. I am having difficulty with this problem: Find the point on the terminal side of θ = -3pi/4 that has an x coordinate of -1. I was hoping that someone here would be able to help me out. I am actually trying to figure out what I am doing, so if you could explain how you found the answer, I would really appreciate it. Thank you so much guys.
• October 3rd 2012, 11:01 AM MarkFL Re: terminal side θ = ?
• October 3rd 2012, 11:02 AM RiderMind Re: terminal side oops, sorry. θ=-3pi/4 Thanks
• October 3rd 2012, 11:10 AM MarkFL Re: terminal side From the given information, we know: $r\cos\left(-\frac{3\pi}{4} \right)=-1$ $r=\sqrt{2}$ The $y$-coordinate is then: $r\sin\left(-\frac{3\pi}{4} \right)=?$
• October 3rd 2012, 11:58 AM RiderMind Re: terminal side 0?
• October 3rd 2012, 12:15 PM MarkFL Re: terminal side No, what is $\sin\left(-\frac{3\pi}{4} \right)=-\sin\left(\frac{3\pi}{4} \right)=-\sin\left(\pi-\frac{3\pi}{4} \right)=-\sin\left(\frac{\pi}{4} \right)$ ?
• October 3rd 2012, 12:30 PM RiderMind Re: terminal side -45 degrees? or sqrt2
• October 3rd 2012, 12:37 PM MarkFL Re: terminal side No, the sine of an angle will not return an angle. $\frac{\pi}{4}$ is a special angle for which you should know the trig. functions at that angle. $-\sin\left(\frac{\pi}{4} \right)=-\frac{1}{\sqrt{2}}$ So, the $y$-coordinate of the point is: $y=r\cdot\left(-\frac{1}{\sqrt{2}} \right)$ Recall we found $r=\sqrt{2}$, hence: $y=\sqrt{2}\cdot\left(-\frac{1}{\sqrt{2}} \right)=-1$ And so, the point in question is (-1,-1). Try drawing a diagram, and you will easily see that the $y$-coordinate has to be equal to the $x$-coordinate, as the given angle lies along the line $y=x$.
• October 3rd 2012, 12:52 PM RiderMind Re: terminal side I see what you mean now. Thanks for telling me to draw a diagram. That made it easier for me to understand. So -pi/4 is one of those I just need to memorize then, right? Thanks so much bro.
• October 3rd 2012, 01:06 PM HallsofIvy Re: terminal side Well, it's a lot easier to memorize if you understand it. Imagine a right triangle having one angle of $\pi/4$ and one leg of length 1. Since $\pi/2= 2(\pi/4)$, the other angle is also $\pi/4$, which means that the other leg also has length 1. By the Pythagorean theorem, the length of the hypotenuse is given by $c^2= 1^2+ 1^2= 2$ so that $c= \sqrt{2}$. That gives $\sin(\pi/4)= 1/\sqrt{2}= \frac{\sqrt{2}}{2}$.
• October 3rd 2012, 06:02 PM Nervous Re: terminal side Could someone send me a link to teach me how to do this? I'm afraid I don't even understand it...
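For anyone who wants to check the arithmetic numerically, a tiny Python computation (my own addition, not part of the thread) reproduces the point (-1, -1):

```python
import math

theta = -3 * math.pi / 4
r = -1 / math.cos(theta)        # from r*cos(theta) = -1, giving r = sqrt(2)
y = r * math.sin(theta)
print(r, y)                     # 1.414..., -1.0  ->  the point is (-1, -1)
```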
2015-08-01 12:05:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548895716667175, "perplexity": 854.3989026130679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988650.6/warc/CC-MAIN-20150728002308-00163-ip-10-236-191-2.ec2.internal.warc.gz"}
http://openstudy.com/updates/50bc4d2ce4b0bcefefa072ac
## CalcDerp102 Group Title Area of a shaded region, posted below. I have some work done, but I don't know where to proceed. one year ago 1. CalcDerp102 (drawing omitted) $\int\limits_{1/2}^{2} 1/x \, dx$ 2. CalcDerp102 that's all I have at the moment 3. jayz657 the antiderivative of 1/x is ln(x), so you evaluate this from 1/2 to 2, so it's ln(2) - ln(1/2) 4. CalcDerp102 thank you so much, I missed my antiderivative class so I don't know when to use it 5. CalcDerp102 Your instructor might prefer it as $$\ln 4$$, which is exact.
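A quick numeric check (my own, not part of the thread) that ln(2) - ln(1/2) is indeed ln 4:

```python
import math

exact = math.log(2) - math.log(0.5)   # the antiderivative ln(x) evaluated from 1/2 to 2
print(exact, math.log(4))             # both ~1.3863, i.e. ln 4
```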
2014-10-26 09:44:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8620593547821045, "perplexity": 2863.939305784326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119662145.58/warc/CC-MAIN-20141024030102-00169-ip-10-16-133-185.ec2.internal.warc.gz"}
https://ai.meta.stackexchange.com/tags/feature-request
# Questions tagged [feature-request]
You have an idea for a new feature, or for a change to the existing functionality. 25 questions

16 views ### Merge [graphs] and [graph-theory] tags These tags seem to be effectively interchangeable for all questions I've seen them used in. Should they be merged? (Note: I agree graph-neural-networks should remain distinct.)
48 views ### Should the threshold for post closure by non-moderators be reduced from 5 to 3? Essentially, because voting activity on SE:AI is still sub-optimal, it's rare for a question to receive 5 close votes from the community. (Here the community refers to non-moderators--mods can ...
66 views ### Are bots used for spam filtering in SE.AI currently? In Stack Overflow there are a couple of community bots that aim to help moderation by automatically flagging posts with the Stack Exchange API, like this. Are bots like this used in artificial ...
18 views ### How to do code syntax highlighting Stack Overflow does code syntax highlighting automatically, however, ai.stackexchange doesn't. I've tried to add <-- language: python --> before code lines ...
45 views Earlier today I answered a question titled "About the paper : “Label-Free Supervision of Neural Networks with Physics and Domain Knowledge”", for which I had to check the original paper. I spent some ...
43 views ### Flagging a question to be closed as off-topic should offer more options Currently, when you flag a question as off-topic, you cannot specify which other SE website it should belong to. I think we should at least have the option to specify that it can belong to Data ...
47 views ### When will the CMS value corresponding to the description in the communities drop down be changed? It was unanimously agreed that the current AI Stack Exchange description in the below depicted drop down and search for MORE STACK EXCHANGE COMMUNITIES is a misrepresentation for two reasons. Not ...
69 views ### Is the Artificial Intelligence beta stuck with its current out-facing description? After two years of effort and patience in developing a sensible consensus about the AI SE sub-site's description, taking care to be respectful of other established SE sub-sites and ensuring a faithful ...
17 views ### Why did the close page flow include neither DS nor CV in the listed migration recommendation options? [duplicate] When I voted to close a purely data science question that appeared in the First Question queue and selected the option that it was more appropriate for another site, the AI meta was listed as the only ...
71 views ### Is the incentive of the down vote undifferentiated by the existence of associated reasoning non-optimal for SE? This is a question to ask the social engineers employed by those who own SE and SO, but I would like to vet it here before doing that. Background Consider that the SE/SO structure is, from one ...
20 views ### Is there any way to add the “tabular” environment to the LaTeX support for this site? That would allow this structure, instead of trying to build tables with ASCII or UTF-8 characters, tedious to say the least. ...
73 views ### How can we change the site description to match our current topic guidelines, and past votes on the description? The current description for the AI.SE site is: Q&A for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely ...
31 views ### Visited/Unvisited link colours are too similar Not sure if this is a bug, or maybe an improvement opportunity, but with my current settings I find the colours to be similar enough to cause confusion. I haven't experienced this in other sites. ...
29 views ### Low-resolution version of logo too vague While the new logo looks great at higher resolutions (e.g. in the title just under the navigation bar), it's really 'vague' (sorry, I don't know how to phrase it) in e.g. the Hot Network Questions ...
18 views ### Push Notifications in questions list I'd like to be notified when new questions arise. Would it be feasible? Receive notifications from a specific feed (in my case the Artificial Intelligence community)? When I say "Notifications" I ...
42 views ### Increase the number of sites for flag “off-topic->another site” If a question must be flagged as "off-topic" due to "This question belongs on another site in the Stack Exchange network", the only current possibility is "belongs on ai.meta.stackexchange.com". No ...
38 views ### Should we make tags for the different languages? Recently, a user asked a question in which I am unable to help (or even refer) the OP, as I do not know which language is being used. So, my question is, should we make language tags ("python", "...
23 views ### Math equations on AI stackexchange [duplicate] I recently replied to a post on ai stackexchange, and I noticed that it is not possible to insert equations, unlike other websites like crypto stackexchange for example. I believe the library used on ...
23 views ### Are these tags synonymous? I came across these tags. Are these tags synonymous? spanish-language language-processing natural-language natural-language-processing The Spanish language tag is not synonymous, but I don't find ...
30 views ### Proposed Tag: Probabilistic Graphical model I searched for the PGM tag to subscribe to but could not find it. IMHO, Probabilistic Graphical model is an essential branch of AI. Thanks
65 views ### Is it possible to place a banner stating that programming questions are off-topic here? We seem to have a lot of questions about programming showing up now, which are off-topic (and not enough people VTCing!). Examples: (1) (2) (3) Is it possible to place a banner at the top of the ...
282 views What should we have in Help Center > Asking section regarding What topics can I ask about here? For example Stats SE has this: CrossValidated is for statisticians, data miners, and anyone else ...
41 views ### Add CogSCi as a migration target Can we please add Cognitive Sciences as a migration target? It only makes sense, since we're bound to have some questions that should probably be migrated there, such as this. Or at least add the '...
I was going to answer a question about reinforcement learning and wanted to show some formulas using the same notation I use on CrossValidated, for instance: $r_{t+1}+\gamma \max_a Q(s_{t+1},a)$ But ...
2020-06-05 15:02:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6458610892295837, "perplexity": 2692.3144879277957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348502097.77/warc/CC-MAIN-20200605143036-20200605173036-00134.warc.gz"}
https://blog.theangry.dev/category/projects/aer-calculator/
# Android AER Calculator This post is about an Android application I wrote to calculate the Annual Equivalent Rate (AER) of a portfolio. The AER of a portfolio is the annualised interest rate that, when applied to the portfolio contributions, results in the current value of the portfolio. ## Simple example If I invested £100 exactly one year ago and the value now is £105, the AER would be 5%. This is because £100 x (1 + 5%) = £105. ## Not so simple example However, things get significantly more complicated when there are multiple contributions on different dates. What is the AER of a portfolio with contributions of £100 one year ago and £50 ten months ago that is worth £160 today? It turns out to be approximately 7.04%. This is a typical example that is simply stated but has a not-so-obvious solution. ## The Algorithm To compute the AER, I used a numerical method called the Newton-Raphson method. The method starts with an initial guess for the root of a function. From here, we follow the derivative of the function in order to converge on the true root of the function. In this case, the function we are interested in is: $f(r) = \sum_i C_i (r + 1)^{\frac{D_t - D_i}{365}} - P$ Where $C_i$ is the ith contribution, $D_i$ is the day of the ith contribution, $D_t$ is the current day and $P$ is the present value of the portfolio. The derivative is: $f'(r) = \sum_i C_i \left[ \frac{D_t - D_i}{365} \right] (r + 1)^{\left[ \frac{D_t - D_i}{365} - 1\right] }$ The algorithm starts with an initial estimate $r_0$ and obtains a better approximation $r_1$ by using the relation: $\displaystyle r_1 = r_0 - \frac{f(r_0)}{f'(r_0)}$ This process is repeated until the difference between successive estimates is less than some threshold value or the number of iterations hits some predefined limit. ## AER Calculator This led me to write the Android application AER Calculator, which can be found on the Google Play Store. The (open source!) project can be found on GitHub.
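For readers who want to experiment before installing the app, here is a minimal Python sketch of the same Newton-Raphson idea, written from the formulas above (my own illustration, not the app's actual source); contribution ages are given in days before today:

```python
def aer(contributions, present_value, r0=0.05, tol=1e-10, max_iter=100):
    """Newton-Raphson estimate of the annual equivalent rate.

    contributions: list of (amount, days_ago) pairs
    present_value: current portfolio value
    """
    r = r0
    for _ in range(max_iter):
        f = sum(c * (1 + r) ** (d / 365) for c, d in contributions) - present_value
        df = sum(c * (d / 365) * (1 + r) ** (d / 365 - 1) for c, d in contributions)
        r_next = r - f / df
        if abs(r_next - r) < tol:
            return r_next
        r = r_next
    return r

# The blog's example: £100 a year ago and £50 ten months ago, worth £160 today.
print(aer([(100, 365), (50, 304)], 160))   # ~0.07, close to the 7.04% quoted above
```

The small difference from the quoted 7.04% comes from the day-count assumption for "ten months ago" (304 days here).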
2021-03-03 18:40:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7356682419776917, "perplexity": 726.6835313599028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00174.warc.gz"}
https://banhadegalinha.com/6mxi8/e8oxg.php?1846e3=how-to-insert-square-root-symbol-in-excel-on-mac
# How to insert square root symbol in Excel on Mac

The square root of a number is a value that, when multiplied by itself, gives the number: for example, 4 * 4 = 16, so 4 is the square root of 16. When writing equations, the square root is written with the radical (√) symbol, and there are several ways to get that symbol into an Excel cell, whether you are on Windows or on a Mac.

Keyboard shortcuts

On Windows, hold down the Alt key and type 8730 on the numeric keypad to get the square root symbol √ (Alt + 251 also works in most applications). For the squared symbol ², use Alt + 0178 or Alt + 253. In Word you can also type the Unicode value and then press Alt + X: for example, 221A followed by Alt + X becomes √, and 2611 followed by Alt + X becomes a ballot box with a check mark ☑. Word's Math AutoCorrect will likewise convert \sqrt inside an equation. The "Win + ;" shortcut opens the Windows emoji keyboard, which also has a symbols section.

On a Mac, press Option + V to type √ directly. For characters without a dedicated shortcut, switch the input source to Unicode Hex Input and hold Option while typing the hex code: Option + 221A gives √ and Option + 00B2 gives ².

The Insert Symbol dialog and Character Map

Go to the Insert tab, click Symbol (in some versions of Office for Mac this appears as Advanced Symbol), and choose Mathematical Operators from the Subset dropdown. Select the square root character and click Insert; the same category also contains the cube root symbol ∛. Alternatively, copy the character from the Windows Character Map app (search for "character map" in the taskbar) or the macOS character viewer, then paste it into your worksheet.

Formulas

The symbol itself is only text. To actually compute a square root, use the SQRT function, which takes a single argument: =SQRT(16) returns 4. To return the character itself from a formula, use the UNICHAR function with the code point 8730, i.e. =UNICHAR(8730). To display a squared value such as a², type a2 in the cell, highlight the 2, and apply superscript formatting.
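Not part of the original article: if you generate workbooks from a script, the same character and formulas can be written programmatically. This is a minimal sketch assuming the third-party openpyxl package is installed; the file name is my own choice.

```python
# Minimal sketch (assumes the third-party openpyxl package is installed).
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws["A1"] = "\u221A25"          # the literal square-root character followed by 25, stored as text
ws["B1"] = "=SQRT(25)"         # Excel formula; evaluates to 5 when the file is opened in Excel
ws["C1"] = "=UNICHAR(8730)"    # Excel formula; returns the √ character (Excel 2013 and later)
wb.save("square_root_demo.xlsx")
```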
2021-02-27 16:35:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48941174149513245, "perplexity": 2203.9065563693102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358976.37/warc/CC-MAIN-20210227144626-20210227174626-00508.warc.gz"}
http://cpr-condmat-strel.blogspot.com/2013/06/13063250-xie-chen-et-al.html
## Symmetry Enforced Non-Abelian Topological Order at the Surface of a Topological Insulator    [PDF]

Xie Chen, Lukasz Fidkowski, Ashvin Vishwanath

The surfaces of three dimensional topological insulators (3D TIs) are generally described as Dirac metals, with a single Dirac cone. It was previously believed that a gapped surface implied breaking of either time reversal T or U(1) charge conservation symmetry. Here we discuss a novel possibility in the presence of interactions, a surface phase that preserves all symmetries but is nevertheless gapped and insulating. A requirement is that the surface develops topological order of a kind that cannot be realized in a purely 2D system with the same symmetries. We discuss two candidate surface states - both of which are non-Abelian Fractional Quantum Hall states which, when realized in 2D, have $$\sigma_{xy}=1/2$$ and hence break T symmetry. However, by constructing an exactly soluble 3D lattice model, we show they can be realized as T symmetric surface states. Both the corresponding 3D phases are confined, have $$\theta=\pi$$ magnetoelectric response, and require electrons that are Kramers doublets. The first, the T-Pfaffian state, is the (Read-Moore) Pfaffian state with the neutral sector reversed, while the second, the Pfaffian-antisemion state is a product of the Pfaffian state with antisemion topological order. The latter can be connected to the superconducting TI surface state on breaking charge U(1) symmetry, while for the T-Pfaffian there is no simple way to do so. We discuss two physical scenarios for the T-Pfaffian, either (i) it is equivalent to the Pfaffian-antisemion theory and also describes the 3D TI surface OR (ii) it represents a new, interacting 3D TI. View original: http://arxiv.org/abs/1306.3250
2017-08-23 15:44:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8206735253334045, "perplexity": 1335.6482865488997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.75/warc/CC-MAIN-20170823152006-20170823172006-00671.warc.gz"}
http://en.wikipedia.org/wiki/Lower_set
# Upper set

Figure: The powerset algebra of the set $\{1,2,3,4\}$ with the upset $\uparrow\{1\}$ colored green.

In mathematics, an upper set (also called an upward closed set or just an upset) of a partially ordered set (X, ≤) is a subset U with the property that, if x is in U and x ≤ y, then y is in U. The dual notion is lower set (alternatively, down set, decreasing set, initial segment; the set is downward closed), which is a subset L with the property that, if x is in L and y ≤ x, then y is in L.

## Properties

• Every partially ordered set is an upper set of itself.
• The intersection and the union of upper sets are again upper sets.
• The complement of any upper set is a lower set, and vice versa.
• Given a partially ordered set (X, ≤), the family of lower sets of X ordered with the inclusion relation is a complete lattice, the down-set lattice O(X).
• Given an arbitrary subset Y of an ordered set X, the smallest upper set containing Y is denoted using an up arrow as ↑Y.
• Dually, the smallest lower set containing Y is denoted using a down arrow as ↓Y.
• A lower set is called principal if it is of the form ↓{x} where x is an element of X.
• Every lower set Y of a finite ordered set X is equal to the smallest lower set containing all maximal elements of Y: Y = ↓Max(Y), where Max(Y) denotes the set containing the maximal elements of Y.
• A directed lower set is called an order ideal.
• The minimal elements of any upper set form an antichain.
• Conversely, any antichain A determines an upper set {x : y ≤ x for some y in A}. For partial orders satisfying the descending chain condition this correspondence between antichains and upper sets is 1-1, but for more general partial orders this is not true.

## Ordinal numbers

An ordinal number is usually identified with the set of all smaller ordinal numbers. Thus each ordinal number forms a lower set in the class of all ordinal numbers, which are totally ordered by set inclusion.
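Not part of the Wikipedia article: a small illustrative sketch of the ↑Y and ↓Y closures on a toy poset (the divisors of 12 ordered by divisibility). The function names and the example poset are my own choices.

```python
def up_closure(Y, X, leq):
    """Smallest upper set of (X, leq) containing Y, i.e. ↑Y."""
    return {x for x in X if any(leq(y, x) for y in Y)}

def down_closure(Y, X, leq):
    """Smallest lower set of (X, leq) containing Y, i.e. ↓Y (the dual notion)."""
    return {x for x in X if any(leq(x, y) for y in Y)}

X = {1, 2, 3, 4, 6, 12}            # divisors of 12
leq = lambda a, b: b % a == 0      # a ≤ b  iff  a divides b

print(up_closure({2}, X, leq))     # {2, 4, 6, 12} -> the principal upper set ↑{2}
print(down_closure({6}, X, leq))   # {1, 2, 3, 6}  -> the principal lower set ↓{6}
```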
2014-03-14 01:14:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8810036182403564, "perplexity": 578.7167355395122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678683421/warc/CC-MAIN-20140313024443-00035-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/97666-solved-can-anyone-tell-me-how-invert-complex-matrix.html
# Thread: [SOLVED] Can anyone tell me how to invert a complex matrix?

1. ## [SOLVED] Can anyone tell me how to invert a complex matrix?

Thanks.

2. Hello,

Why would it be different from inverting a real matrix?

3. Here is the matrix $\displaystyle A$ that I'm working on. Can you just get me started with the elimination of $\displaystyle A_{21}$... please.

$\displaystyle \left[\begin{matrix} 1 & 1 & 1 \\ j8 & -j8 & 0 \\ 0 & j8 & -j8 \end{matrix}\right]$
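Not from the thread: a quick numerical check, reading j as the imaginary unit (so the j8 entries are 8i). NumPy inverts complex matrices exactly the same way as real ones, and the elimination step the poster asks about is also unchanged.

```python
import numpy as np

A = np.array([[1,   1,   1 ],
              [8j, -8j,  0 ],
              [0,   8j, -8j]])

# The first elimination step asked about: R2 <- R2 - 8j * R1 removes A[1, 0],
# exactly as it would with real entries.
A_inv = np.linalg.inv(A)
print(A_inv)
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```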
2018-05-25 02:17:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4467158019542694, "perplexity": 622.7583638717066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866917.70/warc/CC-MAIN-20180525004413-20180525024413-00186.warc.gz"}
https://math.stackexchange.com/questions/3276185/prove-that-a-bijection-fx%E2%86%92y-is-a-homeomorphism-if-and-only-if-f-and-f-1/3276204
# "Prove that a bijection $f:X→Y$ is a homeomorphism if and only if $f$ and $f^{-1}$ map closed sets to closed sets." This is a problem from Introduction to Topology: Pure and Applied by Colin Adams and Robert Franzosa. Problem "Prove that a bijection $$f:X→Y$$ is a homeomorphism if and only if $$f$$ and $$f^{-1}$$ map closed sets to closed sets." Definition "We can paraphrase the definition of homeomorphism by saying that $$f$$ is a homeomorphism if it is a bijection on points and a bijection on the collections of open sets making up the topologies involved. Every point in $$X$$ is matched to a unique point in $$Y$$, with no points in $$Y$$ left over. At the same time, every open set in $$X$$ is matched to a unique open set in $$Y$$, with no open sets in $$Y$$ left over." Thoughts Let $$f:X→Y$$ be a bijection. Suppose $$f^{-1}$$ does not map the closed $$C'$$ to a closed set C. Then $$f^{-1}$$ does not map the open set $$Y-C'$$ to an open set $$X-C$$. Then $$f:X→Y$$ is not a homeomorphism. Suppose $$f$$ maps all closed sets $$C$$ to all closed sets $$C'$$, and $$f^{-1}$$ maps all closed sets $$C'$$ to all closed sets $$C$$. Then $$f$$ maps all open sets $$X-C$$ to open sets $$Y-C'$$, and $$f^{-1}$$ maps all open sets $$Y-C'$$ to open sets $$X-C$$. Then $$f:X→Y$$ is a homeomorphism. • Is that the real definition of homeomorphism, that you got? Normally a function $f:X\to Y$ is called a homeomorphism if $f$ is a continuous bijection and $f^{-1}$ is continuous. The property that $f$ and $f^{-1}$ map closed sets to closed sets is equivalent to $f$ and $f^{-1}$ beeing continuous. Jun 27 '19 at 15:39 ## 2 Answers The maps $$f$$ and $$f^{−1}$$ are closed iff they are continuous: Suppose $$f$$ is a homeomorphism and let $$A \subset X$$ be a closed set. We get that $$f(A) = (f^{-1})^{-1}(A) \subset Y$$ is closed since $$f^{-1}$$ is continuous. Analogously $$f^{-1}$$ is closed. Suppose $$f$$ and $$f^{-1}$$ are closed, and let $$B \subset Y$$ be a closed set. Now we have that $$f^{-1}(B) \subset X$$ is closed as $$f^{-1}$$ is a closed map. Therefore $$f$$ is continuous. Analogously $$f^{-1}$$ is continuous. Hint: $$f$$ is closed iff $$f^{-1}$$ is continuous.
2022-01-24 19:38:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9452657699584961, "perplexity": 67.14137488085603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304600.9/warc/CC-MAIN-20220124185733-20220124215733-00260.warc.gz"}
http://clay6.com/qa/38308/if-the-equation-ax-2-bx-c-0-a-0-has-roots-alpha-and-beta-such-that-alpha-2-
# If the equation $ax^2+bx+c=0$ $(a > 0)$ has roots $\alpha$ and $\beta$ such that $\alpha < -2$ and $\beta > 2$ then, $\begin{array}{1 1}(A)\;b^2-4ac < 0&(B)\;c > 0\\(C)\;a+|b|+c < 0&(D)\;4a+2|b|+c > 0\end{array}$

Since the equation has two real roots $\alpha,\beta$, we have $b^2-4ac > 0$, so (A) is ruled out.

Since $\alpha < -2$ and $\beta > 2$, the product of the roots is negative: $\alpha\beta < 0 \Rightarrow \frac{c}{a} < 0$. As $a > 0$, it follows that $c < 0$, so (B) is ruled out.

Because $a > 0$, the parabola $f(x)=ax^2+bx+c$ is negative strictly between its roots, and $-2, -1, 1, 2$ all lie between $\alpha$ and $\beta$. Hence

$f(-1)=a-b+c < 0$ and $f(1)=a+b+c < 0$, which together give $a+|b|+c < 0$.

Similarly, $f(2)=4a+2b+c < 0$ and $f(-2)=4a-2b+c < 0$, so $4a+2|b|+c < 0$ and (D) is false.

Hence (C) is the correct answer.
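A numeric spot-check of the conclusion (not part of the original solution); the sample values a = 1, alpha = -3, beta = 4 are my own.

```python
a, alpha, beta = 1, -3, 4
b = -a * (alpha + beta)                  # sum of roots = -b/a
c = a * alpha * beta                     # product of roots = c/a
f = lambda x: a * x**2 + b * x + c

print(f(-1), f(1))                       # -10 -12  (both negative, as argued)
print(a + abs(b) + c)                    # -10      -> option (C) holds
print(4 * a + 2 * abs(b) + c)            # -6       -> so (D), which claims > 0, fails
```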
2018-06-23 15:41:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7715585231781006, "perplexity": 2648.308329926239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865098.25/warc/CC-MAIN-20180623152108-20180623172108-00040.warc.gz"}
http://www.physicsforums.com/showthread.php?t=356525
# Double Integral with base e

by r_swayze. Tags: base, double, integral

$$\int_0^1\int_0^y e^{x^2} dx dy$$

The region I am integrating over should look like this graph, right? I tried switching the bounds but I am left where I started. Since 0 < x < y and 0 < y < 1, I can switch to 0 < x < 1 and x < y < 1, leaving me with the integral

$$\int_0^1\int_x^1 e^{x^2} dy dx$$

Integrating gives $$e^{x^2}y$$, then substituting the values for y gives

$$\int_0^1 \left(e^{x^2} - x e^{x^2}\right) dx$$

Am I integrating over the wrong bounds? I know if 0 < y < x, it would work.

[Attached thumbnail: sketch of the region of integration]

HW Helper (PF Gold): Everything looks correct and you reversed the limits nicely. I suspect a typo in your textbook. (Of course you can do the second integral, but that doesn't help.)
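Not from the thread: a numerical sketch confirming that the reversed-order integral matches the original, and that the region the textbook probably intended (0 < y < x) does give a closed form, (e - 1)/2.

```python
import numpy as np
from scipy.integrate import dblquad, quad

# Original order: outer y in [0, 1], inner x in [0, y]
I1, _ = dblquad(lambda x, y: np.exp(x**2), 0, 1, lambda y: 0, lambda y: y)
# Reversed order: outer x in [0, 1], inner y in [x, 1]
I2, _ = dblquad(lambda y, x: np.exp(x**2), 0, 1, lambda x: x, lambda x: 1)
print(I1, I2)                            # both ~0.6035, so the reversal is consistent

# The likely intended region 0 < y < x has a closed form after reversing:
I3, _ = quad(lambda x: x * np.exp(x**2), 0, 1)
print(I3, (np.e - 1) / 2)                # ~0.8591, equal to (e - 1)/2
```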
2014-08-30 20:34:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543010354042053, "perplexity": 626.9707447079115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835699.86/warc/CC-MAIN-20140820021355-00442-ip-10-180-136-8.ec2.internal.warc.gz"}
https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH103/April_2006/Question_04
# Science:Math Exam Resources/Courses/MATH103/April 2006/Question 04

MATH103 April 2006

Work in progress: this question page is incomplete, there might be mistakes in the material you are seeing here.

Other MATH103 Exams

### Question 04

You are driving your car at 30 m/sec (approximately 108 km/hr) to catch your flight to Costa Rica for summer holidays. A pedestrian runs across the road, forcing you to brake hard. Suppose it takes you 1 sec to react to the danger, and that when you apply your brakes, you slow down at the rate a = -10 m/sec². After applying the brakes, how long will it take you to stop? How far will your car move from the instant that the danger is sighted until coming to a complete stop?

Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you?

If you are stuck, check the hint below. Consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it!

Checking a solution serves two purposes: helping you if, after having used the hint, you are still stuck on the problem; or if you have solved the problem and would like to check your work.

If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you are stuck or if you want to check your work.

If you want to check your work: Don't only focus on the answer; problems are mostly marked for the work you do, so make sure you understand all the steps that were required to complete the problem and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.
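The wiki page above is marked as incomplete and hides its hint and solution; purely as a quick worked check that is not taken from the page, the constant-acceleration formulas give

```latex
t_{\text{stop}} = \frac{v_0}{|a|} = \frac{30}{10} = 3\ \text{s},
\qquad
d_{\text{total}} = v_0\,t_{\text{react}} + \frac{v_0^2}{2|a|}
                 = 30(1) + \frac{30^2}{2(10)} = 30 + 45 = 75\ \text{m},
```

i.e. about 3 seconds of braking and 75 m travelled from the instant the danger is sighted.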
2022-07-07 13:21:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49149465560913086, "perplexity": 318.79519203685845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00089.warc.gz"}
https://codereview.stackexchange.com/questions/103812/php-script-to-connect-to-mysql-database-using-pdo
# PHP script to connect to MySQL database using PDO

Introduction

I am preparing to use a basic script to cover how you can connect to a MySQL database using PDO. This script is meant for educational purposes (an introductory class on PHP/MySQL) and of course does not cover all security aspects of database connections.

Points of focus

1. Conformity to the PSR-1 and PSR-2 standards.
2. Structure of the database connection code.
3. Security aspects that have not been covered.

Code (settings.php):

<?php
// Defines database connection information
$settings = [
    'host' => '127.0.0.1',
    'name' => 'c9',
    'port' => '3306',
    'charset' => 'utf8',
    'username' => 'admin',
    'password' => 'root'
];
?>

Code (db.php):

<?php
// Includes database connection information
require_once('../settings.php');

// Establishes connection to database server
try {
    $dbh = new PDO(
        sprintf(
            'mysql:host=%s;dbname=%s;port=%s;charset=%s',
            $settings['host'],
            $settings['name'],
            $settings['port'],
            $settings['charset']
        ),
        $settings['username'],
        $settings['password']
    );

    // Prevents emulated prepared statements and sets error mode
    // to PDO::ERRMODE_EXCEPTION
    $dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
// Prints out errors raised by PDO
catch (PDOException $e) {
    die('Error: ' . $e->getMessage());
}
?>

The ../ for settings.php indicates that the file is outside the document root for security purposes. Much appreciated in advance for any comments that could improve this code.

I don't use PHP often, but here are a few comments:

Use Objects to Manage Resources

Like any other language with object-oriented capabilities, PHP serves you best when you manage resources with objects. I'd strongly recommend holding your PDO object within a custom class that manages the connection. This makes the code easier to maintain and doesn't actually have to change much about what you did. The code will also be more extensible. Also, introducing OOP is a very good lesson for students starting out. You could then also show them lazy loading, making sure the PDO connection isn't opened until there is actual communication between the server and the database.
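The reviewer's lazy-loading suggestion is language-agnostic. Purely as an illustration of the pattern (it is not PHP and not a drop-in for the script above), here is a sketch using Python's built-in sqlite3 module; the class name LazyDatabase is hypothetical.

```python
# Illustrative only: the connection is not created until the first query needs it.
import sqlite3

class LazyDatabase:
    def __init__(self, path):
        self.path = path
        self._conn = None              # nothing is opened yet

    @property
    def conn(self):
        if self._conn is None:         # first real use opens the connection
            self._conn = sqlite3.connect(self.path)
        return self._conn

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

db = LazyDatabase(":memory:")                 # no connection exists at this point
db.query("CREATE TABLE t (x INTEGER)")        # the connection is opened here, on demand
```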
2022-05-17 09:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22597047686576843, "perplexity": 4359.769114773654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517018.29/warc/CC-MAIN-20220517063528-20220517093528-00119.warc.gz"}
https://gmatclub.com/forum/what-is-the-greatest-possible-area-of-a-triangular-region-91398.html?fl=similar
It is currently 19 Nov 2017, 23:32 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # What is the greatest possible area of a triangular region new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message TAGS: ### Hide Tags Manager Joined: 18 Oct 2009 Posts: 50 Kudos [?]: 768 [3], given: 3 Schools: Kellogg What is the greatest possible area of a triangular region [#permalink] ### Show Tags 01 Nov 2009, 22:12 3 KUDOS 12 This post was BOOKMARKED 00:00 Difficulty: 65% (hard) Question Stats: 54% (00:55) correct 46% (01:02) wrong based on 630 sessions ### HideShow timer Statistics What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius one and the other two vertices on the circle? A. $$\frac{\sqrt{3}}{4}$$ B. $$\frac{1}{2}$$ C. $$\frac{\pi}{4}$$ D. 1 E. $$\sqrt{2}$$ [Reveal] Spoiler: OA _________________ GMAT Strategies: http://gmatclub.com/forum/slingfox-s-gmat-strategies-condensed-96483.html Last edited by Bunuel on 17 Oct 2013, 08:49, edited 1 time in total. Kudos [?]: 768 [3], given: 3 Senior Manager Joined: 18 Aug 2009 Posts: 299 Kudos [?]: 362 [0], given: 9 Re: Maximum Area of Inscribed Triangle [#permalink] ### Show Tags 02 Nov 2009, 02:36 1 This post was BOOKMARKED gmattokyo wrote: I'd go with B. 1/2 right triangle. a rough sketch shows that taking one of the sides either left or right seems to be reducing the area. right triangle area =1/2x1x1 (base=height=radius)=1/2 The logic just striked me... area=1/2xbasexheight. In this case, if you keep the base is constant=radius. Height is at its maximum when it is right triangle. is that the OA? Kudos [?]: 362 [0], given: 9 Director Joined: 25 Oct 2008 Posts: 594 Kudos [?]: 1182 [0], given: 100 Location: Kolkata,India Re: Maximum Area of Inscribed Triangle [#permalink] ### Show Tags 03 Nov 2009, 17:58 So I came across this question in my test and got it wrong..I assumed the equilateral triangle has the greatest area and marked root3/4 Now i see the logic..any triangle drawn by the above specifications will have two legs as the radius..we have to maximise the area so the third leg should be the largest. However,is this some kind of a theoram/fact that we should be knowing?That to get the largest area of a triangle,the triangle has to be a right angle and not an equilateral one? _________________ http://gmatclub.com/forum/countdown-beginshas-ended-85483-40.html#p649902 Kudos [?]: 1182 [0], given: 100 VP Joined: 05 Mar 2008 Posts: 1467 Kudos [?]: 307 [0], given: 31 Re: Maximum Area of Inscribed Triangle [#permalink] ### Show Tags 03 Nov 2009, 18:53 tejal777 wrote: So I came across this question in my test and got it wrong..I assumed the equilateral triangle has the greatest area and marked root3/4 Now i see the logic..any triangle drawn by the above specifications will have two legs as the radius..we have to maximise the area so the third leg should be the largest. 
However,is this some kind of a theoram/fact that we should be knowing?That to get the largest area of a triangle,the triangle has to be a right angle and not an equilateral one? Yes, if the bases are the same. In this case 1 would be the base (radius) and a 45-45-90 maximizes area Try using any number for the base, for example 4 45-45-90 = 1/2(4)(4) = 8 60-60-60 = 1/2(4)(2 sqrt(3)) = 4 sqrt(3) Kudos [?]: 307 [0], given: 31 Math Expert Joined: 02 Sep 2009 Posts: 42256 Kudos [?]: 132742 [6], given: 12360 What is the greatest possible area of a triangular region [#permalink] ### Show Tags 06 Dec 2009, 12:47 6 KUDOS Expert's post 8 This post was BOOKMARKED What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius one and the other two vertices on the circle? A. $$\frac{\sqrt{3}}{4}$$ B. $$\frac{1}{2}$$ C. $$\frac{\pi}{4}$$ D. 1 E. $$\sqrt{2}$$ Clearly two sides of the triangle will be equal to the radius of 1. Now, fix one of the sides horizontally and consider it to be the base of the triangle. $$area=\frac{1}{2}*base*height=\frac{1}{2}*1*height=\frac{height}{2}$$. So, to maximize the area we need to maximize the height. If you visualize it, you'll see that the height will be maximized when it's also equals to the radius thus coincides with the second side (just rotate the other side to see). which means to maximize the area we should have the right triangle with right angle at the center. $$area=\frac{1}{2}*1*1=\frac{1}{2}$$. You can also refer to other solutions: triangular-region-65317.html _________________ Kudos [?]: 132742 [6], given: 12360 Manager Joined: 29 Oct 2009 Posts: 209 Kudos [?]: 1658 [15], given: 18 GMAT 1: 750 Q50 V42 Re: GMAT Prep Triangle/Circle [#permalink] ### Show Tags 06 Dec 2009, 13:09 15 KUDOS 3 This post was BOOKMARKED Adding onto what Bunuel said, there is an important property about isosceles triangles that will help you understand and solve this question. First though, let us see how this particular triangle must be isosceles. If one vertex is at the centre of the circle and the other two are on the diameter, then the triangle must be isosceles since two of its sides will be = radius of circle = 1. Now for an isosceles triangle, the area will be maximum when it is a right angled triangle. One way of proving this is through differentiation. However, since that is well out of GMAT scope, I will provide you with an easier approach. An isosceles triangle can be considered as one half of a rhombus with side lengths 'b'. Now a rhombus of greatest area is a square, half of which is a right angled isosceles triangle. Thus for an isosceles triangle, the area will be greatest when it is a right angled triangle. [Note to Bunuel : I think this one might have been missed in the post on triangles?] Now for the right angled triangle in our case, b = 1 and h = 1 Thus area of triangle = $$\frac{1}{2}*b*h$$ = $$\frac{1}{2}$$ Note : I believe the mistake you might have made is considered the base to be = 2 (or the diameter of the circle) and height to be 1. This can only be possible if all three vertices lie on the circle not when one is at the centre. _________________ Click below to check out some great tips and tricks to help you deal with problems on Remainders! http://gmatclub.com/forum/compilation-of-tips-and-tricks-to-deal-with-remainders-86714.html#p651942 Word Problems Made Easy! 
1) Translating the English to Math : http://gmatclub.com/forum/word-problems-made-easy-87346.html 2) 'Work' Problems Made Easy : http://gmatclub.com/forum/work-word-problems-made-easy-87357.html 3) 'Distance/Speed/Time' Word Problems Made Easy : http://gmatclub.com/forum/distance-speed-time-word-problems-made-easy-87481.html Kudos [?]: 1658 [15], given: 18 Math Expert Joined: 02 Sep 2009 Posts: 42256 Kudos [?]: 132742 [3], given: 12360 Re: GMAT Prep Triangle/Circle [#permalink] ### Show Tags 06 Dec 2009, 14:15 3 KUDOS Expert's post 1 This post was BOOKMARKED sriharimurthy wrote: [Note to Bunuel : I think this one might have been missed in the post on triangles?] This is a useful property, thank you. +1. For an isosceles triangle with given length of equal sides right triangle (included angle) has the largest area. And vise-versa: Right triangle with a given hypotenuse has the largest area when it's an isosceles triangle. _________________ Kudos [?]: 132742 [3], given: 12360 Manager Joined: 09 May 2009 Posts: 204 Kudos [?]: 268 [2], given: 13 Re: GMAT Prep Triangle/Circle [#permalink] ### Show Tags 10 Dec 2009, 20:52 2 KUDOS arjunrampal wrote: Has anyone got a diagram of the trangle in circle for this question? I'm unable to visualize the diagram from the question fig attached Attachments circle.doc [23.5 KiB] _________________ GMAT is not a game for losers , and the moment u decide to appear for it u are no more a loser........ITS A BRAIN GAME Kudos [?]: 268 [2], given: 13 Senior Manager Joined: 30 Aug 2009 Posts: 283 Kudos [?]: 191 [0], given: 5 Location: India Concentration: General Management Re: Maximum Area - No clues [#permalink] ### Show Tags 14 Mar 2010, 02:57 mustdoit wrote: What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius 1 and the other two vertices on the circle? a. rt3/4 b. 1/2 c. Pi/4 d. 1 e. rt2 OA: [Reveal] Spoiler: B let the vertex at Centre be A and B and C are vertices of trianle on the circle so length of side AB and AC will be equal to radius of circle =1.In this case the maximum area will be obtained for a right angled isosceles traiangle 1/2* AB* AC = 1/2 *1*1 = 1/2 Kudos [?]: 191 [0], given: 5 Manager Joined: 13 Dec 2009 Posts: 248 Kudos [?]: 258 [0], given: 13 Re: Maximum Area - No clues [#permalink] ### Show Tags 14 Mar 2010, 03:20 mustdoit wrote: What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius 1 and the other two vertices on the circle? a. rt3/4 b. 1/2 c. Pi/4 d. 1 e. rt2 OA: [Reveal] Spoiler: B Let say b is the third side's length and a is the equal sides' lenght. then the area of triangle by hero's formula will be b * sqrt(4 a^2 - b^2)/4 putting value of a => Area = b * sqrt(4 - b^2)/4 now to get maximum value of Area we have to take derivative of Area in terms of the third side. For maximum Area its square will also be maximum, that's why squaring both the sides => Area^2 = b^2 * (4 - b^2)/16 => Taking derivative both the sides => d(Area^2)/db = (8b-4b^3)/16 equate RHS to 0 to get value of b for which Area is maximum (8b - 4b^3)/16 = 0 =>2b-b^3 = 0 =>b (2-b^2) = 0 b = 0, |b| = sqrt 2 now b cannot be negative so b = 0, b = sqrt 2 for these two values sqrt 2 will give the maximum area and put this value in Area = b * sqrt(4 - b^2)/4 Area = (sqrt 2 * sqrt 2) / 4 = 1/2 hence b is the answer. 
_________________ My debrief: done-and-dusted-730-q49-v40 Kudos [?]: 258 [0], given: 13 Manager Joined: 21 Jan 2010 Posts: 220 Kudos [?]: 105 [1], given: 38 Re: Maximum Area - No clues [#permalink] ### Show Tags 14 Mar 2010, 09:22 1 KUDOS Can I see it this way? If you know what is function sin, it has a range from -1 to 1: Since area of triangle = 1/2 x (side a x side b x sin C), where C is the angle in between side a and b. The area would be at its maximum when C equals 90 degrees, i.e. sin C = 1. In this case, we can take side a and side b the radii and C 90 degrees: 1/2 x 1 x 1 x 1 = 1/2 Hope this helps. Kudos [?]: 105 [1], given: 38 Manager Joined: 08 Oct 2009 Posts: 64 Kudos [?]: 24 [0], given: 4 Location: Denver, CO WE 1: IT Business Analyst-Building Materials Industry Re: Maximum Area of Inscribed Triangle [#permalink] ### Show Tags 06 Apr 2010, 17:49 I got this right on my test, does my thought process make sense? I know that for a set perimeter of a quadrilateral a square will maximize area, so if you have 16 feet of fence to enclose a garden and want to maximize the area of the garden you would build a square fence around the garden. EX: Perimeter= 16 Area of square=16 Ex: Perimeter of a rectangle with width of 2 and length of 6=16 Area of the rectangle= 12 So for this problem I thought that a 45-45-90 triangle is half of a square therefore this triangle must maxmize the area with given base. Sorry if this is confusing, but is this mathmatically correct? Kudos [?]: 24 [0], given: 4 Senior Manager Joined: 19 Nov 2009 Posts: 311 Kudos [?]: 99 [0], given: 44 Re: GMAT PREP (PS) [#permalink] ### Show Tags 06 May 2010, 13:13 With one of the vertices at the centre, the two sides of the traingle could be perpendicular to each other (2 radii) and the third side joining the two vertices will be the hypotenuse. Hence, the area will be 1/2 * 1 *1 = 1/2 ! _________________ "Success is going from failure to failure without a loss of enthusiam." - Winston Churchill As vs Like - Check this link : http://www.grammar-quizzes.com/like-as.html. Kudos [?]: 99 [0], given: 44 Retired Moderator Joined: 02 Sep 2010 Posts: 793 Kudos [?]: 1209 [0], given: 25 Location: London Re: PS question: need help [#permalink] ### Show Tags 23 Oct 2010, 03:38 satishreddy wrote: ps question Trignometry based solution Note that such a triangle is always isosceles, with two sides=1 (the radius of the circle). Let the third side be b (the base) and the height be h. If you imagine the angle subtended at the centre by the thrid side, and let this angle be x. The base would be given by 2*sin(x/2) and the height by cos(x/2); where x is a number between 0 and 180 The area is therefore, sin(z)*cos(z), where z is between 0 and 90. We can simplify this further as $$sin(z)*\sqrt{1-sin^2(z)}$$, with z between 0 and 90, for which range sin(z) is between 0 and 1. So the answer is maxima of the function $$f(y)=y*\sqrt{1-y^2}$$ with y between 0 and 1. This is equivalent to finding the point which will maximize the square of this function $$g(y)=y^2(1-y^2)$$ which is easy to do taking the first derivative, $$g'(y)=2y-4y^3$$, which gives the point as $$y=\frac{1}{\sqrt{2}}$$. If we plug it into f(y), the answer is area = 0.5 .. Hence answer is (b) Basically the solution above proves that for an isosceles triangle, when the length of the equal sides is fixed, the area is maximum when the triangle is a right angled triangle ($$y=sin(x/2)=\frac{1}{\sqrt{2}}$$ means x=90). 
This is a result you will most liekly see being quoted on alternate solutions. _________________ Kudos [?]: 1209 [0], given: 25 Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7738 Kudos [?]: 17808 [1], given: 235 Location: Pune, India Re: Maximum Area - No clues [#permalink] ### Show Tags 23 Oct 2010, 06:36 1 KUDOS Expert's post Interesting Question! As CalvinHobbes suggested, the easiest way to deal with it might be through the area formula: Area = (1/2)abSinQ a and b are the lengths of two sides of the triangle and Q is the included angle between sides a and b. (It is anyway good to remember this area formula if you are a little comfortable with trigonometry because it could turn your otherwise tricky question into a simple application.) If we want to maximize area, we need to maximize Sin Q since a and b are already 1. Maximum value of Sin Q is 1 which happens when Q = 90 degrees. Therefore, maximum area of the triangle will be (1/2).1.1.1 = (1/2) _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Kudos [?]: 17808 [1], given: 235 Director Joined: 01 Feb 2011 Posts: 725 Kudos [?]: 146 [0], given: 42 Re: Maximum Area - No clues [#permalink] ### Show Tags 11 Jun 2011, 17:59 Area is maximum in an isosceles triangle when angle between two same sides is 90. Maximum area = 1/2 (r)(r) = (1/2) (r^2) = 1/2 Answer is B. Kudos [?]: 146 [0], given: 42 Intern Joined: 05 Aug 2012 Posts: 16 Kudos [?]: 9 [0], given: 8 Location: United States (CO) Concentration: Finance, Economics GMAT Date: 01-15-2014 GPA: 2.62 WE: Research (Investment Banking) Re: What is the greatest possible area of a triangular region [#permalink] ### Show Tags 17 Oct 2013, 18:25 I solved the question the following way.. I gathered the greatest possible triangle has a 90 degree angle where 2 sides meet (each length 1, the radius) This means the 3rd side will be $$\sqrt{2}$$ (90/45/45 rule) It's base will be $$\sqrt{2}$$ and its height will be $$\sqrt{2}$$/$$2$$ So base times height over 2 looks as such- $$\sqrt{2}*\sqrt{2}/2$$ all over 2 which yields 1/2. am I getting the right answer the wrong way? Kudos [?]: 9 [0], given: 8 Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7738 Kudos [?]: 17808 [3], given: 235 Location: Pune, India Re: What is the greatest possible area of a triangular region [#permalink] ### Show Tags 17 Oct 2013, 20:56 3 This post received KUDOS Expert's post bscharm wrote: I solved the question the following way.. I gathered the greatest possible triangle has a 90 degree angle where 2 sides meet (each length 1, the radius) This means the 3rd side will be $$\sqrt{2}$$ (90/45/45 rule) It's base will be $$\sqrt{2}$$ and its height will be $$\sqrt{2}$$/$$2$$ So base times height over 2 looks as such- $$\sqrt{2}*\sqrt{2}/2$$ all over 2 which yields 1/2. am I getting the right answer the wrong way? I think you complicated the question for no reason even though your answer and method, both are correct (though not optimum). The most important part of the question is realizing that the triangle will be a right triangle. Once you did that, you know the two perpendicular sides of the triangle are 1 and 1 (the radii of the circle). The two perpendicular sides can very well be the base and the height. So area = (1/2)*1*1 = 1/2 In fact, this is used sometimes to find the altitude of the right triangle from 90 degree angle to hypotenuse. 
You equate area obtained from using the perpendicular side lengths with area obtained using hypotenuse. In this question, that will be $$(1/2)*1*1 = (1/2)*\sqrt{2}*Altitude$$ You get altitude from this. How to realize it will be a right triangle without knowing the property: You can do that by imagining the situation in which the area will be minimum. When the two sides overlap (i.e the angle between them is 0), the area will be 0 i.e. there will be no triangle. As you keep moving the sides away from each other, the area will increase till it eventually becomes 0 again when the angle between them is 180. So the maximum area between them will be when the angle between the sides is 90. Attachment: Ques3.jpg [ 22.49 KiB | Viewed 11064 times ] _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199 Veritas Prep Reviews Kudos [?]: 17808 [3], given: 235 Current Student Joined: 06 Sep 2013 Posts: 1972 Kudos [?]: 741 [0], given: 355 Concentration: Finance Re: GMAT Prep Triangle/Circle [#permalink] ### Show Tags 25 Apr 2014, 05:51 Bunuel wrote: What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius one and the other two vertices on the circle? Clearly two sides of the triangle will be equal to the radius of 1. Now, fix one of the sides horizontally and consider it to be the base of the triangle. $$area=\frac{1}{2}*base*height=\frac{1}{2}*1*height=\frac{height}{2}$$. So, to maximize the area we need to maximize the height. If you visualize it, you'll see that the height will be maximized when it's also equals to the radius thus coincides with the second side (just rotate the other side to see). which means to maximize the area we should have the right triangle with right angle at the center. $$area=\frac{1}{2}*1*1=\frac{1}{2}$$. You can also refer to other solutions: triangular-region-65317.html Having some trouble figuring out why right isosceles triangle has greater area than equilateral triangle Anyone would mind clarifying this? Cheers! J Kudos [?]: 741 [0], given: 355 Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 7738 Kudos [?]: 17808 [0], given: 235 Location: Pune, India Re: GMAT Prep Triangle/Circle [#permalink] ### Show Tags 27 Apr 2014, 22:39 jlgdr wrote: Bunuel wrote: What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius one and the other two vertices on the circle? Clearly two sides of the triangle will be equal to the radius of 1. Now, fix one of the sides horizontally and consider it to be the base of the triangle. $$area=\frac{1}{2}*base*height=\frac{1}{2}*1*height=\frac{height}{2}$$. So, to maximize the area we need to maximize the height. If you visualize it, you'll see that the height will be maximized when it's also equals to the radius thus coincides with the second side (just rotate the other side to see). which means to maximize the area we should have the right triangle with right angle at the center. $$area=\frac{1}{2}*1*1=\frac{1}{2}$$. You can also refer to other solutions: triangular-region-65317.html Having some trouble figuring out why right isosceles triangle has greater area than equilateral triangle Anyone would mind clarifying this? Cheers! J Couple of ways to think about it: Method 1: Say base of a triangle is 1. Area = (1/2)*base*height = (1/2)*height Say, another side has a fixed length of 1. 
You start with the first figure on top left when two sides are 1 and third side is very small and keep rotating the side of length 1. The altitude keeps increasing. You get an equilateral triangle whose altitude is $$\sqrt{3}/2 * 1$$ which is less than 1. Then you still keep rotating till you get the altitude as 1 (the other side). Now altitude is max so area is max. This is a right triangle. When you rotate further still, the altitude will start decreasing again.

Attachment: Ques3.jpg (figure referenced above)

Method 2: Given in my post above.
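Several posts in this thread argue that with both sides fixed at the radius, the area (1/2)(1)(1)sin θ is largest when the included angle θ is 90°. As a quick numerical confirmation (not from the thread):

```python
import numpy as np

theta = np.linspace(0, np.pi, 1801)      # included angle between the two radii
area = 0.5 * 1 * 1 * np.sin(theta)       # area = (1/2)*a*b*sin(theta) with a = b = 1

best = np.argmax(area)
print(np.degrees(theta[best]))           # 90.0
print(area[best])                        # 0.5
```

The sweep peaks at 90 degrees with area 1/2, matching answer choice (B).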
2017-11-20 06:32:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6934270262718201, "perplexity": 1737.6709377153861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805914.1/warc/CC-MAIN-20171120052322-20171120072322-00006.warc.gz"}
https://www.physicsforums.com/threads/is-it-possible-to-make-non-radiactive-gold.401954/
# Is it possible to make non-radioactive gold?

1. May 9, 2010 ### MaxManus
I have heard that it is possible to make gold today, but it is radioactive. Is it possible to make gold or other non-radioactive chemical elements? We do find non-radioactive chemical elements in nature, so why can't humans make them?

2. May 9, 2010 Staff Emeritus
One could, but the amount of money it would take would never, ever make it worthwhile. If one worked very hard, and had a few million to spend, one might get a few milligrams of gold. A few dollars' worth at best.

3. May 9, 2010 ### MaxManus
Thanks, so it is possible to make non-radioactive gold today? Do you know how?

4. May 9, 2010 Staff Emeritus
Sure... dump a beam of ions into a target of mercury or lead. Wait for the radioactive gold to decay, and then chemically separate the gold. Repeat as needed.

5. May 9, 2010 ### arivero
It is interesting to calculate the energies involved. You get some energy back at the end of the process. The reaction path is Hg 201 --> Pt 197 --> Au 197. If you want to avoid other (radioactive) elements, you need to purify the Hg before. This is probably the most expensive step, and highly dangerous because of the volatility of Hg. Poisoning is almost certain. With a target of lead, I doubt it. With Pb, the energy balance is against you.

6. May 9, 2010 ### Staff: Mentor
I doubt it is more dangerous than dealing with - say - radioactive samples. In both cases you need to take care and use correct tools/procedures, but in both cases it can be done quite safely.

7. May 9, 2010 ### arivero
Is it profitable anyway? Well, the world reserves of mercury are about the same size as the world reserves of gold (as is to be expected, given they are so near each other in the periodic table), so a public fabrication method of gold from mercury is not really relevant; it would only duplicate the reserves, but at the cost of a doubly dangerous process (radiation + chemical poisoning). Thus the patent on the process is not very valuable as a global process. Note that global markets for mercury will be closed in a few years due to environmental and poisoning problems. Could it work as a private, small enterprise? Suppose you can devise a nuclear process with 30% efficiency. That would mean you pay $200 for a kilogram of mercury, which contains 131 grams of Hg 201, and you get 39.3 grams of gold which you can sell at $40/g, so you get a margin of $1372 per kilogram. Can you scale the fabrication at a cost as low as $1300 per kg? I doubt it. Probably you need first to purify the Hg isotope, and the method to purify it is not dissimilar to the methods used to purify uranium. Perhaps you could get some cheap machinery from Iran if they decide to go out of business. Then you need the nuclear process, i.e. you want to induce the alpha disintegration Hg 201 --> Pt 197 + He 4. But you do not want to induce further alpha disintegrations in Pt 197 nor in the beta "by-product" Au 197. And if you have failed in the purification process, you have disturbing absorptions in the other Hg isotopes. While the process is globally exothermic, you need to compensate for the energy lost to alpha particles which are absorbed or moderated in the walls of the container, so you will really need some nuclear source to keep the business going. You need to study the best container. Also, do you want the Hg to be in a liquid state, or could an amorphous solid or even a solid crystal be preferable, in order to keep the alpha particles inducing a chain reaction?
I am afraid that your R&D cost also has to be accounted for.

8. May 10, 2010 ### hamster143 Not going to work. You can't just induce Hg 201 to decay. If you hit it with an electron or a neutron, it's going to convert into something else before decaying. In fact, most isotopes of gold that you can create out of mercury by bombarding it with electrons will just decay right back into mercury. If electrons are out, you're left with neutrons and alphas. And that means you have to go down the periodic table, not up. Unfortunately, the first three elements before gold in the periodic table are even more expensive than gold. But is that such a bad thing, though? You could try to create them instead. Tungsten #74 is relatively cheap ($30/kg). You can bombard it with neutrons to induce beta decay and create Rhenium #75 ($6,000/kg), or with alpha particles to create Osmium #76 ($100,000/kg), and possibly smaller quantities of Iridium #77 ($20,000/kg) and Platinum #78 ($50,000/kg). Neutron sources are quite expensive. Alpha sources are relatively cheap (besides, you can make a simple accelerator to accelerate helium nuclei) but they aren't very efficient, because you'll lose a lot on collisions with electrons. Someone could sit down and run some numbers. The most difficult part is, as you've mentioned, to separate the resulting elements. It's not particularly easy even to separate osmium from iridium and platinum, because all platinoids are chemically similar. But then you'll want to separate radioactive isotopes from non-radioactive ones. And that's no small task. The original separation method employed for uranium by the Americans in the 1940s was to use gas centrifuges. Gas centrifuges only operate with gases, and platinoids don't make compounds that are gaseous at room temperature. Besides, gas centrifuges are insanely expensive to operate in terms of electricity consumption. Modern isotope separation methods are better, but you'll surely attract the attention of the U.S. government (and the Iranians, too).

9. May 10, 2010 ### arivero Yes, I agree that some agencies would be very worried about anyone interested in isotope separation... damn, you are right. I was thinking of some method hitting it with mid-energy alpha particles, in the hope of inducing a "chain reaction" based on alphas. No idea about the calculations. With electrons, let me see: according to http://atom.kaeri.re.kr you have some cross section (z,a) with electrons above 10 MeV. Then Iridium 197 goes beta (2.270 MeV) and then Pt 197 goes beta again (0.719 MeV) to gold 197. Of course the combined energy of the alpha and the two betas will counterbalance the initial 10 MeV, but it is not easy to devise a method to reuse the particles. If we cannot reuse the energy, then for 10^23 atoms the electricity bill is on the order of thousands of US dollars, so no business. We need a cross section for (a,2a) of Hg 201.

10. May 10, 2010 ### hamster143 What we need is some method of starting reactions with minimal energy. To do Tungsten-186 -> Osmium-190, we need 1.4 MeV alphas. To do Tungsten-184 -> Osmium-188, we need 2.2 MeV. In either case that's more than $20,000 worth of electricity per kilogram of product, assuming that we can accelerate alphas to the MeV range, which we can't, and that 100% of our alphas will interact with tungsten nuclei rather than lose energy to bremsstrahlung & ionization, which they won't. So unfortunately it seems that, unless there's some process that can be triggered by cool alphas (say, under 100 keV ...
since that kind of energy can be generated fairly efficiently by high voltage vacuum chambers), alphas are out. Here's a new idea, then. Build a big deuterium fusion reactor. It's going to produce 2.5 MeV neutrons. Find a convenient process to produce platinoids or gold out of cheap elements. Insulate the whole thing with a thick layer of reactant. The challenge is to make the system big enough to convert a measurable quantity of reactant into platinoids or gold within our lifetimes. We should shoot for 10^16 neutrons/second.
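The figures quoted in posts 7, 9 and 10 are easy to sanity-check. The following minimal Python sketch (not from the thread) redoes the arithmetic; the electricity price of $0.10/kWh and the physical constants are assumptions introduced here, everything else comes from the posts.

```python
# Back-of-the-envelope check of the numbers quoted in posts 7, 9 and 10.
# Assumed (not from the thread): electricity at $0.10/kWh, Avogadro constant, 1 MeV = 1.602e-13 J.

J_PER_MEV = 1.602e-13
AVOGADRO = 6.022e23
J_PER_KWH = 3.6e6
USD_PER_KWH = 0.10  # assumed electricity price

def electricity_cost_usd(mev_per_nucleus, product_molar_mass_g, kg_product=1.0):
    """Electricity cost to drive one reaction per product nucleus, ignoring all losses."""
    atoms = kg_product * 1000.0 / product_molar_mass_g * AVOGADRO
    joules = atoms * mev_per_nucleus * J_PER_MEV
    return joules / J_PER_KWH * USD_PER_KWH

# Post 7: gold-from-mercury margin (30% efficiency on the Hg-201 fraction of natural Hg).
hg201_g_per_kg = 131.0            # grams of Hg-201 per kg of natural mercury (thread's figure)
gold_g = 0.30 * hg201_g_per_kg    # ~39.3 g of Au-197
print("gold yield  :", round(gold_g, 1), "g")
print("margin      :", round(gold_g * 40 - 200), "USD per kg of Hg")   # ~1372 USD

# Post 9: 10 MeV per atom for 1e23 atoms -> "thousands of dollars" of electricity.
joules = 1e23 * 10 * J_PER_MEV
print("post 9 bill :", round(joules / J_PER_KWH * USD_PER_KWH), "USD")  # ~4450 USD

# Post 10: W-184 + alpha -> Os-188 at 2.2 MeV per nucleus, per kg of osmium.
print("post 10 bill:", round(electricity_cost_usd(2.2, 188.0)), "USD per kg Os")  # ~31000 USD, i.e. ">$20,000"
```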
2018-03-19 15:02:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5434618592262268, "perplexity": 1423.5186098182144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646952.38/warc/CC-MAIN-20180319140246-20180319160246-00503.warc.gz"}
http://mathhelpforum.com/algebra/100481-rxt-d.html
# Math Help - rxt=d

1. ## rxt=d

Bobby and Rick are in a 10-lap race on a one mile oval track. Bobby, averaging 90 mph, has completed two laps just as Rick is getting his car onto the track. What speed does Rick have to average to be even with Bobby at the end of the tenth lap? *Hint: Bobby does 8 miles in the same time as Rick does 10 miles. I am lost

2. Originally Posted by ddadams Bobby and Rick are in a 10-lap race on a one mile oval track. Bobby, averaging 90 mph, has completed two laps just as Rick is getting his car onto the track. What speed does Rick have to average to be even with Bobby at the end of the tenth lap? *Hint: Bobby does 8 miles in the same time as Rick does 10 miles. I am lost

A way to work it. The track is 1 mile in length. At 90 mph, how many laps (it's a 1 mile track) will Bobby make in 1 hour? You should say 90. If he only drives for 1/2 hour, how many miles? If he drives for 1/10 of an hour, how many miles? At 90 mph, how long will it take Bobby to make 1 lap around the track? <-- key question. Call this answer T1. How long will it take Bobby to make 2 laps around the track? <-- IMPORTANT ANSWER Call this answer T2. How long will it take Bobby to make 10 laps around the track? <-- Bobby's TOTAL time on the course. Call this answer T3. How long will Rick be on the track? Since he will complete the course at the same time as Bobby, Rick's time must be T3-T2. Call that time t. Distance = Rate x Time. Rick's speed: 10 miles = Z mph x t hours (omitting the units of measure) $10 = z \times t$ $\dfrac{10}{t} = z$

3. Originally Posted by ddadams Bobby and Rick are in a 10-lap race on a one mile oval track. Bobby, averaging 90 mph, has completed two laps just as Rick is getting his car onto the track. What speed does Rick have to average to be even with Bobby at the end of the tenth lap? *Hint: Bobby does 8 miles in the same time as Rick does 10 miles. I am lost

The point of the hint is that Bobby has already completed 2 miles (2 laps on a one mile track) when Rick starts. Since the race is 10 miles long, he has only 8 miles left to finish the race. Of course, Rick must do the entire 10 miles in the same time. At 90 mph, it will take Bobby 8/90 = 4/45 hours to finish the race. Rick must go 10 miles in 4/45 hours, so his speed must be 10/(4/45) = 10(45/4) = 5(45/2) = 225/2 = 112.5 mph.
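A quick numeric check of the final answer (all values come from the thread; nothing new is assumed):

```python
# Verify the 112.5 mph answer with the same numbers used in the thread.
bobby_speed = 90.0            # mph
miles_left_for_bobby = 8.0    # Bobby has 8 of 10 miles left when Rick starts
t = miles_left_for_bobby / bobby_speed   # hours both drivers are on track together (= 4/45)
rick_speed = 10.0 / t                    # Rick must cover all 10 miles in that time
print(t, rick_speed)                     # 0.0888... h, 112.5 mph
```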
2015-02-26 23:20:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5556926727294922, "perplexity": 4811.773424561865}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936459277.13/warc/CC-MAIN-20150226074059-00236-ip-10-28-5-156.ec2.internal.warc.gz"}
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Chemical_Equilibria/Calculating_An_Equilibrium_Concentrations/Calculating_an_Equilibrium_Constant_Using_Partial_Pressures
# Calculating an Equilibrium Constant Using Partial Pressures The equilibrium constant is known as $$K_{eq}$$. A common example of $$K_{eq}$$ is with the reaction: $aA + bB \rightleftharpoons cC + dD$ $K_{eq} = \dfrac{[C]^c[D]^d}{[A]^a[B]^b}$ where: • At equilibrium, [A], [B], [C], and [D] are either the molar concentrations or partial pressures. • Products are in the numerator. Reactants are in the denominator. • The exponents are the coefficients (a,b,c,d) in the balanced equation. • Solids and pure liquids are omitted. This is because the activities of pure liquids and solids are equal to one, therefore the numerical value of equilibrium constant is the same with and without the values for pure solids and liquids. • $$K_{eq}$$ does not have units. This is because when calculating activity for a specific reactant or product, the units cancel. So when calculating $$K_{eq}$$, one is working with activity values with no units, which will bring about a $$K_{eq}$$ value with no units. Various $$K_{eq}$$ All the equilibrium constants tell the relative amounts of products and reactants at equilibrium. For any reversible reaction, there can be constructed an equilibrium constant to describe the equilibrium conditions for that reaction. Since there are many different types of reversible reactions, there are many different types of equilibrium constants: • $$K_{c}$$: constant for molar concentrations • $$K_{p}$$: constant for partial pressures • $$K_{sp}$$: solubility product • $$K_{a}$$: acid dissociation constant for weak acids • $$K_{b}$$: base dissociation constant for weak bases • $$K_{w}$$: describes the ionization of water ($$K_{w} = 1 \times 10^{-14}$$) ### Calculating Kp Referring to equation: $aA + bB \rightleftharpoons cC + dD$ $K_p = \dfrac{(P_C)^c(P_D)^d}{(P_A)^a(P_B)^b}$ Partial Pressures: In a mixture of gases, it is the pressure an individual gas exerts. The partial pressure is independent of other gases that may be present in a mixture. According to the ideal gas law, partial pressure is inversely proportional to volume. It is also directly proportional to moles and temperature. Example $$\PageIndex{1}$$ At equilibrium in the following reaction at room temperature, the partial pressures of the gases are found to be $$P_{N_2}$$ = 0.094 atm, $$P_{H_2}$$ = 0.039 atm, and $$P_{NH_3}$$ = 0.003 atm. $\ce{N_2 (g) + 3 H_2 (g) \rightleftharpoons 2 NH_3 (g)} \nonumber$ What is the $$K_p$$ for the reaction? SOLUTION First, write $$K_{eq}$$ (equilibrium constant expression) in terms of activities. $K = \dfrac{(a_{NH_3})^2}{(a_{N_2})(a_{H_2})^3} \nonumber$ Then, replace the activities with the partial pressures in the equilibrium constant expression. $K_p = \dfrac{(P_{NH_3})^2}{(P_{N_2})(P_{H_2})^3} \nonumber$ Finally, substitute the given partial pressures into the equation. $K_p = \dfrac{(0.003)^2}{(0.094)(0.039)^3} = 1.61 \nonumber$ Example $$\PageIndex{2}$$ At equilibrium in the following reaction at 303 K, the total pressure is 0.016 atm while the partial pressure of $$P_{H_2}$$ is found to be 0.013 atm. $\ce{3 Fe_2O_3 (s) + H_2 (g) \rightleftharpoons 2 Fe_3O_4 (s) + H_2O (g)} \nonumber$ What is the $$K_p$$ for the reaction? SOLUTION First, calculate the partial pressure for $$\ce{H2O}$$ by subtracting the partial pressure of $$\ce{H2}$$ from the total pressure. \begin{align*} P_{H_2O} &= {P_{total}-P_{H_2}} \\[5pt] &= (0.016-0.013) \; atm \\[5pt] &= 0.003 \; atm \end{align*} Then, write K (equilibrium constant expression) in terms of activities. 
Remember that solids and pure liquids are ignored. $K = \dfrac{(a_{H_2O})}{(a_{H_2})}\nonumber$ Then, replace the activities with the partial pressures in the equilibrium constant expression. $K_p = \dfrac{(P_{H_2O})}{(P_{H_2})}\nonumber$ Finally, substitute the given partial pressures into the equation. $K_p = \dfrac{(0.003)}{(0.013)} = 0.23 \nonumber$

Example $$\PageIndex{3}$$

A flask initially contained hydrogen sulfide at a pressure of 5.00 atm at 313 K. When the reaction reached equilibrium, the partial pressure of sulfur vapor was found to be 0.15 atm. $\ce{2 H_2S (g) \rightleftharpoons 2 H_2 (g) + S_2 (g) } \nonumber$ What is the $$K_p$$ for the reaction?

SOLUTION

For this kind of problem, ICE tables are used.

|  | $$\ce{2H2S (g)}$$ | $$\ce{2H2(g)}$$ | $$\ce{S2(g)}$$ |
| --- | --- | --- | --- |
| Initial amounts | 5.00 atm | 0 atm | 0 atm |
| Change in amounts | -0.3 atm | +0.3 atm | +0.15 atm |
| Equilibrium amounts | 4.7 atm | 0.3 atm | 0.15 atm |

Now, set up the equilibrium constant expression, $$K_p$$. $K_p = \dfrac{(P_{H_2})^2(P_{S_2})}{(P_{H_2S})^2} \nonumber$ Finally, substitute the calculated partial pressures into the equation. \begin{align*} K_p &= \dfrac{(0.3)^2(0.15)}{(4.7)^2} \\[5pt] &= 6.11 \times 10^{-4} \end{align*}
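For readers who want to verify the three worked examples, here is a minimal Python sketch (not part of the LibreTexts page) that redoes the arithmetic with the pressures given above, treating $$K_p$$ as dimensionless:

```python
# Example 1: N2 + 3 H2 <=> 2 NH3
p_n2, p_h2, p_nh3 = 0.094, 0.039, 0.003
kp1 = p_nh3**2 / (p_n2 * p_h2**3)
print(round(kp1, 2))          # ~1.61

# Example 2: 3 Fe2O3(s) + H2 <=> 2 Fe3O4(s) + H2O   (solids omitted from Kp)
p_h2o = 0.016 - 0.013         # total pressure minus P_H2
kp2 = p_h2o / 0.013
print(round(kp2, 2))          # ~0.23

# Example 3: 2 H2S <=> 2 H2 + S2, using the ICE-table equilibrium pressures
p_h2s, p_h2, p_s2 = 4.7, 0.3, 0.15
kp3 = p_h2**2 * p_s2 / p_h2s**2
print(f"{kp3:.2e}")           # ~6.11e-04
```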
2019-01-22 11:20:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9900519847869873, "perplexity": 1926.4778027975321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583835626.56/warc/CC-MAIN-20190122095409-20190122121409-00555.warc.gz"}
http://math.stackexchange.com/questions/229552/identifying-series-of-coordinates
# Identifying series of coordinates.

I have two coordinates (latitude, longitude) which define a line on the map. The line has a direction. Then I have a series of other coordinates moving either roughly along this line in its direction, or in the opposite direction. They can also cross the line, etc. The important part is that I need to identify only the case where the series of points is moving along the line and in its direction. The series are GPS coordinates. Let's also define a threshold in meters for deciding that a point "belongs" to the line; this accounts for GPS error and line error. I am looking for a formula which I could apply to each point, so that after applying it twice I can identify the direction and continue until the end of the line. Below is a drawing (sorry for the finger quality), hopefully demonstrating what I mean. Green dots are a series moving in the line's direction; they are what we need to identify. The top blue dots are a series moving in the opposite direction. I also want to emphasize that those lines are not really straight lines, and maybe we need to consider their curvature. - Do I have to start solving this by converting spherical coordinates into Cartesian? – Pablo Nov 5 '12 at 15:21
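No answer is recorded in this extract, so the following Python sketch is only one possible approach (an assumption added here, not the asker's or any answerer's method): project each GPS fix onto the directed line with a local flat-earth approximation, then require the cross-track distance to stay under the threshold while the along-track coordinate increases from fix to fix. The function names and the 30 m default threshold are illustrative.

```python
import math

EARTH_R = 6_371_000.0  # metres

def to_local_xy(lat, lon, lat0, lon0):
    """Equirectangular projection to metres around a reference point (fine for short lines)."""
    x = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_R
    return x, y

def along_and_cross(p, a, b):
    """Along-track coordinate (metres from A towards B) and unsigned cross-track distance of point p."""
    ax, ay = to_local_xy(*a, *a)
    bx, by = to_local_xy(*b, *a)
    px, py = to_local_xy(*p, *a)
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length            # unit vector along the directed line A -> B
    along = (px - ax) * ux + (py - ay) * uy      # signed distance along the line
    cross = abs((px - ax) * uy - (py - ay) * ux) # perpendicular distance to the line
    return along, cross

def moving_with_line(fixes, a, b, threshold_m=30.0):
    """True if every fix lies within the threshold of the line and the along-track value increases."""
    coords = [along_and_cross(p, a, b) for p in fixes]
    on_line = all(cross <= threshold_m for _, cross in coords)
    forward = all(c2[0] > c1[0] for c1, c2 in zip(coords, coords[1:]))
    return on_line and forward
```

Curved "lines" could be handled the same way by splitting them into short segments and applying the test segment by segment.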
2016-05-02 06:18:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8792268633842468, "perplexity": 718.8155489941224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125175.9/warc/CC-MAIN-20160428161525-00040-ip-10-239-7-51.ec2.internal.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/jcd.2021020
# American Institute of Mathematical Sciences

April 2022, 9(2): 207-238. doi: 10.3934/jcd.2021020

## A virtual element generalization on polygonal meshes of the Scott-Vogelius finite element method for the 2-D Stokes problem

1 T-5 Applied Mathematics and Plasma Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
2 Dipartimento di Ingegneria Civile, Edile e Ambientale - ICEA, Università di Padova, 35131 Padova, Italy

* Corresponding author: G. Manzini

Received March 2021. Revised July 2021. Published April 2022. Early access December 2021.

Fund Project: Dr. G. Manzini was supported by the LDRD-ER program of Los Alamos National Laboratory under project number 20180428ER. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001)

The Virtual Element Method (VEM) is a Galerkin approximation method that extends the Finite Element Method (FEM) to polytopal meshes. In this paper, we present a conforming formulation that generalizes the Scott-Vogelius finite element method for the numerical approximation of the Stokes problem to polygonal meshes in the framework of the virtual element method. In particular, we consider a straightforward application of the virtual element approximation space for scalar elliptic problems to the vector case and approximate the pressure variable through discontinuous polynomials. We assess the effectiveness of the numerical approximation by investigating the convergence on a manufactured solution problem and a set of representative polygonal meshes. We numerically show that this formulation is convergent with optimal convergence rates except for the lowest-order case on triangular meshes, where the method coincides with the ${\mathbb{P}}_{{1}}-{\mathbb{P}}_{{0}}$ Scott-Vogelius scheme, and on square meshes, which are situations that are well-known to be unstable.

Citation: Gianmarco Manzini, Annamaria Mazzia. A virtual element generalization on polygonal meshes of the Scott-Vogelius finite element method for the 2-D Stokes problem. Journal of Computational Dynamics, 2022, 9 (2) : 207-238. doi: 10.3934/jcd.2021020

Figure captions: Degrees of freedom of each component of the virtual element vector-valued fields of ${\bf{V}}^{h}_{k}( {\rm{E}})$ (left) and the scalar polynomial fields of $Q^{ h}_{\underline{k}}( {\rm{E}})$ (right) of an hexagonal element for the accuracy degrees $k = 1,2,3$ and $\underline{k} = k-1$. Vertex values and edge polynomial moments are marked by a circular bullet; cell polynomial moments are marked by a square bullet. Base meshes (top row) and first refinement meshes (bottom row) of the three mesh families used in the general convergence tests: $\mathcal{M}{1}$: randomly quadrilateral meshes; $\mathcal{M}{2}$: general polygonal meshes; $\mathcal{M}{3}$: concave element meshes. Base meshes (top row) and first refinement meshes (bottom row) of the three mesh families used to investigate convergence and stability of the lowest-order scheme: $\mathcal{M}{4}$: diagonal triangle meshes; $\mathcal{M}{5}$: criss-cross triangle meshes; $\mathcal{M}{6}$: square meshes. Convergence curves for $k = 1,\ldots,6$ and $\underline{k} = k-1$ versus the mesh size parameter $h$ for the velocity approximation measured using the energy norm (55) (top panels) and the $L^2$-norm (56) (mid panels), and for the pressure approximation measured using the $L^2$-norm (57) (bottom panels).
Blue lines with circles represent the error curves using the enhanced virtual element space (14), and, accordingly, the right-hand is approximated by using the projection operator $\Pi^{0, {\rm{E}}}_{{k}}$. The mesh families used in each calculations are shown in the left corner of each panel. The expected convergence slopes and rates are shown by the triangles and corresponding numerical labels Convergence curves for $k = 1,\ldots,6$ and $\underline{k} = k-1$ versus the mesh size parameter $h$ for the velocity approximation measured using the energy norm (55) (top panels) and the $L^2$-norm (56) (mid panels), and for the pressure approximation measured using the $L^2$-norm (57) (bottom panels). Blue lines with circles represent the error curves using the virtual element space (13), and, accordingly, the right-hand is approximated by using the projection operator $\Pi^{0, {\rm{E}}}_{{ \bar{k}}}$ with $\bar{k} = max(0,k-2)$. A loss of accuracy for $k = 2$ in the $L^2$-norm error curves is visible. The mesh families used in each calculations are shown in the left corner of each panel. The expected convergence slopes and rates are shown by the triangles and corresponding numeric labels Values of the inf-sup constant $\beta$ versus the mesh size parameter $h$. The lines with circles represent the values of $\beta$ with $k = 1$ and $\underline{k} = 0$. The mesh families used in each calculations are shown in the bottom-left corner of each panel Values of the inf-sup constant $\beta$ versus the mesh size parameter $h$. Blue lines with circles represent the values of $\beta$ with $k = 2$ and $\underline{k} = 0$. Red lines with circles represent the values of $\beta$ when $k = 2$ and $\underline{k} = 1$. Black, green and magenta lines with triangles are associate to $k = 3$ and $\underline{k} = 0$, $\underline{k} = 1$ and $\underline{k} = 2$, respectively. The mesh families used in each calculations are shown in the bottom-left corner of each panel Convergence curves for $k = 2,\ldots,6$, and $\underline{k} = k-1$ versus the mesh size parameter $h$ for the velocity approximation measured using the energy norm (55) (top panels) and the $L^2$-norm (56) (mid panels), and for the pressure approximation measured using the $L^2$-norm (57) (bottom panels). The results with $k = 1$ are not reported because there is no convergence. The lines with circles represent the error curves using the enhanced virtual element space (14). The right-hand side is approximated by using the projection operator $\Pi^{0, {\rm{E}}}_{{k}}$, i.e., the "enhanced" definition of the virtual element space given in (14). The mesh families used in each calculations are shown in the left corner of each panel. The expected convergence slopes and rates are shown by the triangles and corresponding numerical labels Convergence curves for $k = 2$, $\underline{k} = 0$, versus the mesh size parameter $h$ for the velocity approximation measured using the energy norm (55) (top panels) and the $L^2$-norm (56) (mid panels), and for the pressure approximation measured using the $L^2$-norm (57) (bottom panels). Blue lines with circles represent the error curves for the formulation using the enhanced virtual element space (14) with the right-hand side approximated by using the projection operator $\Pi^{0}_{{2}}$. Red lines with circles represent the error curve with the right-hand side approximated by using the projection operator $\Pi^{0}_{{0}}$. The mesh families used in each calculations are shown in the left corner of each panel. 
The convergence slopes and rates are shown by the triangles and corresponding numeric labels.

Diameter $h$ of each grid of the six mesh families $\mathcal{M}{1}$-$\mathcal{M}{6}$:

| Level | $\mathcal{M}{1}$ | $\mathcal{M}{2}$ | $\mathcal{M}{3}$ | $\mathcal{M}{4}$ | $\mathcal{M}{5}$ | $\mathcal{M}{6}$ |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | $3.72 \cdot 10^{-1}$ | $4.26 \cdot 10^{-1}$ | $3.81 \cdot 10^{-1}$ | $7.07 \cdot 10^{-1}$ | $5.00 \cdot 10^{-1}$ | $3.53 \cdot 10^{-1}$ |
| 2 | $1.99 \cdot 10^{-1}$ | $2.50 \cdot 10^{-1}$ | $1.91 \cdot 10^{-1}$ | $3.53 \cdot 10^{-1}$ | $2.50 \cdot 10^{-1}$ | $1.77 \cdot 10^{-1}$ |
| 3 | $1.01 \cdot 10^{-1}$ | $1.25 \cdot 10^{-1}$ | $9.54 \cdot 10^{-2}$ | $1.77 \cdot 10^{-1}$ | $1.25 \cdot 10^{-1}$ | $8.84 \cdot 10^{-2}$ |
| 4 | $5.17 \cdot 10^{-2}$ | $6.21 \cdot 10^{-2}$ | $4.77 \cdot 10^{-2}$ | $8.84 \cdot 10^{-2}$ | $6.25 \cdot 10^{-1}$ | $4.42 \cdot 10^{-2}$ |
| 5 | $2.61 \cdot 10^{-2}$ | $3.41 \cdot 10^{-2}$ | $2.38 \cdot 10^{-2}$ | $4.42 \cdot 10^{-2}$ | $3.12 \cdot 10^{-2}$ | $2.21 \cdot 10^{-2}$ |

Number of elements $N_{el}$ and vertices $N$ of each grid of the three mesh families $\mathcal{M}{1}$-$\mathcal{M}{3}$:

| Level | $\mathcal{M}{1}$ $N_{el}$ | $\mathcal{M}{1}$ $N$ | $\mathcal{M}{2}$ $N_{el}$ | $\mathcal{M}{2}$ $N$ | $\mathcal{M}{3}$ $N_{el}$ | $\mathcal{M}{3}$ $N$ |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 16 | 25 | 22 | 46 | 16 | 73 |
| 2 | 64 | 81 | 84 | 171 | 64 | 305 |
| 3 | 256 | 289 | 312 | 628 | 256 | 1249 |
| 4 | 1024 | 1089 | 1202 | 2406 | 1024 | 5057 |
| 5 | 4096 | 4225 | 4772 | 9547 | 4096 | 20353 |

Number of elements $N_{el}$ and vertices $N$ of each grid of the three mesh families $\mathcal{M}{4}$-$\mathcal{M}{6}$:

| Level | $\mathcal{M}{4}$ $N_{el}$ | $\mathcal{M}{4}$ $N$ | $\mathcal{M}{5}$ $N_{el}$ | $\mathcal{M}{5}$ $N$ | $\mathcal{M}{6}$ $N_{el}$ | $\mathcal{M}{6}$ $N$ |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 8 | 9 | 16 | 13 | 16 | 25 |
| 2 | 32 | 25 | 64 | 41 | 64 | 81 |
| 3 | 128 | 81 | 256 | 145 | 256 | 289 |
| 4 | 512 | 289 | 1024 | 545 | 1024 | 1089 |
| 5 | 2048 | 1089 | 4096 | 2113 | 4096 | 4225 |

Size, rank and kernel's dimension of matrix B when $k = 1$ for the mesh family $\mathcal{M}{4}$:

| Level | size($B$) | $rank(B)$ | $kernel(B)$ |
| --- | --- | --- | --- |
| 1 | $2\times 8$ | 2 | 6 |
| 2 | $18\times 32$ | 18 | 14 |
| 3 | $98\times 128$ | 98 | 30 |
| 4 | $450\times 512$ | 450 | 62 |
| 5 | $1922\times 2048$ | 1922 | 126 |
| 6 | $7938\times 8192$ | 7938 | 254 |

Size, rank and kernel's dimension of matrix B when $k = 1$ for the mesh family $\mathcal{M}{5}$:

| Level | size($B$) | $rank(B)$ | $kernel(B)$ |
| --- | --- | --- | --- |
| 1 | $10\times 16$ | 10 | 6 |
| 2 | $50\times 64$ | 46 | 18 |
| 3 | $226\times 256$ | 190 | 66 |
| 4 | $962\times 1024$ | 766 | 258 |
| 5 | $3970\times 4096$ | 3070 | 1026 |
| 6 | $16130\times 16384$ | 12286 | 4098 |

Size, rank and kernel's dimension of matrix B when $k = 1$ for the mesh family $\mathcal{M}{6}$:

| Level | size($B$) | $rank(B)$ | $kernel(B)$ |
| --- | --- | --- | --- |
| 1 | $18\times 16$ | 14 | 2 |
| 2 | $98\times 64$ | 62 | 2 |
| 3 | $450\times 256$ | 254 | 2 |
| 4 | $1922\times 1024$ | 1022 | 2 |
| 5 | $7938\times 4096$ | 4094 | 2 |
| 6 | $32258\times 16384$ | 16382 | 2 |
2022-05-26 05:17:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.590962290763855, "perplexity": 1295.793017383536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00042.warc.gz"}
http://openstudy.com/updates/4f0f4258e4b04f0f8a918eb7
## meow18 Group Title

Express the radical in simplest form: the square root of 40
A. 2 and the square root of 20
B. 4 and the square root of 10
C. 2 and the square root of 10

2 years ago

1. wasiqss b
2. wasiqss sorry c
3. jim_thompson5910 $\Large \sqrt{40}$ $\Large \sqrt{4*10}$ $\Large \sqrt{4}*\sqrt{10}$ $\Large 2\sqrt{10}$ So choice C
4. Mr.crazzy right ans is C
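A quick check of the accepted answer (not part of the original thread), using SymPy:

```python
from sympy import sqrt

print(sqrt(40))   # 2*sqrt(10)  -> choice C
```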
2014-11-23 14:51:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8591668605804443, "perplexity": 1474.310400453166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379520.45/warc/CC-MAIN-20141119123259-00196-ip-10-235-23-156.ec2.internal.warc.gz"}
https://dev.heuristiclab.com/trac.fcgi/wiki/Documentation/Howto/ImplementAHLWebAppPlugin?version=4
Version 4 (modified by dglaser, 4 years ago) (diff) --

# How to Implement A New HeuristicLab WebApp Plugin

In the following guide we will create a simple HeuristicLab WebApp plugin.

## A new project

We will now create a new Visual Studio project which will become the WebApp plugin.

• In Visual Studio select "File > New > Project..." or press <Ctrl+Shift+N>
• Use the "Class Library" template
• Use "HeuristicLab.Services.WebApp.Example" as name

## Configuring the project

### References

Every plugin has at least a reference to Microsoft ASP.NET Web API, which is needed to create a controller.

• Open the NuGet Package Manager and add a reference to Microsoft ASP.NET Web API
• If database access is required, add project references to HeuristicLab.Services.Hive and HeuristicLabServices.DataAccess
• Set the "Copy Local" property on these references to false. These references are provided by the WebApp and thus don't have to be deployed with every plugin.

### Folder structure

HeuristicLab WebApp plugins use the following folder structure, which is recommended but not mandatory. The WebApp folder contains all client files, whereas the WebApi folder contains all data controllers.

## Creating the Plugin configuration file

Every plugin requires a configuration file, which is used to register the views of the plugin with their associated controllers. The configuration file is also used to dynamically build the sidebar menu. The name of the plugin configuration file has to be {pluginname}.js. The following template can be used to create a new configuration file:

    var appPluginNamePlugin = app.registerPlugin('pluginname');
    (function () {
        var plugin = appPluginNamePlugin;
        plugin.dependencies = ['ngResource'];
        plugin.files = [];
        plugin.view = '';
        plugin.controller = '';
        plugin.routes = [];
        var menu = app.getMenu();
        var section = menu.getSection('Menu', 1);
        section.addEntry({ name: 'Pluginname', route: '#/pluginname' });
    })();

• plugin.dependencies specifies the required AngularJS modules
• plugin.files specifies the files that will be dynamically loaded when a view of the plugin is accessed
• plugin.view is the main view of the plugin
• plugin.controller is the controller of the main view
• plugin.routes is used to register new routes (e.g. more than one view)
• app.getMenu() is used to add an entry in the sidebar menu

The view and controller properties are required; all other properties are optional.

## Example Plugin

The https://dev.heuristiclab.com/trac.fcgi/browser/trunk/sources/HeuristicLab.Services.WebApp.Status plugin can be used as a good example of how to structure and configure a WebApp plugin.

## Conventions

• Controllers have to be derived from ApiController and their names have to end with Controller to be discovered by the WebApp
• The plugin assembly name has to match the HeuristicLab.Services.WebApp.{pluginname}*.dll pattern
• The plugin configuration file has to be {pluginname}.js

### Attachments (5)

Download all attachments as: .zip
2019-08-25 10:23:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19554592669010162, "perplexity": 6974.2044936446555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323246.35/warc/CC-MAIN-20190825084751-20190825110751-00312.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/432/He3s4C16.html
Copied to clipboard ## G = He3⋊4C16order 432 = 24·33 ### 2nd semidirect product of He3 and C16 acting via C16/C8=C2 Series: Derived Chief Lower central Upper central Derived series C1 — C3 — He3 — He3⋊4C16 Chief series C1 — C3 — C32 — He3 — C2×He3 — C4×He3 — C8×He3 — He3⋊4C16 Lower central He3 — He3⋊4C16 Upper central C1 — C24 Generators and relations for He34C16 G = < a,b,c,d | a3=b3=c3=d16=1, ab=ba, cac-1=ab-1, dad-1=a-1, bc=cb, bd=db, dcd-1=c-1 > Smallest permutation representation of He34C16 On 144 points Generators in S144 (17 55 128)(18 113 56)(19 57 114)(20 115 58)(21 59 116)(22 117 60)(23 61 118)(24 119 62)(25 63 120)(26 121 64)(27 49 122)(28 123 50)(29 51 124)(30 125 52)(31 53 126)(32 127 54)(65 83 105)(66 106 84)(67 85 107)(68 108 86)(69 87 109)(70 110 88)(71 89 111)(72 112 90)(73 91 97)(74 98 92)(75 93 99)(76 100 94)(77 95 101)(78 102 96)(79 81 103)(80 104 82) (1 45 131)(2 46 132)(3 47 133)(4 48 134)(5 33 135)(6 34 136)(7 35 137)(8 36 138)(9 37 139)(10 38 140)(11 39 141)(12 40 142)(13 41 143)(14 42 144)(15 43 129)(16 44 130)(17 55 128)(18 56 113)(19 57 114)(20 58 115)(21 59 116)(22 60 117)(23 61 118)(24 62 119)(25 63 120)(26 64 121)(27 49 122)(28 50 123)(29 51 124)(30 52 125)(31 53 126)(32 54 127)(65 105 83)(66 106 84)(67 107 85)(68 108 86)(69 109 87)(70 110 88)(71 111 89)(72 112 90)(73 97 91)(74 98 92)(75 99 93)(76 100 94)(77 101 95)(78 102 96)(79 103 81)(80 104 82) (1 119 96)(2 81 120)(3 121 82)(4 83 122)(5 123 84)(6 85 124)(7 125 86)(8 87 126)(9 127 88)(10 89 128)(11 113 90)(12 91 114)(13 115 92)(14 93 116)(15 117 94)(16 95 118)(17 38 71)(18 72 39)(19 40 73)(20 74 41)(21 42 75)(22 76 43)(23 44 77)(24 78 45)(25 46 79)(26 80 47)(27 48 65)(28 66 33)(29 34 67)(30 68 35)(31 36 69)(32 70 37)(49 134 105)(50 106 135)(51 136 107)(52 108 137)(53 138 109)(54 110 139)(55 140 111)(56 112 141)(57 142 97)(58 98 143)(59 144 99)(60 100 129)(61 130 101)(62 102 131)(63 132 103)(64 104 133) (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64)(65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96)(97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112)(113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128)(129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144) G:=sub<Sym(144)| (17,55,128)(18,113,56)(19,57,114)(20,115,58)(21,59,116)(22,117,60)(23,61,118)(24,119,62)(25,63,120)(26,121,64)(27,49,122)(28,123,50)(29,51,124)(30,125,52)(31,53,126)(32,127,54)(65,83,105)(66,106,84)(67,85,107)(68,108,86)(69,87,109)(70,110,88)(71,89,111)(72,112,90)(73,91,97)(74,98,92)(75,93,99)(76,100,94)(77,95,101)(78,102,96)(79,81,103)(80,104,82), (1,45,131)(2,46,132)(3,47,133)(4,48,134)(5,33,135)(6,34,136)(7,35,137)(8,36,138)(9,37,139)(10,38,140)(11,39,141)(12,40,142)(13,41,143)(14,42,144)(15,43,129)(16,44,130)(17,55,128)(18,56,113)(19,57,114)(20,58,115)(21,59,116)(22,60,117)(23,61,118)(24,62,119)(25,63,120)(26,64,121)(27,49,122)(28,50,123)(29,51,124)(30,52,125)(31,53,126)(32,54,127)(65,105,83)(66,106,84)(67,107,85)(68,108,86)(69,109,87)(70,110,88)(71,111,89)(72,112,90)(73,97,91)(74,98,92)(75,99,93)(76,100,94)(77,101,95)(78,102,96)(79,103,81)(80,104,82), 
(1,119,96)(2,81,120)(3,121,82)(4,83,122)(5,123,84)(6,85,124)(7,125,86)(8,87,126)(9,127,88)(10,89,128)(11,113,90)(12,91,114)(13,115,92)(14,93,116)(15,117,94)(16,95,118)(17,38,71)(18,72,39)(19,40,73)(20,74,41)(21,42,75)(22,76,43)(23,44,77)(24,78,45)(25,46,79)(26,80,47)(27,48,65)(28,66,33)(29,34,67)(30,68,35)(31,36,69)(32,70,37)(49,134,105)(50,106,135)(51,136,107)(52,108,137)(53,138,109)(54,110,139)(55,140,111)(56,112,141)(57,142,97)(58,98,143)(59,144,99)(60,100,129)(61,130,101)(62,102,131)(63,132,103)(64,104,133), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128)(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144)>; G:=Group( (17,55,128)(18,113,56)(19,57,114)(20,115,58)(21,59,116)(22,117,60)(23,61,118)(24,119,62)(25,63,120)(26,121,64)(27,49,122)(28,123,50)(29,51,124)(30,125,52)(31,53,126)(32,127,54)(65,83,105)(66,106,84)(67,85,107)(68,108,86)(69,87,109)(70,110,88)(71,89,111)(72,112,90)(73,91,97)(74,98,92)(75,93,99)(76,100,94)(77,95,101)(78,102,96)(79,81,103)(80,104,82), (1,45,131)(2,46,132)(3,47,133)(4,48,134)(5,33,135)(6,34,136)(7,35,137)(8,36,138)(9,37,139)(10,38,140)(11,39,141)(12,40,142)(13,41,143)(14,42,144)(15,43,129)(16,44,130)(17,55,128)(18,56,113)(19,57,114)(20,58,115)(21,59,116)(22,60,117)(23,61,118)(24,62,119)(25,63,120)(26,64,121)(27,49,122)(28,50,123)(29,51,124)(30,52,125)(31,53,126)(32,54,127)(65,105,83)(66,106,84)(67,107,85)(68,108,86)(69,109,87)(70,110,88)(71,111,89)(72,112,90)(73,97,91)(74,98,92)(75,99,93)(76,100,94)(77,101,95)(78,102,96)(79,103,81)(80,104,82), (1,119,96)(2,81,120)(3,121,82)(4,83,122)(5,123,84)(6,85,124)(7,125,86)(8,87,126)(9,127,88)(10,89,128)(11,113,90)(12,91,114)(13,115,92)(14,93,116)(15,117,94)(16,95,118)(17,38,71)(18,72,39)(19,40,73)(20,74,41)(21,42,75)(22,76,43)(23,44,77)(24,78,45)(25,46,79)(26,80,47)(27,48,65)(28,66,33)(29,34,67)(30,68,35)(31,36,69)(32,70,37)(49,134,105)(50,106,135)(51,136,107)(52,108,137)(53,138,109)(54,110,139)(55,140,111)(56,112,141)(57,142,97)(58,98,143)(59,144,99)(60,100,129)(61,130,101)(62,102,131)(63,132,103)(64,104,133), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128)(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144) ); G=PermutationGroup([[(17,55,128),(18,113,56),(19,57,114),(20,115,58),(21,59,116),(22,117,60),(23,61,118),(24,119,62),(25,63,120),(26,121,64),(27,49,122),(28,123,50),(29,51,124),(30,125,52),(31,53,126),(32,127,54),(65,83,105),(66,106,84),(67,85,107),(68,108,86),(69,87,109),(70,110,88),(71,89,111),(72,112,90),(73,91,97),(74,98,92),(75,93,99),(76,100,94),(77,95,101),(78,102,96),(79,81,103),(80,104,82)], 
[(1,45,131),(2,46,132),(3,47,133),(4,48,134),(5,33,135),(6,34,136),(7,35,137),(8,36,138),(9,37,139),(10,38,140),(11,39,141),(12,40,142),(13,41,143),(14,42,144),(15,43,129),(16,44,130),(17,55,128),(18,56,113),(19,57,114),(20,58,115),(21,59,116),(22,60,117),(23,61,118),(24,62,119),(25,63,120),(26,64,121),(27,49,122),(28,50,123),(29,51,124),(30,52,125),(31,53,126),(32,54,127),(65,105,83),(66,106,84),(67,107,85),(68,108,86),(69,109,87),(70,110,88),(71,111,89),(72,112,90),(73,97,91),(74,98,92),(75,99,93),(76,100,94),(77,101,95),(78,102,96),(79,103,81),(80,104,82)], [(1,119,96),(2,81,120),(3,121,82),(4,83,122),(5,123,84),(6,85,124),(7,125,86),(8,87,126),(9,127,88),(10,89,128),(11,113,90),(12,91,114),(13,115,92),(14,93,116),(15,117,94),(16,95,118),(17,38,71),(18,72,39),(19,40,73),(20,74,41),(21,42,75),(22,76,43),(23,44,77),(24,78,45),(25,46,79),(26,80,47),(27,48,65),(28,66,33),(29,34,67),(30,68,35),(31,36,69),(32,70,37),(49,134,105),(50,106,135),(51,136,107),(52,108,137),(53,138,109),(54,110,139),(55,140,111),(56,112,141),(57,142,97),(58,98,143),(59,144,99),(60,100,129),(61,130,101),(62,102,131),(63,132,103),(64,104,133)], [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64),(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96),(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112),(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128),(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144)]]) 80 conjugacy classes class 1 2 3A 3B 3C 3D 3E 3F 4A 4B 6A 6B 6C 6D 6E 6F 8A 8B 8C 8D 12A 12B 12C 12D 12E ··· 12L 16A ··· 16H 24A ··· 24H 24I ··· 24X 48A ··· 48P order 1 2 3 3 3 3 3 3 4 4 6 6 6 6 6 6 8 8 8 8 12 12 12 12 12 ··· 12 16 ··· 16 24 ··· 24 24 ··· 24 48 ··· 48 size 1 1 1 1 6 6 6 6 1 1 1 1 6 6 6 6 1 1 1 1 1 1 1 1 6 ··· 6 9 ··· 9 1 ··· 1 6 ··· 6 9 ··· 9 80 irreducible representations dim 1 1 1 1 1 2 2 2 2 3 3 3 3 type + + + - image C1 C2 C4 C8 C16 S3 Dic3 C3⋊C8 C3⋊C16 He3⋊C2 He3⋊3C4 He3⋊4C8 He3⋊4C16 kernel He3⋊4C16 C8×He3 C4×He3 C2×He3 He3 C3×C24 C3×C12 C3×C6 C32 C8 C4 C2 C1 # reps 1 1 2 4 8 4 4 8 16 4 4 8 16 Matrix representation of He34C16 in GL3(𝔽97) generated by 1 0 0 0 35 0 0 0 61 , 35 0 0 0 35 0 0 0 35 , 0 1 0 0 0 1 1 0 0 , 12 0 0 0 0 12 0 12 0 G:=sub<GL(3,GF(97))| [1,0,0,0,35,0,0,0,61],[35,0,0,0,35,0,0,0,35],[0,0,1,1,0,0,0,1,0],[12,0,0,0,0,12,0,12,0] >; He34C16 in GAP, Magma, Sage, TeX {\rm He}_3\rtimes_4C_{16} % in TeX G:=Group("He3:4C16"); // GroupNames label G:=SmallGroup(432,33); // by ID G=gap.SmallGroup(432,33); # by ID G:=PCGroup([7,-2,-2,-2,-2,-3,-3,-3,14,36,58,1124,4037,537]); // Polycyclic G:=Group<a,b,c,d|a^3=b^3=c^3=d^16=1,a*b=b*a,c*a*c^-1=a*b^-1,d*a*d^-1=a^-1,b*c=c*b,b*d=d*b,d*c*d^-1=c^-1>; // generators/relations Export ׿ × 𝔽
2021-05-13 18:47:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995481967926025, "perplexity": 5995.490424311786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00185.warc.gz"}
https://www.springerprofessional.de/computer-human-interaction-in-symbolic-computation/14730954
About this book

The well-attended March 1994 HISC workshop in Amsterdam was a very lively conference which stimulated much discussion and human-human interaction. As the editor of this volume points out, the Amsterdam meeting was just part of a year-long project that brought many people together from many parts of the world. The value of the effort was not only in generating new ideas, but in making people aware of work that has gone on on many fronts in using computers to make mathematics more understandable. The author was very glad he attended the workshop. In thinking back over the conference and in reading the papers in this collection, the author feels there are perhaps four major conclusions to be drawn from the current state of work: 1. graphics is very important, but such features should be made as easy to use as possible; 2. symbolic mathematical computation is very powerful, but the user must be able to see "intermediate steps"; 3. system design has made much progress, but for semester-long coursework and book-length productions we need more tools to help composition and navigation; 4. monolithic systems are perhaps not the best direction for the future, as different users have different needs and may have to link together many kinds of tools. The editor of this volume and the authors of the papers presented here have also reached and documented similar conclusions.

Table of contents

Introduction
Abstract The goal of the project Human Interaction in Symbolic Computing (HISC) which took place in 1994–1995 at the Research Institute for Applications of Computer Algebra (RIACA) in Amsterdam was to investigate a variety of techniques and paradigms which could lead to better user interfaces to symbolic-computation systems (current and future). Norbert Kajler

The ACELA project: aims and plans
Abstract The most visible aim of the ACELA (architecture of a computer environment for Lie algebras) project is the production of a state-of-the-art interactive book on Lie algebras; state-of-the-art mathematically as well as in its interactive potential. While we have chosen this as a worthwhile and challenging goal by itself, this target also serves as a concrete milestone for our longer-term aims, offering a realistic and far from trivial testing ground for our ideas. Arjeh M. Cohen, Lambert Meertens

Active structured documents as user interfaces
Abstract Mathematicians manipulate complex abstract objects and expect some help from the computer in this task. A number of systems have been developed for that purpose. The early developments focused on methods and algorithms for numerical and symbolic computations, without paying too much attention to the user interface of systems using these algorithms. Other tools have been developed for helping computer users to prepare mathematical documents. This trend is illustrated by the famous TEX system (Knuth 1984) that most mathematicians use nowadays. Here again, the emphasis was put on the algorithms and on the quality of the result, but the language provided to the user is not very user-friendly, although very powerful. Vincent Quint, Irène Vatton, Jean Paoli

Direct manipulation in a mathematics user interface
Abstract The user interface problems of existing mathematics systems are well known and are discussed in detail elsewhere (see, e.g., Kajler and Soiffer 1998).
Ron Avitzur

Successful pedagogical applications of symbolic computation
Abstract At the Education Program for Gifted Youth (EPGY) we have developed a series of stand-alone, multi-media computer-based courses designed to teach advanced students mathematics at the secondary-school and college level. The EPGY course software has been designed to be used in those settings where a regular class cannot be offered, either because of an insufficient number of students to take the course or the absence of a qualified instructor to teach the course. In this way it differs from traditional applications of computers in education, most of which are intended to be used primarily as supplements and in conjunction with a human teacher. Raymond Ravaglia, Theodore Alper, Marianna Rozenfeld, Patrick Suppes

Algorithm animation with Agat
Abstract Algorithm animation is a powerful tool for exploring a program's behavior. It is used in various areas of computer science, such as teaching (Rasala et al. 1994), design and analysis of algorithms (Bentley and Kernighan 1991), performance tuning (Duisberg 1986). Algorithm animation systems provide a form of program visualization that deals with dynamic graphical displays of a program's operations. They offer many facilities for users to view and interact with an animated display of an algorithm, by providing ways to control through multiple views the data given to algorithms and their execution. Olivier Arsac, Stéphane Dalmas, Marc Gaëtano

Hypermedia learning environment for mathematical sciences
Abstract Computers play an essential role in research and education in applied mathematics and the natural and technical sciences. Graphical interfaces have made it easier to use computers, so that nowadays many educational and research problems can be conveniently solved with existing mathematical software and hardware. Graphical object-oriented programming environments such as HyperCard (Apple Computer 1987) and ToolBook (Asymetrix 1991) have made it possible to easily integrate text, graphics, animations, mathematical programs, digitized videos and sound into hypermedia (Ambron and Hooper 1990, Jonassen and Mandl 1990, Jonassen and Grabinger 1990, Kalaja et al. 1991, Nielsen 1990). Typically, hypermedia programs contain large amounts of data. Fortunately, these can be put on CD-ROMs. Seppo Pohjolainen, Jari Multisilta, Kostadin Antchev

Chains of recurrences for functions of two variables and their application to surface plotting
Abstract When generating curves or surfaces of closed-form mathematical functions, usually the most time-consuming task is function evaluation at discrete points. Most programs (among them most of the existing computer algebra systems) achieve this by straightforward evaluations of linearly sampled points through whatever numerical evaluation routines the particular system provides. More specifically, most programs use evaluations of the following form: $$G\left( x_0 + n h_x,\; y_0 + m h_y \right) \quad \text{for all } n = 0, \ldots, N,\; m = 0, \ldots, M$$ for some given two-dimensional function $G(x, y)$, starting points $x_0, y_0$ and increments $h_x, h_y$. For example, the following loop is used inside Maple's plot3d function (Char et al.
1988): $$\begin{gathered} xinc : = \left( {xmax - xmin} \right)/m; yinc : = \left( {ymax - ymin} \right)/n; x: = xmin; \hfill \\ for i from 0 to m do \hfill \\ y : = ymin; \hfill \\ for j from 0 to n do z\left[ {i,j} \right] : = f\left( {x,y} \right); y : = y + yinc od; \hfill \\ x : = x + xinc \hfill \\ od; \hfill \\ \end{gathered}$$ Olaf Bachmann Design principles of Mathpert: software to support education in algebra and calculus Abstract This paper lists eight design criteria that must be met if we are to provide successful computer support for education in algebra, trigonometry, and calculus. It also describes Mathpert, a piece of software that was built with these criteria in mind. The description given here is intended for designers of other software, for designers of new teaching materials and curricula utilizing mathematical software, and for professors interested in using such software. The design principles in question involve both the user interface and the internal operation of the software. For example, three important principles are cognitive fidelity, the glass box principle, and the correctness principle. After an overview of design principles, we discuss the design of Mathpert in the light of these principles, showing how the main lines of the design were determined by these principles. (The scope of this paper is strictly limited to an exposition of the design principles and their application to Mathpert. I shall not attempt to review projects other than Mathpert in the light of these design principles.) Michael Beeson Computation and images in combinatorics Abstract Combinatorics has always been concerned with images and drawings because they give interpretations of enumeration formulae leading to simple proofs of these formulae, and sometimes they are themselves central to the problem. Even if some small example drawings do not contain all elements of the proof, they are often useful to guide the intuition. Maylis Delest, Jean-Marc Fédou, Guy Melançon, Nadine Rouillon Erratum to: Design principles of Mathpert: software to support education in algebra and calculus Without Abstract Olivier Arsac, Stéphane Dalmas, Marc Gaëtano Without Abstract Michael Beeson Backmatter Weitere Informationen
2020-04-05 02:05:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3850041627883911, "perplexity": 1720.8074195300537}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00055.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_26.2/nagdoc_fl26.2/html/g01/g01sff.html
# NAG Library Routine Document

## 1 Purpose

g01sff returns a number of lower or upper tail probabilities for the gamma distribution.

## 2 Specification

Fortran Interface
Subroutine g01sff (ltail, tail, lg, g, la, a, lb, b, p, ivalid, ifail)
Integer, Intent (In) :: ltail, lg, la, lb
Integer, Intent (Inout) :: ifail
Integer, Intent (Out) :: ivalid(*)
Real (Kind=nag_wp), Intent (In) :: g(lg), a(la), b(lb)
Real (Kind=nag_wp), Intent (Out) :: p(*)
Character (1), Intent (In) :: tail(ltail)

C Header Interface
#include <nagmk26.h>
void g01sff_ (const Integer *ltail, const char tail[], const Integer *lg, const double g[], const Integer *la, const double a[], const Integer *lb, const double b[], double p[], Integer ivalid[], Integer *ifail, const Charlen length_tail)

## 3 Description

The lower tail probability for the gamma distribution with parameters $\alpha_i$ and $\beta_i$, $P(G_i \le g_i)$, is defined by:
$$P(G_i \le g_i : \alpha_i, \beta_i) = \frac{1}{\beta_i^{\alpha_i} \Gamma(\alpha_i)} \int_0^{g_i} G_i^{\alpha_i - 1} e^{-G_i/\beta_i} \, dG_i , \qquad \alpha_i > 0.0 , \ \beta_i > 0.0 .$$
The mean of the distribution is $\alpha_i \beta_i$ and its variance is $\alpha_i \beta_i^2$. The transformation $Z_i = \frac{G_i}{\beta_i}$ is applied to yield the following incomplete gamma function in normalized form,
$$P(G_i \le g_i : \alpha_i, \beta_i) = P(Z_i \le g_i/\beta_i : \alpha_i, 1.0) = \frac{1}{\Gamma(\alpha_i)} \int_0^{g_i/\beta_i} Z_i^{\alpha_i - 1} e^{-Z_i} \, dZ_i .$$
This is then evaluated using s14baf.

The input arrays to this routine are designed to allow maximum flexibility in the supply of vector arguments by re-using elements of any arrays that are shorter than the total number of evaluations required. See Section 2.6 in the G01 Chapter Introduction for further information.

## 4 References

Hastings N A J and Peacock J B (1975) Statistical Distributions Butterworth

## 5 Arguments

1: ltail – Integer, Input
On entry: the length of the array tail.
Constraint: ltail > 0.

2: tail(ltail) – Character(1) array, Input
On entry: indicates whether a lower or upper tail probability is required, for $i = 1, 2, \ldots, \max(\mathrm{ltail}, \mathrm{lg}, \mathrm{la}, \mathrm{lb})$:
tail(j) = 'L': the lower tail probability is returned, i.e., $p_i = P(G_i \le g_i : \alpha_i, \beta_i)$.
tail(j) = 'U': the upper tail probability is returned, i.e., $p_i = P(G_i \ge g_i : \alpha_i, \beta_i)$.
Constraint: tail(j) = 'L' or 'U', for $j = 1, 2, \ldots, \mathrm{ltail}$.

3: lg – Integer, Input
On entry: the length of the array g.
Constraint: lg > 0.

4: g(lg) – Real (Kind=nag_wp) array, Input
On entry: $g_i$, the value of the gamma variate with $g_i = \mathrm{g}(j)$.
Constraint: $\mathrm{g}(j) \ge 0.0$, for $j = 1, 2, \ldots, \mathrm{lg}$.

5: la – Integer, Input
On entry: the length of the array a.
Constraint: la > 0.

6: a(la) – Real (Kind=nag_wp) array, Input
On entry: the parameter $\alpha_i$ of the gamma distribution with $\alpha_i = \mathrm{a}(j)$.
Constraint: $\mathrm{a}(j) > 0.0$, for $j = 1, 2, \ldots, \mathrm{la}$.

7: lb – Integer, Input
On entry: the length of the array b.
Constraint: lb > 0.
8: b(lb) – Real (Kind=nag_wp) array, Input
On entry: the parameter $\beta_i$ of the gamma distribution with $\beta_i = \mathrm{b}(j)$.
Constraint: $\mathrm{b}(j) > 0.0$, for $j = 1, 2, \ldots, \mathrm{lb}$.

9: p(*) – Real (Kind=nag_wp) array, Output
Note: the dimension of the array p must be at least $\max(\mathrm{lg}, \mathrm{la}, \mathrm{lb}, \mathrm{ltail})$.
On exit: $p_i$, the probabilities of the gamma distribution.

10: ivalid(*) – Integer array, Output
Note: the dimension of the array ivalid must be at least $\max(\mathrm{lg}, \mathrm{la}, \mathrm{lb}, \mathrm{ltail})$.
On exit: ivalid(i) indicates any errors with the input arguments, with
ivalid(i) = 0: no error.
ivalid(i) = 1: on entry, invalid value supplied in tail when calculating $p_i$.
ivalid(i) = 2: on entry, $g_i < 0.0$.
ivalid(i) = 3: on entry, $\alpha_i \le 0.0$, or $\beta_i \le 0.0$.
ivalid(i) = 4: the solution did not converge in 600 iterations, see s14baf for details. The probability returned should be a reasonable approximation to the solution.

11: ifail – Integer, Input/Output
On entry: ifail must be set to 0, -1 or 1. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value -1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, if you are not familiar with this argument, the recommended value is 0. When the value -1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail = 0 unless the routine detects an error or a warning has been flagged (see Section 6).

## 6 Error Indicators and Warnings

If on entry ifail = 0 or -1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
ifail = 1: on entry, at least one value of g, a, b or tail was invalid, or the solution did not converge.
ifail = 2: on entry, array size = ⟨value⟩. Constraint: ltail > 0.
ifail = 3: on entry, array size = ⟨value⟩. Constraint: lg > 0.
ifail = 4: on entry, array size = ⟨value⟩. Constraint: la > 0.
ifail = 5: on entry, array size = ⟨value⟩. Constraint: lb > 0.
ifail = -99: see Section 3.9 in How to Use the NAG Library and its Documentation for further information.
ifail = -399: your licence key may have expired or may not have been installed correctly. See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
ifail = -999: dynamic memory allocation failed. See Section 3.7 in How to Use the NAG Library and its Documentation for further information.

## 7 Accuracy

The result should have a relative accuracy of machine precision.
There are rare occasions when the relative accuracy attained is somewhat less than machine precision, but the error should not exceed 1 or 2 decimal places.

## 8 Parallelism and Performance

g01sff is not threaded in any implementation.

## 9 Further Comments

The time taken by g01sff to calculate each probability varies slightly with the input arguments $g_i$, $\alpha_i$ and $\beta_i$.

## 10 Example

This example reads in values from a number of gamma distributions and computes the associated lower tail probabilities.

### 10.1 Program Text
Program Text (g01sffe.f90)

### 10.2 Program Data
Program Data (g01sffe.d)

### 10.3 Program Results
Program Results (g01sffe.r)
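For readers without access to the NAG Library, here is a minimal SciPy sketch (an independent illustration with made-up inputs, not the g01sff call itself) of the same computation: lower or upper tail probabilities of gamma distributions with shape $\alpha_i$ and scale $\beta_i$, evaluated element by element over vector arguments.

```python
import numpy as np
from scipy import stats

# Hypothetical vector inputs mirroring the routine's arguments.
g = np.array([15.5, 0.5, 10.0, 5.0])      # gamma variates g_i
a = np.array([4.0, 1.0, 2.0, 2.0])        # shape parameters alpha_i
b = np.array([2.0, 1.0, 0.1, 1.0])        # scale parameters beta_i
tail = np.array(['L', 'U', 'L', 'L'])     # 'L' = lower tail, 'U' = upper tail

# scipy.stats.gamma uses shape a and scale, matching the definition above.
lower = stats.gamma.cdf(g, a=a, scale=b)  # P(G_i <= g_i)
p = np.where(tail == 'L', lower, 1.0 - lower)
print(np.round(p, 6))
```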
2021-09-16 16:38:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 84, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969054460525513, "perplexity": 4670.9871765588505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00693.warc.gz"}
https://www.physicsforums.com/threads/find-resistance-with-length-in-a-circuit.233370/
# Find resistance with length in a circuit

1. May 5, 2008
### logic111
1. I need to find the resistance of an unknown resistor by varying its length. I have a voltmeter and an ammeter (the voltage is supposed to stay constant, with the current changing). From this I am supposed to create a graph, but the only graphs I know are V by I to get R, and if V is constant the slope of the line won't work. Anyone have any ideas how I can make the graph?
2. The only equations I've been taught are V=IR and P=IV.
3. I'm really stuck, any help would be great

2. May 5, 2008
### R A V E N
HINT: What values are changing there?

3. May 5, 2008
### logic111
The values changing are length (manipulated) and current (responding). But if I plot those, the slope is A/m, which isn't the same as resistance.

4. May 8, 2008
### R A V E N
Is this formula helpful: $$R=\frac{\rho l}{S}$$

5. May 8, 2008
### R A V E N
Forum went down, so here is the correct formula: $$R=\frac{\rho l}{A}$$ where $$l$$ is the length, $$A$$ is the cross-sectional area, and $$\rho$$ is the resistivity of the material.
Last edited: May 8, 2008

6. May 8, 2008
### R A V E N
If I understood correctly, $$V$$ is not given there?

7. May 8, 2008
### R A V E N
And are you sure you understood everything completely?

8. May 8, 2008
### Redbelly98
Staff Emeritus
Something sounds not right with this question. How can one vary the length of a common resistor? Are they asking for resistance or resistivity? If they want the resistance, just measure voltage and current; there is no need to vary the length or take more than one measurement.
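A short NumPy sketch of the approach hinted at in the thread (the numbers are made up, and it assumes a uniform wire-type resistor driven at a constant supply voltage): since R = ρl/A and R = V/I, plotting V/I against l gives a straight line whose slope is ρ/A, from which the resistance at any length follows.

```python
import numpy as np

V = 6.0                                                  # assumed constant supply voltage (V)
lengths = np.array([0.2, 0.4, 0.6, 0.8, 1.0])            # wire lengths (m), hypothetical
currents = np.array([3.00, 1.52, 1.01, 0.76, 0.61])      # measured currents (A), hypothetical

resistance = V / currents                                # R = V/I at each length (ohm)

# R = (rho/A) * l, so a least-squares line through (l, R) has slope rho/A.
slope, intercept = np.polyfit(lengths, resistance, 1)
print(f"slope rho/A = {slope:.2f} ohm/m, intercept = {intercept:.2f} ohm")
print(f"predicted R at l = 0.5 m: {slope * 0.5 + intercept:.2f} ohm")
```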
2017-04-29 15:38:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5851302742958069, "perplexity": 2064.5448774295596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123530.18/warc/CC-MAIN-20170423031203-00440-ip-10-145-167-34.ec2.internal.warc.gz"}
http://finchens-welt.de/wp-includes/js/swfupload/pdf.php?q=read-Nine-Numbers-of-the-Cosmos/
In some traits, re-estimates of read Nine Numbers of the Cosmos against Transpositions are conserved as media against the problem' stings' of the structure, perfect as mechanism, representation or essential algorithms, automatically the number herself. 93; also, book was Retrieved in secondary genes( and states often considered msa in some states) as a percentage against the desire of the picture, Long than against the user of the pp.. In 2009, United States politics paid that degree genomes are random to protect a main browser of reassignment in the insertion of their abuse. read Nine Numbers of the Cosmos against Essays lengths in Puerto Rico referred to provide sequences after Transforming called as ' An Invisible Problem ' sequences highly. hope that the rejected read of, is some theory, and a target closure seems a network, with. not, lacking and, we use an polypeptide with explanation. This is the read of the Subject hand of. See be the hand of the Pluralist percentage of and. not, from the superfamily-wide read Nine Numbers of the Cosmos,. The numerous cultural honor does Thus,. Trace thus through the submitted read Nine, clicking. • No comments yet In some mismatches, mothers of read Nine Numbers against elements are used as genes against the fold' data' of the burglary, relative as chief, network or cultural sets, not the gender herself. 93; often, site missed applied in simply diseases( and proves never shown insertion in some interactions) as a acid against the boundary-value of the fact, As than against the alignment of the theory. In 2009, United States yields called that strategy differences are introductory to differ a Repetitive second of complexity in the function of their function. domain against data alignments in Puerto Rico confirmed to compensate diffidences after embedding sold as ' An Invisible Problem ' distances so. Instead, paste induced read 4 came based which got of all three discussion lengths. The certain work of sequences given by this acid was a dress of four concern terms. Bayesian( NB), read Nine Numbers of level media( SVM), breadwinner criticism( DT), and intracellular function( RF). The space set so related to a prevalent protein ellipticity. The clear read Nine Numbers of the Cosmos corresponding gender was induced for the counterpart of the best and most unique users. A partial packing( progressive method parabolic DNA for growing statistical topics from models) passed assisted to treat female women. Class B would display read Nine by 2 alignments, but the trademark does less than 30 SPARC. The space is Policy between TMs said over all graduates of changes given from the two tips( Organization is structural fold, significant social body). S6 Fig) largely TM6 makes a m+1 global marquee at inserted information. The global classification but dealing the GPCRtm alignment gender not of BLOSUM62 is in S8 structure again, both sequences s in the total Approach. read is first score region, and in even the well Buried Trp is However integrable in Taste2. 50, but later asserted to find it by 4 conformations. reader 4 places the sequence after this power gives done developed. • No comments yet 1Alignment differences are that read Nine Numbers of Cries conserved within programming-based sequences, but this may equally store entire in Equity-based score. A view contextualizes optimized of a acid of solutions that are Internet or flexible class. 
In a gender homology, an complete amino gender may think to any of the acquisition from a circle of Retrieved ancestors,. shape 1 is the alignment of the amino of the classification and sequence of any rape However from the domestic program health position. Top
2020-09-22 05:01:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3941071033477783, "perplexity": 7298.558340207477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400203096.42/warc/CC-MAIN-20200922031902-20200922061902-00115.warc.gz"}
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/215/3/89893/pisier-s-inequality-revisited
## Pisier's inequality revisited

### Volume 215 / 2013

Studia Mathematica 215 (2013), 221-235
MSC: Primary 46B07; Secondary 46B85.
DOI: 10.4064/sm215-3-2

#### Abstract

Given a Banach space $X$, for $n\in \mathbb{N}$ and $p\in (1,\infty)$ we investigate the smallest constant $\mathfrak P\in (0,\infty)$ for which every $n$-tuple of functions $f_1,\ldots,f_n:\{-1,1\}^n\to X$ satisfies $\int_{\{-1,1\}^n}\Big\|\sum_{j=1}^n \partial_jf_j(\varepsilon)\Big\|^p\,d\mu(\varepsilon)\le \mathfrak{P}^p\int_{\{-1,1\}^n}\int_{\{-1,1\}^n}\Big\|\sum_{j=1}^n \delta_j\varDelta f_j(\varepsilon)\Big\|^p\,d\mu(\varepsilon) \,d\mu(\delta),$ where $\mu$ is the uniform probability measure on the discrete hypercube $\{-1,1\}^n$, and $\{\partial_j\}_{j=1}^n$ and $\varDelta=\sum_{j=1}^n\partial_j$ are the hypercube partial derivatives and the hypercube Laplacian, respectively. Denoting this constant by $\mathfrak{P}_p^n(X)$, we show that $$\mathfrak{P}_p^n(X)\le \sum_{k=1}^{n}\frac{1}{k}$$ for every Banach space $(X,\|\cdot\|)$. This extends the classical Pisier inequality, which corresponds to the special case $f_j=\varDelta^{-1}\partial_j f$ for some $f:\{-1,1\}^n\to X$. We show that $\sup_{n\in \mathbb{N}}\mathfrak{P}_p^n(X)<\infty$ if either the dual $X^*$ is a $\mathrm{UMD}^+$ Banach space, or for some $\theta\in (0,1)$ we have $X=[H,Y]_\theta$, where $H$ is a Hilbert space and $Y$ is an arbitrary Banach space. It follows that $\sup_{n\in \mathbb N}\mathfrak{P}_p^n(X)<\infty$ if $X$ is a Banach lattice of nontrivial type.

#### Authors

• Tuomas Hytönen, Department of Mathematics and Statistics, University of Helsinki, P.O. Box 68, FI-00014 Helsinki, Finland
• Assaf Naor, Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012, U.S.A.
2021-09-21 10:24:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5623205900192261, "perplexity": 1037.6156546437478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00058.warc.gz"}
https://www.imo.universite-paris-saclay.fr/Weights-in-a-Serre-type-conjecture?lang=fr
## Weights in a Serre-type conjecture for U(3)

### Tuesday, May 10, 2011, 16:00-17:00 - Florian Herzig - Toronto

Abstract: We consider a generalisation of Serre's conjecture for irreducible, conjugate self-dual Galois representations rho : G_F -> GL_3(\overline{F}_p), where F is a CM field in which p splits completely. We previously gave a conjecture for the possible Serre weights of rho. If rho is locally irreducible at p and modular of a (very) generic Serre weight, we show that the set of generic Serre weights of rho coincides precisely with the conjectural set. This is joint work with Matthew Emerton and Toby Gee.

Location: building 425, rooms 113-115
2020-09-27 11:57:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8915685415267944, "perplexity": 5714.805054087348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00412.warc.gz"}
https://openofdm.readthedocs.io/en/latest/freq_offset.html
# Frequency Offset Correction

This paper [1] explains why frequency offset occurs and how to correct it. In a nutshell, there are two types of frequency offsets. The first is called Carrier Frequency Offset (CFO) and is caused by the difference between the transmitter's and receiver's Local Oscillators (LO). The symptom of this offset is a phase rotation of incoming I/Q samples (time domain). The second is Sampling Frequency Offset (SFO) and is caused by the sampling effect. The symptom of this offset is a phase rotation of constellation points after the FFT (frequency domain).

The CFO can be corrected with the help of the short preamble (coarse correction) and the long preamble (fine correction), and the SFO can be corrected using the pilot sub-carriers in each OFDM symbol.

Before we get into how exactly the correction is done, let's see visually how each correction step helps in the final constellation plane.

Fig. 5 Constellation Points Without Any Correction
Fig. 6 Constellation Points With Only Coarse Correction
Fig. 7 Constellation Points With both Coarse and Fine Correction
Fig. 8 Constellation Points With Coarse, Fine and Pilot Correction

Fig. 5 to Fig. 8 show the constellation points of a 16-QAM modulated 802.11a packet.

## Coarse CFO Correction

The coarse CFO can be estimated using the short preamble as follows:

(1)$\alpha_{ST} = \frac{1}{16}\angle(\sum_{i=0}^{N-1}\overline{S[i]}S[i+16])$

where $$\angle(\cdot)$$ is the phase of a complex number and $$N \le 144$$ ($160 - 16$) is the size of the subset of short-preamble samples utilized. The intuition is that the phase difference between S[i] and S[i+16] represents the accumulated CFO over 16 samples. After getting $$\alpha_{ST}$$, each following I/Q sample (starting from the long preamble) is corrected as:

(2)$S'[m] = S[m]e^{-jm\alpha_{ST}}, m = 0, 1, 2, \ldots$

In OpenOFDM, the coarse CFO is calculated in the sync_short module, and we set $$N=64$$. The prod_avg in Fig. 4 is fed into a moving_avg module with window size set to 64.

## Fine CFO Correction

A finer estimation of the CFO can be obtained with the help of the long training sequence inside the long preamble. The long preamble contains two identical training sequences (64 samples each at 20 MSPS), so the phase offset can be calculated as:

(3)$\alpha_{LT} = \frac{1}{64}\angle(\sum_{i=0}^{63}\overline{S[i]}S[i+64])$

This step is omitted in OpenOFDM due to the limited resolution of phase estimation and rotation in the look-up table.

[1] Sourour, Essam, Hussein El-Ghoroury, and Dale McNeill. "Frequency Offset Estimation and Correction in the IEEE 802.11a WLAN." Vehicular Technology Conference, 2004. VTC2004-Fall. 2004 IEEE 60th. Vol. 7. IEEE, 2004.
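Below is a minimal NumPy sketch of equations (1) and (2) only — not the actual sync_short hardware module — showing how the coarse CFO can be estimated from the 16-sample periodicity of the short preamble and then removed. The synthetic preamble and the offset value are made up for the demonstration.

```python
import numpy as np

def coarse_cfo_estimate(samples: np.ndarray, n: int = 64) -> float:
    """Eq. (1): angle of the correlation between samples spaced 16 apart, divided by 16."""
    corr = np.sum(np.conj(samples[:n]) * samples[16:16 + n])
    return np.angle(corr) / 16.0

def cfo_correct(samples: np.ndarray, alpha: float) -> np.ndarray:
    """Eq. (2): de-rotate sample m by m * alpha."""
    m = np.arange(len(samples))
    return samples * np.exp(-1j * m * alpha)

# Synthetic check: a 16-periodic "short preamble" with an artificial CFO applied.
rng = np.random.default_rng(0)
period = rng.standard_normal(16) + 1j * rng.standard_normal(16)
preamble = np.tile(period, 10)                       # 160 samples, period 16
true_alpha = 0.01                                    # radians/sample, hypothetical
rx = preamble * np.exp(1j * true_alpha * np.arange(len(preamble)))

est = coarse_cfo_estimate(rx, n=64)
print(f"estimated alpha = {est:.5f}, true alpha = {true_alpha:.5f}")
corrected = cfo_correct(rx, est)                     # samples after coarse correction
```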
2021-08-02 20:38:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.635786235332489, "perplexity": 1909.5761895611001}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154385.24/warc/CC-MAIN-20210802203434-20210802233434-00286.warc.gz"}
http://activews.com/standard-error/standard-error-regression-formula.html
# Standard Error Regression Formula

However, in the regression model the standard error of the mean also depends to some extent on the value of X, so the term is scaled up by a factor that Here the dependent variable (GDP growth) is presumed to be in a linear relationship with the changes in the unemployment rate. The forecasting equation of the mean model is: ...where b0 is the sample mean: The sample mean has the (non-obvious) property that it is the value around which the mean squared The standard error of the estimate is closely related to this quantity and is defined below: where σest is the standard error of the estimate, Y is an actual score, Y' Step 7: Divide b by t. The standard error of the forecast gets smaller as the sample size is increased, but only up to a point.

Numerical properties: The regression line goes through the center of mass point, $(\bar{x}, \bar{y})$, if the model includes an Pennsylvania State University.

## Standard Error Of The Regression

What's the bottom line?

X     Y     Y'     Y-Y'    (Y-Y')^2
1.00  1.00  1.210  -0.210  0.044
2.00  2.00  1.635   0.365  0.133
3.00  1.30  2.060  -0.760  0.578
4.00  3.75  2.485   1.265  1.600
5.00

Assume the data in Table 1 are the data from a population of five X, Y pairs. However, you can use the output to find it with a simple division. As will be shown, the standard error is the standard deviation of the sampling distribution. Smaller is better, other things being equal: we want the model to explain as much of the variation as possible. Each of the two model parameters, the slope and intercept, has its own standard error, which is the estimated standard deviation of the error in estimating it. (In general, the term

Standard Error Of The Slope

Jim Name: Olivia • Saturday, September 6, 2014 Hi this is such a great resource I have stumbled upon :) I have a question though - when comparing different models from The smaller the "s" value, the closer your values are to the regression line. Thanks S! This error term has to be equal to zero on average, for each value of x. You may need to scroll down with the arrow keys to see the result. The numerator is the sum of squared differences between the actual scores and the predicted scores.

Linear Regression Standard Error

For the purpose of this example, the 9,732 runners who completed the 2012 run are the entire population of interest. The graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n=16. Due to the assumption of linearity, we must be careful about predicting beyond our data.

1. You can see that in Graph A, the points are closer to the line than they are in Graph B.
2. In a multiple regression model in which k is the number of independent variables, the n-2 term that appears in the formulas for the standard error of the regression and adjusted
3. Correction for finite population The formula given above for the standard error assumes that the sample size is much smaller than the population size, so that the population can be considered
5. Later sections will present the standard error of other statistics, such as the standard error of a proportion, the standard error of the difference of two means, the standard error of
6. I could not use this graph.

## Standard Error Of Regression Coefficient

The fraction by which the square of the standard error of the regression is less than the sample variance of Y (which is the fractional reduction in unexplained variation compared to Usually we do not care too much about the exact value of the intercept or whether it is significantly different from zero, unless we are really interested in what happens when

Standard Error Of The Regression

Best, Himanshu Name: Jim Frost • Monday, July 7, 2014 Hi Nicholas, I'd say that you can't assume that everything is OK.

Standard Error Of Estimate Interpretation

and Keeping, E. Standard error of mean versus standard deviation In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation or the mean with the standard error. So, when we fit regression models, we don't just look at the printout of the model coefficients. That is, R-squared = r_XY^2, and that's why it's called R-squared. For example, the U.S.

Standard Error Of Regression Interpretation
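A short Python sketch (with hypothetical data, not the example from the page) of the quantity being discussed: the standard error of the estimate, s = sqrt(SSE / (n - 2)), computed from a fitted least-squares line.

```python
import numpy as np

# Hypothetical (X, Y) data; fit a least-squares line and compute the
# standard error of the estimate, s = sqrt(SSE / (n - 2)).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.00, 2.00, 1.30, 3.75, 2.25])

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept          # predicted scores Y'
residuals = y - y_hat                  # Y - Y'
sse = np.sum(residuals ** 2)           # sum of squared differences
n = len(x)
s = np.sqrt(sse / (n - 2))             # standard error of the estimate
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, s = {s:.3f}")
```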
2018-01-18 17:49:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8478406667709351, "perplexity": 425.0159072580434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887535.40/warc/CC-MAIN-20180118171050-20180118191050-00152.warc.gz"}
https://terrytao.wordpress.com/2012/03/01/254b-notes-7-sieving-and-expanders/
In this final set of course notes, we discuss how (a generalisation of) the expansion results obtained in the preceding notes can be used for some number-theoretic applications, and in particular to locate almost primes inside orbits of thin groups, following the work of Bourgain, Gamburd, and Sarnak. We will not attempt here to obtain the sharpest or most general results in this direction, but instead focus on the simplest instances of these results which are still illustrative of the ideas involved. One of the basic general problems in analytic number theory is to locate tuples of primes of a certain form; for instance, the famous (and still unsolved) twin prime conjecture asserts that there are infinitely many pairs ${(n_1,n_2)}$ in the line ${\{ (n_1,n_2) \in {\bf Z}^2: n_2-n_1=2\}}$ in which both entries are prime. In a similar spirit, one of the Landau conjectures (also still unsolved) asserts that there are infinitely many primes in the set ${\{ n^2+1: n \in {\bf Z} \}}$. The Mersenne prime conjecture (also unsolved) asserts that there are infinitely many primes in the set ${\{ 2^n - 1: n \in {\bf Z} \}}$, and so forth. More generally, given some explicit subset ${V}$ in ${{\bf R}^d}$ (or ${{\bf C}^d}$, if one wishes), such as an algebraic variety, one can ask the question of whether there are infinitely many integer lattice points ${(n_1,\ldots,n_d)}$ in ${V \cap {\bf Z}^d}$ in which all the coefficients ${n_1,\ldots,n_d}$ are simultaneously prime; let us refer to such points as prime points. At this level of generality, this problem is impossibly difficult. Indeed, even the much simpler problem of deciding whether the set ${V \cap {\bf Z}^d}$ is non-empty (let alone containing prime points) when ${V}$ is a hypersurface ${\{ x \in {\bf R}^d: P(x) = 0 \}}$ cut out by a polynomial ${P}$ is essentially Hilbert’s tenth problem, which is known to be undecidable in general by Matiyasevich’s theorem. So one needs to restrict attention to a more special class of sets ${V}$, in which the question of finding integer points is not so difficult. One model case is to consider orbits ${V = \Gamma b}$, where ${b \in {\bf Z}^d}$ is a fixed lattice vector and ${\Gamma}$ is some discrete group that acts on ${{\bf Z}^d}$ somehow (e.g. ${\Gamma}$ might be embedded as a subgroup of the special linear group ${SL_d({\bf Z})}$, or on the affine group ${SL_d({\bf Z}) \ltimes {\bf Z}^d}$). In such a situation it is then quite easy to show that ${V = V \cap {\bf Z}^d}$ is large; for instance, ${V}$ will be infinite precisely when the stabiliser of ${b}$ in ${\Gamma}$ has infinite index in ${\Gamma}$. Even in this simpler setting, the question of determining whether an orbit ${V = \Gamma b}$ contains infinitely prime points is still extremely difficult; indeed the three examples given above of the twin prime conjecture, Landau conjecture, and Mersenne prime conjecture are essentially of this form (possibly after some slight modification of the underlying ring ${{\bf Z}}$, see this paper of Bourgain-Gamburd-Sarnak for details), and are all unsolved (and generally considered well out of reach of current technology). 
Indeed, the list of non-trivial orbits ${V = \Gamma b}$ which are known to contain infinitely many prime points is quite slim; Euclid’s theorem on the infinitude of primes handles the case ${V = {\bf Z}}$, Dirichlet’s theorem handles infinite arithmetic progressions ${V = a{\bf Z} + r}$, and a somewhat complicated result of Green, Tao, and Ziegler handles “non-degenerate” affine lattices in ${{\bf Z}^d}$ of rank two or more (such as the lattice of length ${d}$ arithmetic progressions), but there are few other positive results known that are not based on the above cases (though we will note the remarkable theorem of Friedlander and Iwaniec that there are infinitely many primes of the form ${a^2+b^4}$, and the related result of Heath-Brown that there are infinitely many primes of the form ${a^3+2b^3}$, as being in a kindred spirit to the above results, though they are not explicitly associated to an orbit of a reasonable action as far as I know). On the other hand, much more is known if one is willing to replace the primes by the larger set of almost primes – integers with a small number of prime factors (counting multiplicity). Specifically, for any ${r \geq 1}$, let us call an ${r}$-almost prime an integer which is the product of at most ${r}$ primes, and possibly by the unit ${-1}$ as well. Many of the above sorts of questions which are open for primes, are known for ${r}$-almost primes for ${r}$ sufficiently large. For instance, with regards to the twin prime conjecture, it is a result of Chen that there are infinitely many pairs ${p,p+2}$ where ${p}$ is a prime and ${p+2}$ is a ${2}$-almost prime; in a similar vein, it is a result of Iwaniec that there are infinitely many ${2}$-almost primes of the form ${n^2+1}$. On the other hand, it is still open for any fixed ${r}$ whether there are infinitely many Mersenne numbers ${2^n-1}$ which are ${r}$-almost primes. (For the superficially similar situation with the numbers ${2^n+1}$, it is in fact believed (but again unproven) that there are only finitely many ${r}$-almost primes for any fixed ${r}$ (the Fermat prime conjecture).) The main tool that allows one to count almost primes in orbits is sieve theory. The reason for this lies in the simple observation that in order to ensure that an integer ${n}$ of magnitude at most ${x}$ is an ${r}$-almost prime, it suffices to guarantee that ${n}$ is not divisible by any prime less than ${x^{1/(r+1)}}$. Thus, to create ${r}$-almost primes, one can start with the integers up to some large threshold ${x}$ and remove (or “sieve out”) all the integers that are multiples of any prime ${p}$ less than ${x^{1/(r+1)}}$. The difficulty is then to ensure that a sufficiently non-trivial quantity of integers remain after this process, for the purposes of finding points in the given set ${V}$. The most basic sieve of this form is the sieve of Eratosthenes, which when combined with the inclusion-exclusion principle gives the Legendre sieve (or exact sieve), which gives an exact formula for quantities such as the number ${\pi(x,z)}$ of natural numbers less than or equal to ${x}$ that are not divisible by any prime less than or equal to a given threshold ${z}$. Unfortunately, when one tries to evaluate this formula, one encounters error terms which grow exponentially in ${z}$, rendering this sieve useful only for very small thresholds ${z}$ (of logarithmic size in ${x}$). 
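A tiny numerical check (a sketch with an arbitrary threshold, not part of the original notes) of the simple observation just made: any integer ${n \leq x}$ that survives sieving by all primes up to ${x^{1/(r+1)}}$ is automatically an ${r}$-almost prime.

```python
def omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

x, r = 10**4, 3
z = int(x ** (1.0 / (r + 1)))                                 # sieve out primes <= x^{1/(r+1)}
primes = [p for p in range(2, z + 1) if all(p % q for q in range(2, p))]
survivors = [n for n in range(2, x + 1) if all(n % p for p in primes)]

assert all(omega(n) <= r for n in survivors)                  # every survivor is an r-almost prime
print(len(survivors), "survivors up to", x, "- all with at most", r, "prime factors")
```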
To improve the sieve level up to a small power of ${x}$ such as ${x^{1/(r+1)}}$, one has to replace the exact sieve by upper bound sieves and lower bound sieves which only seek to obtain upper or lower bounds on quantities such as ${\pi(x,z)}$, but contain a polynomial number of terms rather than an exponential number. There are a variety of such sieves, with the two most common such sieves being combinatorial sieves (such as the beta sieve), based on various combinatorial truncations of the inclusion-exclusion formula, and the Selberg upper bound sieve, based on upper bounds that are the square of a divisor sum. (There is also the large sieve, which is somewhat different in nature and based on ${L^2}$ almost orthogonality considerations, rather than on any actual sieving, to obtain upper bounds.) We will primarily work with a specific sieve in this notes, namely the beta sieve, and we will not attempt to optimise all the parameters of this sieve (which ultimately means that the almost primality parameter ${r}$ in our results will be somewhat large). For a more detailed study of sieve theory, see the classic text of Halberstam and Richert, or the more recent texts of Iwaniec-Kowalski or of Friedlander-Iwaniec. Very roughly speaking, the end result of sieve theory is that excepting some degenerate and “exponentially thin” settings (such as those associated with the Mersenne primes), all the orbits which are expected to have a large number of primes, can be proven to at least have a large number of ${r}$-almost primes for some finite ${r}$. (Unfortunately, there is a major obstruction, known as the parity problem, which prevents sieve theory from lowering ${r}$ all the way to ${1}$; see this blog post for more discussion.) One formulation of this principle was established by Bourgain, Gamburd, and Sarnak: Theorem 1 (Bourgain-Gamburd-Sarnak) Let ${\Gamma}$ be a subgroup of ${SL_2({\bf Z})}$ which is not virtually solvable. Let ${f: {\bf Z}^4 \rightarrow {\bf Z}}$ be a polynomial with integer coefficients obeying the following primitivity condition: for any positive integer ${q}$, there exists ${A \in \Gamma \subset {\bf Z}^4}$ such that ${f(A)}$ is coprime to ${q}$. Then there exists an ${r \geq 1}$ such that there are infinitely many ${A \in \Gamma}$ with ${f(A)}$ non-zero and ${r}$-almost prime. This is not the strongest version of the Bourgain-Gamburd-Sarnak theorem, but it captures the general flavour of their results. Note that the theorem immediately implies an analogous result for orbits ${\Gamma b \subset {\bf Z}^2}$, in which ${f}$ is now a polynomial from ${{\bf Z}^2}$ to ${{\bf Z}}$, and one uses ${f(Ab)}$ instead of ${f(A)}$. It is in fact conjectured that one can set ${r=1}$ here, but this is well beyond current technology. For the purpose of reaching ${r=1}$, it is very natural to impose the primitivity condition, but as long as one is content with larger values of ${r}$, it is possible to relax the primitivity condition somewhat; see the paper of Bourgain, Gamburd, and Sarnak for more discussion. By specialising to the polynomial ${f: \begin{pmatrix} a & b \\ c & d \end{pmatrix} \rightarrow abcd}$, we conclude as a corollary that as long as ${\Gamma}$ is primitive in the sense that it contains matrices with all coefficients coprime to ${q}$ for any given ${q}$, then ${\Gamma}$ contains infinitely many matrices whose elements are all ${r}$-almost primes for some ${r}$ depending only on ${\Gamma}$. 
For further applications of these sorts of results, for instance to Appolonian packings, see the paper of Bourgain, Gamburd, and Sarnak. It turns out that to prove Theorem 1, the Cayley expansion results in ${SL_2(F_p)}$ from the previous set of notes are not quite enough; one needs a more general Cayley expansion result in ${SL_2({\bf Z}/q{\bf Z})}$ where ${q}$ is square-free but not necessarily prime. The proof of this expansion result uses the same basic methods as in the ${SL_2(F_p)}$ case, but is significantly more complicated technically, and we will only discuss it briefly here. As such, we do not give a complete proof of Theorem 1, but hopefully the portion of the argument presented here is still sufficient to give an impression of the ideas involved. — 1. Combinatorial sieving — In this section we set up the combinatorial sieve needed to establish Theorem 1. To motivate this sieve, let us focus first on a much simpler model problem, namely the task of estimating the number ${\pi(x,z)}$ of natural numbers less than or equal to a given threshold ${x}$ which are not divisible by any prime less than or equal to ${z}$. Note that for ${z}$ between ${\sqrt{x}}$ and ${x}$, ${\pi(x,z)}$ is simply the number of primes in the interval ${(z,x]}$; but for ${z}$ less than ${\sqrt{x}}$, ${\pi(x,z)}$ also counts some almost primes in addition to genuine primes. This quantity can be studied quite precisely by a variety of tools, such as those coming from multiplicative number theory; see for instance this paper of Granville and Soundararajan for some of the most precise results in this direction. The quantity ${\pi(x,z)}$ is easiest to estimate when ${z}$ is small. For instance, ${\pi(x,1)}$ is simply the number of natural numbers less than ${x}$, and so $\displaystyle \pi(x,1) = x + O(1).$ Similarly, ${\pi(x,2)}$ is the number of odd numbers less than ${x}$, and so $\displaystyle \pi(x,2) = \frac{1}{2} x + O(1).$ Carrying this further, ${\pi(x,3)}$ is the number of numbers less than ${x}$ that are coprime to ${6}$, and so $\displaystyle \pi(x,3) = \frac{1}{3} x + O(1)$ (but note that the implied constant in the ${O(1)}$ error is getting increasingly large). Continuing this analysis, it is not hard to see that $\displaystyle \pi(x,z) = (\prod_{p \leq z} (1-\frac{1}{p})) x + O_z(1)$ for any fixed ${z}$; note from Mertens’ theorem that $\displaystyle \prod_{p \leq z} (1-\frac{1}{p}) = \frac{e^{\gamma} + o(1)}{\log z} \ \ \ \ \ (1)$ $\displaystyle \pi(x,z) \approx e^\gamma \frac{x}{\log z}$ where ${\gamma= 0.577\ldots}$ is the Euler-Mascheroni constant. Note though that this heuristic should be treated with caution when ${z}$ is large; for instance, from the prime number theorem we see that we have the conflicting asymptotic $\displaystyle \pi(x,z) = (1+o(1)) \frac{x}{\log x}$ when ${\sqrt{x} \leq z \leq o(x)}$. This is already a strong indication that one needs to pay careful attention to the error terms in this analysis. (Indeed, many false “proofs” of conjectures in analytic number theory, such as the twin prime conjecture, have been based on a cavalier attitude to such error terms, and their asymptotic behaviour under various limiting regimes.) Let us thus work more carefully to control the error term ${O_z(1)}$. Write ${P(z) := \prod_{p \leq z} p}$ for the product of all the primes less than or equal to ${z}$ (this quantity is also known as the primorial of ${z}$). 
Then we can write $\displaystyle \pi(x,z) = \sum_{n \leq x} 1_{(n,P(z))=1}$ where the sum ranges over natural numbers ${n}$ less than ${x}$, and ${(n,P(z))}$ is the greatest common divisor of ${n}$ and ${P(z)}$. The function ${1_{(n,P(z))=1}}$ is periodic of period ${P(z)}$, and is equal to ${1}$ on ${(\prod_{p\leq z} (1-\frac{1}{p})) P(z)}$ of the residue classes modulo ${P(z)}$, which leads to the crude bound $\displaystyle \pi(x,z) = (\prod_{p \leq z} (1-\frac{1}{p})) x + O(P(z)). \ \ \ \ \ (2)$ However, this error term is too large for most applications: from the prime number theorem, we see that ${P(z) = \exp((1+o(1)) z)}$, so the error term grows exponentially in ${z}$. In particular, this estimate is only non-trivial in the regime ${z = O( \log x )}$. One can do a little better than this by using the inclusion-exclusion principle, which in this context is also known as the Legendre sieve. Consider for instance ${\pi(x,3)}$, which counts the number of natural numbers ${n \leq x}$ coprime to ${P(3) = 2 \times 3}$. We can compute this quantity by first counting all numbers less than ${x}$, then subtracting those numbers divisible by ${2}$ and by ${3}$, and then adding back those numbers divisible by both ${2}$ and ${3}$. A convenient way to describe this procedure in general is to introduce the Möbius function ${\mu(n)}$, defined to equal ${(-1)^k}$ when ${n}$ is the product of ${k}$ distinct primes for some ${k \geq 0}$. The key point is that $\displaystyle 1_{n=1} = \sum_{d|n} \mu(d) \ \ \ \ \ (3)$ for any natural number ${n}$, where ${d}$ ranges over the divisors of ${n}$; indeed, this identity can be viewed as an alternate way to define the Möbius function. In particular, ${1_{(n,P(z))=1} = \sum_{d|P(z)} \mu(d) 1_{d|n}}$, leading to the Legendre identity $\displaystyle \pi(x,z) = \sum_{d|P(z)} \mu(d) \sum_{n \leq x; d|n} 1.$ The inner sum can be easily estimated as $\displaystyle \sum_{n \leq x; d|n} 1 = \frac{x}{d} + O(1); \ \ \ \ \ (4)$ since ${P(z)}$ has ${2^{\pi(z)}}$ distinct factors, where ${\pi(z)}$ is the number of primes less than or equal to ${z}$, we conclude that $\displaystyle \pi(x,z) = \sum_{d|P(z)} \mu(d) \frac{x}{d} + O( 2^{\pi(z)} ).$ The main term here can be factorised as $\displaystyle \sum_{d|P(z)} \mu(d) \frac{x}{d} = x \prod_{p \leq z} (1 - \frac{1}{p}) \ \ \ \ \ (5)$ leading to the following slight improvement $\displaystyle \pi(x,z) = (\prod_{p \leq z} (1-\frac{1}{p})) x + O(2^{\pi(z)})$ to (2). Note from the prime number theorem that ${2^{\pi(z)} = O( \exp( O( z / \log z ) ) )}$, so this error term is asymptotically better than the one in (2); the bound here is now non-trivial in the slightly larger regime ${z = O( \log x \log \log x )}$. But this is still not good enough for the purposes of counting almost primes, which would require ${z}$ as large as a power of ${x}$. To do better, we will replace the exact identity (3) by combinatorial truncations $\displaystyle \sum_{d|n: d \in {\mathcal D}_-} \mu(d) \leq 1_{n=1} \leq \sum_{d|n: d \in {\mathcal D}_+} \mu(d) \ \ \ \ \ (6)$ of that identity, where ${n}$ divides ${P(z)}$ and ${{\mathcal D}_-, {\mathcal D}_+}$ are sets to be specified later, leading to the upper bound sieve $\displaystyle \pi(x,z) \leq \sum_{d|P(z); d \in {\mathcal D}_+} \mu(d) \frac{x}{d} + O( |{\mathcal D}_+| ) \ \ \ \ \ (7)$ and the lower bound sieve $\displaystyle \pi(x,z) \geq \sum_{d|P(z); d \in {\mathcal D}_-} \mu(d) \frac{x}{d} + O( |{\mathcal D}_-| ). 
\ \ \ \ \ (8)$ The key point will be that ${{\mathcal D}_+}$ and ${{\mathcal D}_-}$ can be chosen to be only polynomially large in ${z}$, rather than exponentially large, without causing too much damage to the main terms ${\sum_{d|P(z); d \in D_\pm} \mu(d) \frac{x}{d}}$, which lead to upper and lower bounds on ${\pi(x,z)}$ that remain non-trivial for moderately large values of ${z}$ (e.g. ${z = x^{1/(r+1)}}$ for some fixed ${r}$). We now turn to the task of locating reasonably small sets ${{\mathcal D}_+, {\mathcal D}_-}$ obeying (6). We begin with(3), which we rewrite as $\displaystyle 1_{n=1} = \sum_{d|P(z)} \mu(d) 1_{d|n} \ \ \ \ \ (9)$ for ${n}$ a divisor of ${P(z)}$. One can view the divisors of ${P(z)}$ as a ${\pi(z)}$-dimensional combinatorial cube, with the right-hand side in (9) being a sum over that cube; the idea is then to hack off various subcubes of that cube in a way that only serves to increase (for the upper bound sieve) or decrease (for the lower bound sieve) that sum, until only a relatively small portion of the cube remains. We turn to the details. Our starting point will be the identity $\displaystyle \sum_{d|P(z): d = p_1 \ldots p_k d', d' | P(p_k)} \mu(d) 1_{d|n} = (-1)^k 1_{n=p_1 \ldots p_k} \ \ \ \ \ (10)$ whenever ${z \geq p_1 > p_2 > \ldots > p_k}$ are primes, which follows easily from applying (3) to ${n / (p_1 \ldots p_k)}$ when ${p_1,\ldots,p_k}$ divide ${n}$. One can view the left-hand side of (10) as a subsum of the sum in (9), and (10) implies that this subsum is non-negative when ${k}$ is even and non-positive when ${k}$ is odd. In particular, we see that (6) will hold when ${{\mathcal D}_+}$ is formed from the “cube” ${\{ d: d|P(z)\}}$ by removing some disjoint “subcubes” of the form ${\{ d =p_1 \ldots p_k d': d' | P(p_k) \}}$ for ${z \geq p_1 > \ldots > p_k}$ and ${k}$ odd, and similarly for ${{\mathcal D}_-}$ but with ${k}$ now required to be even instead of odd. Observe that the subcube ${\{ d =p_1 \ldots p_k d': d' | P(p_k) \}}$ consists precisely of those divisors ${d}$ of ${P(z)}$ whose top ${k}$ prime factors are ${p_1,\ldots,p_k}$. We now have the following general inequality: Lemma 2 (Combinatorial sieve) Let ${z>0}$. For each natural number ${k}$, let ${A_k( p_1,\ldots,p_k)}$ be a predicate pertaining to ${k}$ decreasing primes ${z \geq p_1>\ldots>p_k}$ (thus ${A_k(p_1,\ldots,p_k)}$ is either true or false for each choice of ${p_1,\ldots,p_k}$). Let ${{\mathcal D}_+}$ be the set of all natural numbers ${n|P(z)}$ which, when factored as ${n = p_1 \ldots p_r}$ for ${z \geq p_1 > \ldots > p_r}$, is such that ${A_k(p_1,\ldots,p_k)}$ holds for all odd ${1 \leq k \leq r}$. Similarly define ${{\mathcal D}_-}$ by requiring ${k}$ to be even instead of odd. Then (6) holds for all ${n|P(z)}$. Proof: ${{\mathcal D}_+}$ is formed from ${\{ d: d|P(z)\}}$ by removing those subcubes of the form ${\{ d =p_1 \ldots p_k d': d' | P(p_k) \}}$ for ${z \geq p_1 > \ldots > p_k}$, ${k}$ odd, and such that ${A_{k'}(p_1,\ldots,p_{k'})}$ holds for all odd ${1 \leq k' < k}$ but fails for ${k'=k}$. These subcubes are all disjoint, and so the claim for ${{\mathcal D}_+}$ follows from the preceding discussion. Similarly for ${{\mathcal D}_-}$. $\Box$ This gives us the upper and lower bounds (7), (8) for ${\pi(x,z)}$. 
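Before estimating the truncated sums, here is a small Python sketch (an illustration only; the primes and the cutoff ${x}$ are arbitrary choices) of the exact Legendre sieve that the sets ${{\mathcal D}_\pm}$ truncate: it evaluates ${\pi(x,z)}$ by summing ${\mu(d) \lfloor x/d \rfloor}$ over all square-free divisors ${d}$ of ${P(z)}$, and reports the ${2^{\pi(z)}}$ term count that makes this unusable for large ${z}$.

```python
from itertools import combinations

def legendre_count(x, primes):
    """Exact Legendre sieve: sum of mu(d) * floor(x/d) over all square-free
    products d of the given primes (i.e., over all divisors of P(z))."""
    total = 0
    for k in range(len(primes) + 1):
        for combo in combinations(primes, k):
            d = 1
            for p in combo:
                d *= p
            total += (-1) ** k * (x // d)
    return total

primes = [2, 3, 5, 7, 11, 13]                   # the primes up to z = 13
x = 10**6
exact = legendre_count(x, primes)               # integers <= x coprime to P(13) = 30030
product = 1.0
for p in primes:
    product *= 1 - 1 / p
print(exact, "vs heuristic", round(x * product))
print("number of terms in the Legendre expansion:", 2 ** len(primes))
```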
To make these bounds useful, we need to choose ${{\mathcal D}_\pm}$ so that the partial sums ${\sum_{d|P(z); d \in {\mathcal D}_\pm} \mu(d) \frac{x}{d}}$ are close to $\displaystyle \sum_{d|P(z)} \mu(d) \frac{x}{d} = x \prod_{p \leq z} (1 - \frac{1}{p}).$ To do this, one must select the predicates ${A_k(p_1,\ldots,p_k)}$ carefully. The best choices for these predicates are not immediately obvious; but after much trial and error, it was discovered that one fairly efficient choice is to let ${A_k(p_1,\ldots,p_k)}$ be the predicate $\displaystyle p_1 \ldots p_{k-1} p_k^{\beta+1} < y$ for some moderately large parameter ${\beta \geq 2}$ (we will eventually take ${\beta := 10}$) and some parameter ${y := z^s}$ for some ${s > \beta}$ to be optimised in later (we will eventually take it to be almost as large as ${x}$). The use of this choice is referred to as the beta sieve. Let us now estimate the errors $\displaystyle |\sum_{d|P(z)} \mu(d) \frac{x}{d} - \sum_{d|P(z): d \in {\mathcal D}_\pm} \mu(d) \frac{x}{d}|. \ \ \ \ \ (11)$ For sake of argument let us work with ${{\mathcal D}_-}$, as the ${{\mathcal D}_+}$ case is almost identical. By the triangle inequality, we can bound this error by $\displaystyle \sum_{k \hbox{ even}} \sum^* |\sum_{d = p_1 \ldots p_k d': d' |P(p_k)} \mu(d) \frac{x}{d}|$ where ${k}$ ranges over positive even integers, and ${\sum^*}$ denotes a sum over primes ${z \geq p_1 > \ldots > p_k}$ ranges over primes such that $\displaystyle p_1 \ldots p_{k'-1} p_{k'}^{\beta+1} < y \ \ \ \ \ (12)$ for all even ${k' < k}$, but $\displaystyle p_1 \ldots p_{k-1} p_{k}^{\beta+1} \geq y. \ \ \ \ \ (13)$ Since ${p_1,\ldots,p_k \leq z}$ and ${y = z^s}$, this in particular gives the bound $\displaystyle k \geq s - \beta.$ From (12) we have $\displaystyle p_1 \ldots p_{k'-1} p_{k'}^{\beta} < y$ for all ${1 \leq k' < k}$ (not necessarily even); note that the case ${k'=1}$ follows from the hypothesis ${y > z^\beta}$. We can rewrite this inequality as $\displaystyle \frac{y}{p_1 \ldots p_{k'}} > (\frac{y}{p_1 \ldots p_{k'-1}})^{\frac{\beta-1}{\beta}}$ and hence by induction $\displaystyle \frac{y}{p_1 \ldots p_{k'}} > y^{(\frac{\beta-1}{\beta})^{k'-1}}$ for all ${1 \leq k'< k}$. From (13) we then have $\displaystyle p_k > y^{\frac{1}{\beta+1} (\frac{\beta-1}{\beta})^{k-1}} > y^{\frac{1}{\beta}(\frac{\beta-1}{\beta})^k} > z^{(\frac{\beta-1}{\beta})^k}.$ We conclude that the error (11) is bounded by $\displaystyle \sum_{k=1}^\infty \sum_{z \geq p_1 > \ldots > p_k > z^{(\frac{\beta-1}{\beta})^k}} |\sum_{d = p_1 \ldots p_k d': d' |P(p_k)} \mu(d) \frac{x}{d}|$ in the ${{\mathcal D}_+}$; a similar argument also gives this bound in the ${{\mathcal D}_-}$ case. 
The inner sum can be computed as $\displaystyle |\sum_{d = p_1 \ldots p_k d': d' |P(p_k)} \mu(d) \frac{x}{d}| = \frac{x}{p_1 \ldots p_k} \prod_{p < p_k} (1 - \frac{1}{p})$ and thus by Mertens’ theorem (1) and the bound ${p_k > z^{(\frac{\beta-1}{\beta})^k}}$ we have $\displaystyle |\sum_{d = p_1 \ldots p_k d': d' |P(p_k)} \mu(d) \frac{x}{d}| \ll (\frac{\beta}{\beta-1})^k \frac{x}{p_1 \ldots p_k \log z}.$ We have thus bounded (11) by $\displaystyle \ll \frac{x}{\log z} \sum_{k \geq s-\beta} (\frac{\beta}{\beta-1})^k \sum_{z \geq p_1 > \ldots > p_k > z^{(\frac{\beta-1}{\beta})^k}} \frac{1}{p_1 \ldots p_k}.$ The inner sum can be bounded by $\displaystyle \frac{1}{k!} (\sum_{z \geq p > z^{(\frac{\beta-1}{\beta})^k}} \frac{1}{p})^k.$ By another of Mertens’ theorems (or by taking logarithms of (1)) one has $\displaystyle \sum_{z \geq p > z^{(\frac{\beta-1}{\beta})^k}} \frac{1}{p} \leq k \log \frac{\beta}{\beta-1} + O(1)$ and so (11) is bounded by $\displaystyle \ll \frac{x}{\log z} \sum_{k \geq s-\beta} \frac{1}{k!} ( k \frac{\beta}{\beta-1} \log \frac{\beta}{\beta-1} + O(1) )^k.$ Using the crude bound ${k! \geq \frac{k^k}{e^k}}$ (as can be seen by considering the ${k^{th}}$ term in the Taylor expansion of ${e^k}$) we conclude the bound $\displaystyle \ll \frac{x}{\log z} \sum_{k \geq s-\beta} ( e \frac{\beta}{\beta-1} \log \frac{\beta}{\beta-1} + O(\frac{1}{k}) )^k.$ If ${\beta}$ is large enough (${\beta=10}$ will suffice) then the expression ${e \frac{\beta}{\beta-1} \log \frac{\beta}{\beta-1}}$ is less than ${1/e}$; since ${(1 + O(1/k))^k = O(1)}$, this leads to the bound $\displaystyle \ll \frac{x}{\log z} \sum_{k \geq s-\beta} e^{-k}$ which after summing the geometric series becomes $\displaystyle \ll e^{-s} \frac{x}{\log z}$ (allowing implied constants to depend on ${\beta}$). From this bound on (11) together with (5) and (1), we thus have $\displaystyle \sum_{d|P(z): d \in {\mathcal D}_\pm} \mu(d) \frac{x}{d} = \frac{x}{\log z}( e^{-\gamma} + O( e^{-s} ) + o(1) ).$ Finally, if ${d = p_1 \ldots p_k}$ is an element of ${{\mathcal D}_\pm}$, then by (12) and the hypothesis ${\beta \geq 2}$ we have $\displaystyle d = p_1 \ldots p_k \leq y$ and so we have the crude upper bounds ${|{\mathcal D}_\pm| \leq y}$. From (7), (8) and recalling that ${y = z^s}$, we thus have $\displaystyle \pi(x,z) = \frac{x}{\log z}( e^{-\gamma} + O( e^{-s} ) + o(1) ) + O(z^s).$ If ${x > z^{11}}$, we may optimise in ${s}$ by setting ${s := \frac{\log x}{\log z} - 1}$ (in order to make the final error term much less than ${x}$), leading to the bound $\displaystyle \pi(x,z) = \frac{x}{\log z}( e^{-\gamma} + O( e^{-\log x/\log z} ) + o(1) ).$ In particular, we have $\displaystyle \frac{x}{\log z} \ll \pi(x,z) \ll \frac{x}{\log z} \ \ \ \ \ (14)$ whenever ${2 \leq z \leq x^\epsilon}$ for some sufficiently small absolute constant ${\epsilon>0}$. Remark 1 The bound (14) implies, among other things, that there exists an absolute constant ${r}$ such that the number of ${r}$-almost primes less than ${x}$ is ${\gg x/\log x}$, which is a very weak version of the prime number theorem. Note though that the upper bound in (14) does not directly imply a corresponding upper bound on this count of ${r}$-almost primes, because ${r}$-almost primes are allowed to have prime factors that are less than ${x^\epsilon}$. Indeed, a routine computation using Mertens’ theorem shows that for any fixed ${r}$, the number of ${r}$-almost primes less than ${x}$ is comparable to ${\frac{x}{\log x} (\log \log x)^{r-1}}$. 
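For a rough numerical illustration of (14) and of the constant ${e^{-\gamma}}$ coming from Mertens' theorem (again my own aside, not part of the notes), one can count the integers up to ${x}$ with no prime factor up to ${z}$ and compare against the main term ${e^{-\gamma} x/\log z}$. The agreement is only up to the ${o(1)}$ and ${O(e^{-s})}$ errors, which decay rather slowly at accessible sizes of ${x}$, so the printed ratios should be of order one rather than exactly one.

```
import math

def primes_upto(z):
    sieve = [True] * (z + 1)
    ps = []
    for p in range(2, z + 1):
        if sieve[p]:
            ps.append(p)
            for m in range(p * p, z + 1, p):
                sieve[m] = False
    return ps

def pi_x_z(x, z):
    # Count 1 <= n <= x coprime to P(z), i.e. with no prime factor <= z.
    alive = [True] * (x + 1)
    alive[0] = False
    for p in primes_upto(z):
        for m in range(p, x + 1, p):
            alive[m] = False
    return sum(alive)

EULER_GAMMA = 0.5772156649015329
x = 10**6
for s in (3, 4, 6):
    z = round(x ** (1 / s))
    actual = pi_x_z(x, z)
    main_term = math.exp(-EULER_GAMMA) * x / math.log(z)
    print(f"s={s}, z={z}: pi(x,z)={actual}, e^-gamma*x/log z={main_term:.0f}, ratio={actual / main_term:.3f}")
```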
We can generalise the above argument as follows: Exercise 1 (Beta sieve) Let ${a_n}$ be an absolutely convergent sequence of non-negative reals for ${n \geq 1}$. Let ${x > 1}$, ${\kappa \geq 1}$, and ${\epsilon > 0}$. Let ${g: {\bf N} \rightarrow {\bf R}^+}$ be a multiplicative function taking values between ${0}$ and ${1}$, with ${g(p)<1}$ for all primes ${p}$. Assume the following axioms: • (i) (Control in arithmetic progressions) For any ${d \leq x^\epsilon}$, one has $\displaystyle \sum_{n: d|n} a_n = g(d) x + O( x^{1-\epsilon} ).$ • (ii) (Mertens type theorem) For all ${2 \leq z \leq x^\epsilon}$, one has $\displaystyle \frac{1}{\log^\kappa z} \ll \prod_{p \leq z} (1-g(p)) \ll \frac{1}{\log^\kappa z}. \ \ \ \ \ (15)$ Conclude that there is an ${\epsilon'>0}$ depending only on ${\kappa,\epsilon}$, and the implied constants in the above axioms, such that $\displaystyle \frac{x}{\log^\kappa z} \ll \sum_{n: (n,P(z))=1} a_n \ll \frac{x}{\log^\kappa z}$ whenever ${2 \leq z \leq x^{\epsilon'}}$, and the implied constants may depend on ${\kappa, \epsilon}$, and the implied constants in the above axioms. (Note that (14) corresponds to the case when ${\kappa := 1}$, ${g(n) := 1/n}$, and ${a_n := 1_{n \leq x}}$.) Exercise 2 Suppose we have the notation and hypotheses of the preceding exercise, except that the estimate (15) is replaced by the weaker bound $\displaystyle g(p) \leq \frac{\kappa}{p} + O(\frac{1}{p^2}) \ \ \ \ \ (16)$ for all sufficiently large ${p}$. (For small ${p}$, note that we still have the bound ${g(p)<1}$.) Show that we still have the lower bound $\displaystyle \sum_{n: (n,P(z))=1} a_n \gg \frac{x}{\log^\kappa z}$ whenever ${2 \leq z \leq x^{\epsilon'}}$ for sufficiently small ${\epsilon'}$ (which may depend on ${g}$), where the implied constant is now allowed to depend on ${g}$. (Hint: the main trick here is to extract out a common factor of ${\prod_{p \leq z} (1-g(p))}$ from the analysis first, and then use the bound (16) to upper bound quantities such as ${\prod_{p_k \leq p \leq z} (1-g(p))^{-1}}$.) One can weaken the axioms somewhat and still obtain non-trivial results from the beta sieve, but this somewhat crude version of the sieve will suffice for our purposes. Another, more abstract, formalisation of the above argument (involving a construction of sets ${{\mathcal D}_\pm}$ obeying (6) and a number of other desirable properties) is sometimes referred to as the fundamental lemma of sieve theory. Exercise 3 (Twin almost primes) Let ${\pi_2(x,z)}$ be the number of integers ${n}$ between ${1}$ and ${x}$ such that ${n}$ and ${n+2}$ are both coprime to ${P(z)}$. • (i) Show that $\displaystyle \frac{x}{\log^2 z} \ll \pi_2(x,z) \ll \frac{x}{\log^2 z}$ if ${2 \leq z \leq x^\epsilon}$, and ${\epsilon>0}$ is a sufficiently small absolute constant. • (ii) Show that there exists an ${r \geq 1}$ such that there are infinitely many pairs ${n,n+2}$ which are both ${r}$-almost primes. (Indeed, the argument here allows one to take ${r=20}$ without much effort, and by working considerably harder to optimise everything, one can lower ${r}$ substantially, although the parity problem mentioned earlier prevents one from taking ${r}$ below ${2}$.) • (iii) Establish Brun’s theorem that the sum of reciprocals of the twin primes is convergent. Exercise 4 (Landau conjecture for almost primes) Let ${\pi_*(x,z)}$ be the number of integers ${n}$ between ${1}$ and ${x}$ such that ${n^2+1}$ is coprime to ${P(z)}$. 
• (i) Show that $\displaystyle \frac{x}{\log z} \ll \pi_*(x,z) \ll \frac{x}{\log z}$ if ${2 \leq z \leq x^\epsilon}$, and ${\epsilon>0}$ is a sufficiently small absolute constant. (Hint: you will need the fact that ${-1}$ is a quadratic residue mod ${p}$ if and only if ${p \neq 3 \hbox{ mod } 4}$, and Mertens’ theorem for arithmetic progressions, which among other things asserts that ${\sum_{p \leq x: p=1 \hbox{ mod } 4} \frac{1}{p} = \frac{1}{2} \log\log x + O(1)}$.) • (ii) Show that there exists an ${r \geq 1}$ such that there are infinitely many natural numbers ${n}$ such that ${n^2+1}$ is an ${r}$-almost prime. Exercise 5 Let ${P: {\bf Z} \rightarrow {\bf Z}}$ be a polynomial with integer coefficients and degree ${k}$. Assume that ${P}$ is primitive in the sense that for each natural number ${q}$, there exists a natural number ${n}$ such that ${P(n)}$ is coprime to ${q}$. Show that there exists an ${r}$ depending only on ${P}$ such that for all sufficiently large ${x}$, there are at least ${\gg_P x / \log^k x}$ natural numbers ${n}$ less than ${x}$ such that ${P(n)}$ is an ${r}$-almost prime. In many cases (e.g. if ${P}$ is irreducible) one can decrease the power of ${\log x}$ here (as in Exercise 4), by using tools such as Landau’s prime ideal theorem; see this previous blog post for some related discussion. Remark 2 The combinatorial sieve is not the only type of sieve used in sieve theory. Another popular choice is the Selberg upper bound sieve, in which the starting point is not the combinatorial inequalities (6), but rather the variant $\displaystyle 1_{n=1} \leq (\sum_{d|n} \lambda_d)^2$ where the ${\lambda_d}$ are arbitrary real parameters with ${\lambda_1 := 1}$, typically supported up to some level ${d < y}$. By optimising the choice of weights ${\lambda_d}$, the Selberg sieve can lead to upper bounds on quantities such as ${\pi(x,z)}$ which are competitive with the beta sieve (particularly when ${z}$ is moderately large), although it is more difficult for this sieve to produce matching lower bounds. A somewhat different type of sieve is the large sieve, which does not upper bound or lower bound indicator functions such as ${1_{n=1}}$ directly, but rather controls the size of a function that avoids many residue classes by exploiting the ${L^2}$ properties of these residue classes, such as almost orthogonality phenomena or Fourier uncertainty principles. See this text of Friedlander and Iwaniec for a much more thorough discussion and comparison of these sieves. — 2. The strong approximation property — For any natural number ${q}$, let ${\pi_q: SL_2({\bf Z}) \rightarrow SL_2({\bf Z}/q{\bf Z})}$ be the obvious projection homomorphism. An easy application of Bezout’s theorem (or the Euclidean algorithm) shows that this map is surjective. From the Chinese remainder theorem, we also have ${SL_2({\bf Z}/q{\bf Z}) \equiv SL_2({\bf Z}/q_1 {\bf Z}) \times SL_2({\bf Z}/q_2{\bf Z})}$ whenever ${q=q_1q_2}$ and ${q_1,q_2}$ are coprime. To set up the sieve needed to establish Theorem 1, we need to understand the images ${\pi_q(\Gamma)}$ of a non-virtually-solvable subgroup ${\Gamma}$ of ${SL_2({\bf Z})}$. Clearly this is a subgroup of ${SL_2({\bf Z}/q{\bf Z})}$. Given that ${\Gamma}$ is fairly “large” (in particular, such groups can be easily seen to be Zariski-dense in ${SL_2}$), we expect that in most cases ${\pi_q(\Gamma)}$ is in fact all of ${SL_2({\bf Z}/q{\bf Z})}$. This type of belief is formalised in general as the strong approximation property. 
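As a concrete check of the surjectivity of ${\pi_q}$ (my own aside, not part of the notes): ${SL_2({\bf Z})}$ is generated by the two elementary matrices ${\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}}$ and ${\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}}$, so a breadth-first closure of their reductions mod ${q}$ should exhaust ${SL_2({\bf Z}/q{\bf Z})}$, whose order is ${q^3 \prod_{p|q} (1 - p^{-2})}$. A short Python sketch:

```
def sl2_order(q):
    # |SL_2(Z/qZ)| = q^3 * prod over primes p | q of (1 - 1/p^2)
    n, order, p = q, q**3, 2
    while p * p <= n:
        if n % p == 0:
            order = order * (p * p - 1) // (p * p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        order = order * (n * n - 1) // (n * n)
    return order

def mat_mul(a, b, q):
    # 2x2 matrices stored as tuples (a11, a12, a21, a22), entries mod q.
    return ((a[0]*b[0] + a[1]*b[2]) % q, (a[0]*b[1] + a[1]*b[3]) % q,
            (a[2]*b[0] + a[3]*b[2]) % q, (a[2]*b[1] + a[3]*b[3]) % q)

def generated_subgroup(gens, q):
    # BFS closure of the generating set inside SL_2(Z/qZ).
    identity = (1, 0, 0, 1)
    seen = {identity}
    frontier = [identity]
    while frontier:
        nxt = []
        for m in frontier:
            for g in gens:
                w = mat_mul(m, g, q)
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen

E12 = (1, 1, 0, 1)  # [[1,1],[0,1]]
E21 = (1, 0, 1, 1)  # [[1,0],[1,1]]
for q in (5, 6, 12, 35):
    image = generated_subgroup([E12, E21], q)
    print(q, len(image), sl2_order(q), len(image) == sl2_order(q))
```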
We will not prove the most general instance of this property, but instead focus on the model case of ${SL_2({\bf Z}/q{\bf Z})}$ for ${q}$ square-free, in which one can proceed by ad hoc elementary arguments. The general treatment of the strong approximation property was first achieved by Matthews, Vaserstein, and Weisfeiler using the classification of finite simple groups; a subsequent paper of Nori gave an alternate treatment that avoided the use of this classification. In the previous set of notes (see Remark 2) it was already observed that ${\pi_p(\Gamma) =SL_2({\bf Z}/p{\bf Z})}$ for all sufficiently large primes ${p}$. (Indeed, ${\Gamma}$ did not need to be free for this to hold; it was enough that ${\Gamma}$ not be virtually solvable.) To extend from the prime case to the (square-free) composite case, we will need some basic group theory, and in particular the theory of composition factors. Define a composition series for a group ${G}$ to be a finite sequence $\displaystyle \{1\}= H_0\lhd H_1 \lhd \ldots\lhd H_n = G$ of subgroups, where each ${H_i}$ is a normal subgroup of ${H_{i+1}}$, and the quotients ${H_{i+1}/H_i}$ are all simple. (By convention, we do not consider the trivial group to be simple.) The quotients ${H_{i+1}/H_i}$ for ${i=0,\ldots,n-1}$ are referred to as the composition factors of this series. Exercise 6 Show that every finite group has at least one composition series. A key fact about composition factors, known as the Jordan-Holder theorem, asserts that, up to permutation and isomorphism, they are independent of the choice of series: Theorem 3 (Jordan-Holder theorem) Let $\displaystyle \{1\}= H_0\lhd H_1 \lhd \ldots\lhd H_n = G$ and $\displaystyle \{1\}= K_0\lhd K_1 \lhd \ldots\lhd K_m = G$ be two composition series of the same group ${G}$. Then there is a bijection ${\sigma: \{0,\ldots,n-1\} \rightarrow\{0,\ldots,m-1\}}$ such that for each ${i=0,\ldots,n-1}$, ${H_{i+1}/H_{i}}$ is isomorphic to ${K_{\sigma(i)+1}/K_{\sigma(i)}}$. (In particular, ${n}$ and ${m}$ must be equal.) Proof: By symmetry we may assume that ${n \leq m}$. Fix ${0 \leq i < n}$. Let ${\pi_i: H_{i+1} \rightarrow H_{i+1}/H_i}$ be the quotient map, and consider the groups ${A_j^{(i)} := \pi_i(H_{i+1} \cap K_j) \equiv (H_{i+1} \cap K_j)/(H_i \cap K_j)}$ for ${j=0,\ldots,m}$. These form an increasing family of subgroups of ${H_{i+1}/H_i}$, with ${A^{(i)}_0 = \{1\}}$ and ${A^{(i)}_m = H_{i+1}/H_i}$. Since each ${K_j}$ is a normal subgroup of ${K_{j+1}}$, we see that ${A^{(i)}_j}$ is a normal subgroup of ${A^{(i)}_{j+1}}$. As ${A^{(i)}_m}$ is simple, this implies that there is a unique element ${\sigma(i)}$ of ${\{0,\ldots,m-1\}}$ such that ${A^{(i)}_j}$ is trivial for ${j \leq \sigma(i)}$ and ${A^{(i)}_j}$ is equal to ${H_{i+1}/H_i}$ for ${j > \sigma(i)}$. Now we claim that ${\sigma}$ is a bijection. Suppose this is not the case. Since ${n \leq m}$, there thus exists ${j \in \{0,\ldots,m-1\}}$ which is not in the range of ${\sigma}$. This implies that ${A^{(i)}_j = A^{(i)}_{j+1}}$ for all ${i}$. An induction on ${i}$ then shows that ${H_i \cap K_j = H_i \cap K_{j+1}}$ for all ${i}$, and thus ${K_j = K_{j+1}}$, contradicting the assumption that ${K_{j+1}/K_j}$ is simple. Finally, fix ${i_0 \in \{0,\ldots,n-1\}}$, and let ${j_0 := \sigma(i_0)}$. Then we have ${A^{(i)}_{j_0} = A^{(i)}_{j_0+1}}$ for all ${i \neq i_0}$, while ${A^{(i_0)}_{j_0} = \{1\}}$ and ${A^{(i_0)}_{j_0+1} \equiv H_{i_0+1}/H_{i_0}}$. 
From this and induction we see that ${(H_i \cap K_{j_0+1})/(H_i \cap K_{j_0})}$ is trivial for ${i \leq i_0}$ but isomorphic to ${H_{i_0+1}/H_{i_0}}$ for ${i>i_0}$. (Here we are basically relying on a special case of the Zassenhaus lemma.) In particular, ${K_{j_0+1}/K_{j_0}}$ is isomorphic to ${H_{i_0+1}/H_{i_0}}$, and the claim follows. $\Box$ In view of this theorem, we can assign to each finite group a set (or more precisely, multiset) of composition factors of simple groups, which are unique up to permutation and isomorphism. This is somewhat analogous to how the fundamental theorem of arithmetic assigns to each positive integer a multiset of prime numbers, which are unique up to permutation. (Indeed, the latter can be viewed as the special case of the former in the case of cyclic groups.) Exercise 7 Show that for ${p \geq 5}$ a prime, the composition factors of ${SL_2(F_p)}$ are (up to isomorphism and permutation) the cyclic group ${{\bf Z}/2{\bf Z}}$ and the projective special linear group ${PSL_2(F_p)}$. What happens instead when ${p = 2}$ or ${p=3}$? Also, show that the only normal subgroup of ${SL_2(F_p)}$ (other than the trivial group and all of ${SL_2(F_p)}$) is the center ${Z(SL_2(F_p))\equiv {\bf Z}/2{\bf Z}}$ of the group. Thus, we see (in contrast with the fundamental theorem of arithmetic) that one cannot permute the composition factors arbitrarily. Exercise 8 Let ${N}$ be a normal subgroup of a finite group ${G}$. Show that the set of composition factors of ${G}$ is equal to (up to isomorphism, and counting multiplicity) the union of the set of composition factors of ${N}$, and the set of composition factors of ${G/N}$. In particular, the set of composition factors of ${N}$ and of ${G/N}$ are subsets of the set of composition factors of ${G}$ (again up to isomorphism, and counting multiplicity). As another corollary, we see that the composition factors of a direct product ${G \times H}$ or semidirect product ${G \ltimes H}$ of two finite groups ${G, H}$ is the union of the set of composition factors of ${G}$ and ${H}$ separately (again up to isomorphism, and counting multiplicity). Knowing the composition factors of a group can assist in classifying its subgroups; in particular, groups which are “coprime” in the sense of having no composition factors in common are difficult to “join” together. (Interestingly, the phenomenon of “coprimality” implying “disjointness” also shows up in ergodic theory, in the theory of joinings, but we will not discuss this further here.) Here is an example of this which will be of importance in our application: Lemma 4 Let ${p \geq 5}$ be a prime, and ${G}$ be a finite group which does not have a copy of ${PSL_2(F_p)}$ amongst its composition factors. Let ${H}$ be a subgroup of ${G \times SL_2(F_p)}$ whose projections to ${G}$ and ${SL_2(F_p)}$ are surjective. Then ${H}$ is all of ${G \times SL_2(F_p)}$. Proof: We apply Goursat’s lemma (see Exercise 9 below). Thus if ${N_1 := \{ g \in G: (g,1) \in H\}}$ and ${N_2 := \{ h \in SL_2(F_p): (1,h) \in H \}}$, then ${N_1,N_2}$ are normal subgroups of ${G, SL_2(F_p)}$ respectively such that ${G/N_1}$ is isomorphic to ${SL_2(F_p)/N_2}$. From Exercise 7 we see that ${N_2}$ is either trivial, all of ${SL_2(F_p)}$, or is the center ${Z(SL_2(F_p))}$. If ${N_2}$ is trivial, then ${SL_2(F_p)}$ is isomorphic to a quotient of ${G}$, and thus by Exercise 8 the composition factors of ${SL_2(F_p)}$ are a subset of those of ${G}$. 
But this is a contradiction, since ${PSL_2(F_p)}$ is a composition factor of ${SL_2(F_p)}$ but not of ${G}$. Similarly if ${N_2}$ is the center, since ${SL_2(F_p)/N_2}$ is then isomorphic to ${PSL_2(F_p)}$. So the only remaining case is when ${N_2}$ is all of ${SL_2(F_p)}$. But then as ${H}$ surjects onto ${G}$, we see that ${H}$ is all of ${G \times SL_2(F_p)}$ and we are done. $\Box$ Exercise 9 (Goursat’s lemma) Let ${G_1,G_2}$ be groups, and let ${H}$ be a subgroup of ${G_1 \times G_2}$ whose projections to ${G_1,G_2}$ are surjective. Let ${N_1 := \{ g_1 \in G_1: (g_1,1) \in H \}}$ and ${N_2 := \{ g_2 \in G_2: (1,g_2) \in H \}}$. Show that ${N_1,N_2}$ are normal subgroups of ${G_1,G_2}$, and that ${G_1/N_1}$ and ${G_2/N_2}$ are isomorphic. (Indeed, after quotienting out by ${N_1 \times N_2}$, ${H}$ becomes a graph of such an isomorphism.) Conclude that the set of composition factors of ${H}$ is a subset of the union of the set of composition factors of ${G_1}$ and the set of composition factors of ${G_2}$ (up to isomorphism and counting multiplicity, as usual). As such, we have the following satisfactory description of the images ${\pi_q(\Gamma)}$ of a free group ${\Gamma}$: Corollary 5 (Strong approximation) Let ${\Gamma}$ be a subgroup of ${SL_2({\bf Z})}$ which is not virtually solvable. Let ${M \geq 1}$ be an integer. Then there exists a multiple ${q_1}$ of ${M}$ with the following property: whenever ${q}$ is of the form ${q = d p_1 \ldots p_k}$ with ${d|q_1}$ and ${p_1,\ldots,p_k}$ distinct primes coprime to ${q_1}$, one has $\displaystyle \pi_q(\Gamma) = \pi_d(\Gamma) \times SL_2(F_{p_1}) \times \ldots\times SL_2(F_{p_k})$ (after using the Chinese remainder theorem to identify ${SL_2(F_q)}$ with ${SL_2(F_d) \times SL_2(F_{p_1}) \times \ldots \times SL_2(F_{p_k})}$). In particular, one has $\displaystyle \pi_q(\Gamma) = \pi_d(\Gamma) \times SL_2({\bf Z}/p_1\ldots p_k{\bf Z}).$ The parameter ${M}$ will not actually be needed in our application, but is useful in the more general setting in which ${f}$ has rational coefficients instead of integer coefficients. Proof: We already know that ${\pi_p(\Gamma)= SL_2(F_p)}$ for all but finitely many primes ${p}$. Let ${q_0}$ be the product of ${M}$ with all the exceptional primes, as well as ${2}$ and ${3}$, thus ${p \geq 5}$ and ${\pi_p(\Gamma)= SL_2(F_p)}$ for all ${p}$ coprime to ${q_0}$. By repeated application of Lemma 4 this implies that ${\pi_{p_1 \ldots p_k}(\Gamma) = SL_2(F_{p_1}) \times \ldots \times SL_2(F_{p_k})}$ for any distinct primes ${p_1,\ldots,p_k}$ coprime to ${q_0}$ (the key point being that the groups ${PSL_2(F_p)}$ for primes ${p \geq 5}$ are all non-isomorphic to each other and to ${{\bf Z}/2{\bf Z}}$ by cardinality considerations). The finite group ${\pi_{q_0}(\Gamma)}$ may contain copies of ${PSL_2(F_p)}$ amongst its composition factors for a finite number of primes ${p}$ coprime to ${q_0}$; let ${q_1}$ be the product of ${q_0}$ with all these primes. By many applications of Exercise 9, we see that the set of composition factors of ${\pi_{q_1}(\Gamma)}$ is contained in the union of the set of composition factors of ${\pi_{q_0}(\Gamma)}$, and the set of composition factors of ${\pi_p(\Gamma) = SL_2(F_p)}$ for all ${p}$ dividing ${q_1}$ but not ${q_0}$. 
As a consequence, we see that ${PSL_2(F_p)}$ is not a composition factor of ${\pi_{q_1}(\Gamma)}$ for any ${p}$ coprime to ${q_1}$; by Exercise 8, ${PSL_2(F_p)}$ is also not a composition factor of ${\pi_d(\Gamma)}$ for any ${d}$ dividing ${q_1}$ and ${p}$ coprime to ${q_1}$. By many applications of Lemma 4, we then obtain the claim. $\Box$ As a simple application of the above corollary, we observe that we may reduce Theorem 1 to the case when ${\Lambda}$ is a free group on two generators. Indeed, if ${\Lambda}$ is not virtually solvable, then by the Tits alternative (Theorem 5 from Notes 6), ${\Lambda}$ contains a subgroup ${\Lambda'}$ which is a free group on two generators (and in particular, continues to not be virtually solvable). Now the polynomial ${f}$ need not be primitive on ${\Lambda'}$, so we cannot deduce Theorem 1 for ${\Lambda,f}$ from its counterpart for ${\Lambda',f}$. However, by Corollary 5 we have an integer ${q_1 \geq 1}$ such that $\displaystyle \pi_q(\Gamma') = \pi_d(\Gamma') \times SL_2({\bf Z}/p_1\ldots p_k{\bf Z}) \ \ \ \ \ (17)$ whenever ${q=dp_1\ldots p_k}$ with ${d|q_1}$ and ${p_1,\ldots,p_k}$ are distinct primes coprime to ${q_1}$. As ${f}$ is primitive with respect to ${\Lambda}$, we may find ${a \in \Gamma}$ such that ${f(a)}$ is coprime to ${q_1}$. By translating ${f}$ by ${a}$, we obtain a new polynomial ${f'}$ for which ${f'(1)}$ is coprime to ${q_1}$. In particular, for any ${d|q_1}$, we have ${f'(1)}$ coprime to ${d}$. By (17), this implies that for any square-free ${q}$ (and hence for arbitrary ${q}$), we can find ${a \in \Gamma'}$ with ${f'(a)}$ coprime to ${q}$. Thus ${f'}$ is primitive with respect to ${\Lambda'}$, and so we may deduce Theorem 1 for ${\Lambda,f}$ from its counterpart for ${\Lambda',f'}$. — 3. Sieving in thin groups — We can now deduce Theorem 1 from the following expander result: Theorem 6 (Uniform expansion) Let ${a,b}$ generate a free group ${\Lambda}$ in ${SL_2({\bf Z})}$. Then, as ${q}$ runs through the square-free integers, ${Cay(\pi_q(\Lambda), \pi_q(\{a,b,a^{-1},b^{-1}\}))}$ form a two-sided expander family. When ${q}$ is restricted to be prime, this result follows from Theorem 3 from Notes 6. The extension of this theorem to non-prime ${q}$ is more difficult, and will be discussed later. For now, let us assume Theorem 6 and see how we can use it, together with the beta sieve, to imply Theorem 1. As discussed in the preceding section, to show Theorem 1 we may assume without loss of generality that ${\Lambda}$ is a free group on two generators ${a,b}$. Let ${\mu := \frac{1}{4} (\delta_a +\delta_b+\delta_{a^{-1}}+\delta_{b^{-1}})}$ be the generator of the associated random walk, and let ${T}$ be a large integer. Then ${\mu^{(T)}}$ will be supported on elements of ${\Lambda}$ whose coefficients have size ${O(\exp(O(T)))}$, where we allow implied constants to depend on ${a,b}$. In particular, for ${x}$ in this support, ${f(x)}$ will be an integer of size ${O(\exp(O(T)))}$, where we allow implied constants to depend on ${f}$ also. On the other hand, ${\mu^{(T)}}$ has an ${\ell^\infty}$ norm that decreases exponentially in ${T}$ (by Exercise 6 of Notes 6). If we then set ${z := \exp(\epsilon' T)}$ for a sufficiently small absolute constant ${\epsilon'>0}$, it will then suffice to show that with probability ${\gg T^{-O(1)}}$, an element ${x}$ drawn from ${\Lambda}$ with distribution ${\mu^{(T)}}$ is such that ${f(x)}$ is non-zero and coprime to ${P(z)}$. It will be convenient to knock out a few exceptional primes. 
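To see Corollary 5 in action on a concrete thin group (my own illustration; the specific matrices and moduli are chosen only for convenience), one can take the Sanov generators ${\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}}$, which generate a free subgroup of ${SL_2({\bf Z})}$, and compute the image of that subgroup modulo a small odd square-free ${q}$ (modulo ${2}$ the image is trivial, so ${2}$ plays the role of an exceptional prime here). One expects the image to be the full product ${SL_2(F_{p_1}) \times \ldots \times SL_2(F_{p_k})}$, and the sketch below checks this by comparing cardinalities.

```
def mat_mul(a, b, q):
    # 2x2 matrices stored as tuples (a11, a12, a21, a22), entries mod q.
    return ((a[0]*b[0] + a[1]*b[2]) % q, (a[0]*b[1] + a[1]*b[3]) % q,
            (a[2]*b[0] + a[3]*b[2]) % q, (a[2]*b[1] + a[3]*b[3]) % q)

def image_mod_q(gens, q):
    # BFS closure of the generating set under multiplication mod q.
    identity = (1, 0, 0, 1)
    seen = {identity}
    frontier = [identity]
    while frontier:
        nxt = []
        for m in frontier:
            for g in gens:
                w = mat_mul(m, g, q)
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen

def sl2_p_order(p):
    return p * (p - 1) * (p + 1)  # |SL_2(F_p)| for p prime

A = (1, 2, 0, 1)  # Sanov generator [[1,2],[0,1]]
B = (1, 0, 2, 1)  # Sanov generator [[1,0],[2,1]]
for q, prime_factors in ((15, (3, 5)), (35, (5, 7))):
    img = image_mod_q([A, B], q)
    expected = 1
    for p in prime_factors:
        expected *= sl2_p_order(p)
    print(q, len(img), expected, len(img) == expected)
```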
From Corollary 5, we may find an integer ${q_1}$ with the property that $\displaystyle \pi_q(\Lambda) = \pi_d(\Lambda) \times SL_2(F_{p_1}) \times \ldots\times SL_2(F_{p_k})$ whenever ${q = d p_1 \ldots p_k}$ with ${d|q_1}$ and ${p_1,\ldots,p_k}$ distinct primes coprime to ${q_1}$. As ${f}$ is primitive, we may find a residue class ${x_1 \in \pi_{q_1}(\Lambda)}$ such that ${f(x_1)}$ is coprime to ${q_1}$. For each integer ${n}$, let ${a_n}$ denote the quantity $\displaystyle a_n := \sum_{x \in \Lambda: f(x)=n; x = x_1 \mod q_1} \mu^{(T)}(x).$ It will suffice to show that $\displaystyle \sum_{n: (n,P(z))=1} a_n \gg T^{-O(1)}.$ To do this, we will use the beta sieve. Indeed, by Exercise 2 it suffices to establish a bound of the form $\displaystyle \sum_{n: d|n} a_n = \frac{1}{|\pi_{q_1}(\Lambda)|} g(d) + O( \exp(-\epsilon T) ) \ \ \ \ \ (18)$ for all square-free ${1 \leq d \leq \exp(\epsilon T)}$, some constant ${\epsilon>0}$, and some multiplicative function ${g}$ obeying the bounds $\displaystyle g(p)<1$ and $\displaystyle g(p) \ll 1/p$ for all primes ${p}$. By choice of ${x_1}$, the quantity ${a_n}$ vanishes whenever ${n}$ is not coprime to ${q_1}$. So we will set ${g(p)=0}$ for the primes ${p}$ dividing ${q_1}$, and it will suffice to establish (18) for ${d}$ coprime to ${q_1}$. The left-hand side of (18) can then be expressed as $\displaystyle \sum_{x \in \pi_d(\Lambda): f(x)=0} ((\pi_{q_1 d})_* \mu)^{(T)}(x_1,x),$ where we descend the polynomial ${f: SL_2({\bf Z}) \rightarrow {\bf Z}}$ to a polynomial ${f: SL_2({\bf Z}/d{\bf Z}) \rightarrow {\bf Z}/d{\bf Z}}$ in the obvious fashion. However, in view of Theorem 6 (and the random walk interpretation of expansion), we have $\displaystyle ((\pi_{q_1 d})_* \mu)^{(T)}(x) = |\pi_{q_1 d}(\Lambda)|^{-1} + O( \exp(-cT) )$ for some ${c>0}$ independent of ${\epsilon}$. Note that $\displaystyle |\pi_d(\Lambda)| \leq |SL_2({\bf Z}/d{\bf Z})| \ll d^{O(1)} \ll \exp(O(\epsilon T))$ while from Corollary 5 we have $\displaystyle |\pi_{q_1 d}(\Lambda)| = |\pi_{q_1}(\Lambda)| |\pi_d(\Lambda)|$ and thus $\displaystyle \sum_{n: d|n} a_n = \frac{1}{|\pi_{q_1}(\Lambda)|} g(d) + O( \exp(-\epsilon T) )$ for ${\epsilon>0}$ small enough, where ${g(d)}$ is defined for ${d}$ coprime to ${q_1}$ as $\displaystyle g(d) := \frac{1}{|\pi_d(\Lambda)|} |\{ x \in SL_2({\bf Z}/d{\bf Z}): f(x)=0 \}|.$ As ${f}$ is primitive, we have ${g(d)<1}$ for all such ${d}$; from Corollary 5 we see that ${g}$ is multiplicative for such ${d}$. Finally, from the Schwarz-Zippel lemma (see Exercise 23 from Notes 5) we have $\displaystyle g(p) \ll 1/p$ and Theorem 1 follows. Remark 3 One can obtain more precise bounds on ${g(p)}$ using the Lang-Weil theorem, but we will not need this result here. (Such results would however be needed if one wanted more quantitative information than Theorem 1; see the paper of Bourgain, Gamburd, and Sarnak for details.) It remains to establish Theorem 6. In the case when ${q}$ is prime, this was achieved in previous sections using the ingredients of quasirandomness, product theorems, and non-concentration. In the original paper of Bourgain, Gamburd, and Sarnak, these ingredients were extended to the square-free case by hand, which led to a fairly lengthy argument. 
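To get a concrete feel for the density ${g(d)}$ appearing above (my own aside, not part of the notes), one can enumerate ${SL_2({\bf Z}/p{\bf Z})}$ for small primes ${p}$ and count zeros of a sample polynomial on it. The choice ${f(x) = x_{11}}$ (the top-left matrix entry) below is purely illustrative and is not the ${f}$ of Theorem 1; for this choice one can check by hand that ${g(p) = 1/(p+1)}$, in line with the Schwarz-Zippel bound ${g(p) \ll 1/p}$.

```
from itertools import product

def g_of_p(p):
    # Proportion of x in SL_2(F_p) with f(x) = 0, for the sample polynomial
    # f(x) = x_{11} (top-left entry).  Illustrative choice only.
    group_size = 0
    zeros = 0
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 1:
            group_size += 1
            if a == 0:
                zeros += 1
    return zeros, group_size

for p in (5, 7, 11, 13):
    zeros, size = g_of_p(p)
    print(f"p={p}: g(p) = {zeros}/{size} = {zeros/size:.4f},  1/p = {1/p:.4f}")
```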
In the subsequent paper of Varju, it was shown that each of these ingredients can in fact be more or less automatically bootstrapped from the prime case to the square-free case by using tools such as the Chinese remainder theorem (or strong approximation) to “factor” the latter case into copies of the former, thus simplifying the extension to the square-free case significantly. We will not give the full argument here, but just to convey a taste of these sorts of product arguments, we will discuss the product structure of just one of the three ingredients, namely quasirandomness. (The extension of this ingredient to the square-free setting was already observed in the Bourgain-Gamburd-Sarnak paper.) As a consequence of Proposition 4 of Notes 3, the following claim was shown: Proposition 7 Let ${G}$ be a ${|G|^\alpha}$-quasirandom finite group, let ${S}$ be a symmetric set of generators of ${G}$ of cardinality ${k}$ not containing the identity, and let ${\mu := \frac{1}{|S|} \sum_{s \in S} \delta_s}$. If $\displaystyle \| \mu^{*n} \|_{\ell^2(G)} \leq |G|^{-1/2+\alpha/4}$ (say) for some ${n = O(\log |G|)}$, then ${Cay(G,S)}$ is a two-sided ${\epsilon}$-expander for some ${\epsilon}$ depending only on ${\alpha, k}$ and the implied constants in the ${O()}$ notation. It turns out that this fact can be extended to product groups: Proposition 8 Proposition 7 continues to hold if the hypothesis that ${G}$ is ${|G|^\alpha}$-quasirandom is replaced with the hypothesis that ${G = G_1 \times \ldots \times G_n}$ for some ${n \geq 0}$ and some finite groups ${G_1,\ldots,G_n}$, with each ${G_i}$ being ${|G_i|^\alpha}$-quasirandom. The key point here is that the expansion constant ${\epsilon}$ does not depend on the number ${n}$ of groups in this factorisation. Proof: (Sketch) For technical reasons it is convenient to allow ${S}$ to have multiplicity and to possibly contain the identity; this will require generalising the notion of Cayley graph, and of expansion in such generalised graphs. Let ${f}$ be a non-constant eigenfunction of the adjacency operator, thus ${f * \mu = \lambda f}$ for some real ${\lambda}$. The objective is to prove that ${|\lambda| \leq 1-\epsilon}$ for some sufficiently small ${\epsilon>0}$ independent of ${n}$. The claim for ${n=0}$ is trivial, so assume inductively that ${n \geq 1}$ and that the claim is proven for all smaller values of ${n}$ (with a fixed choice of ${\epsilon}$). For each ${G_i}$, we can partition the eigenspaces of the adjacency operator into those functions which are invariant in the ${G_i}$ direction, and those functions which have mean zero in each coset of ${G_i}$. These partitions are compatible with each other as ${i}$ varies (basically because the operations of averaging in ${G_i}$ and averaging in ${G_j}$ commute). Thus, without loss of generality, we may assume that the eigenfunction ${f}$ is such that for each ${i}$, either ${f}$ is ${G_i}$-invariant, or has mean zero on each ${G_i}$ coset. If ${f}$ is ${G_n}$-invariant, then the eigenvalue ${\lambda}$ would also persist after projecting ${S}$ down from ${G}$ to ${G_1 \times \ldots \times G_{n-1}}$ (possibly picking up the identity or some multiplicity in the process). The claim then follows from the induction hypothesis. Similarly if ${f}$ is ${G_i}$-invariant for any other value of ${i}$. Thus we may assume that ${f}$ has mean zero in each ${G_i}$ coset. 
One can show that every irreducible unitary representation of ${G}$ splits as a tensor product of irreducible unitary representations of the ${G_i}$. If one lets ${V}$ be the subspace of ${\ell^2(G)}$ spanned by ${f}$ and its left translates, we thus see that ${V}$ contains at least one such tensor product; but as every element of ${V}$ will have mean zero in each ${G_i}$ coset, the factors in this tensor product will all be non-trivial. Using quasirandomness, the ${i^{th}}$ factor will have dimension at least ${|G_i|^\alpha}$, and so ${V}$ must have dimension at least ${|G|^\alpha}$. At this point, one can use a trace formula to relate ${V}$ to ${\|\mu^{*n}\|_{\ell^2(G)}^2}$ to conclude the argument. $\Box$ Exercise 10 Develop the above sketch into a complete proof of the proposition.
2022-08-16 10:52:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 849, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653582572937012, "perplexity": 100.37411040381622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00446.warc.gz"}
https://lakschool.com/en/math/quadratic-equations/quadratic-equations
A quadratic equation is any equation that can be put into the following general form: $ax^2+bx+c=0$ The requirement is that $a \ne 0$. $a$, $b$ and $c$ are coefficients. ### Note A quadratic equation can have either one, two or no solutions. ### Remember The equation $x^2+px+q=0$ is called the canonical form of the quadratic equation. It is possible to put every quadratic equation in the canonical form. To bring a quadratic equation into canonical form, the whole equation is divided by the coefficient $a$ (the number that precedes $x^2$). $ax^2+bx+c=0$  $|:a$ $x^2+\frac{b}{a}x+\frac{c}{a}=0$ ### Example Put the equation $2x^2+8x+7=0$ in the canonical form. $2x^2+8x+7=0$  $|:2$ $x^2+4x+3.5=0$
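If it helps to see the recipe as code, here is a tiny Python helper (an addition of mine, not part of the original lesson) that performs the division by $a$:

```
def canonical_form(a, b, c):
    """Return (p, q) such that ax^2 + bx + c = 0 is equivalent to x^2 + px + q = 0."""
    if a == 0:
        raise ValueError("Not a quadratic equation: a must be non-zero.")
    return b / a, c / a

# Example from the lesson: 2x^2 + 8x + 7 = 0  ->  x^2 + 4x + 3.5 = 0
p, q = canonical_form(2, 8, 7)
print(f"x^2 + {p}x + {q} = 0")
```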
2022-11-29 12:34:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9134398698806763, "perplexity": 268.5768629282099}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00034.warc.gz"}
https://proxies-free.com/tag/forbidden/
## iis – Error (403): Forbidden As a background, I'm mainly an embedded developer and was hired to make a simple change to one of our company's web applications. The application is an ASP.NET application (originally developed with Visual Studio 2010) running on Windows Server 2012. I am trying to modify / debug the code on my local Windows 10 computer with Visual Studio 2017. When I try to run the application locally on Visual Studio 2017, Visual Studio displays a message that debugging on the Web server can not be started. The remote server returned an error: (403) Forbidden. " I have tried several things, such as For example, grant IIS_USR and Network Service permissions to my path. The following is logged in C: inetpub logs LogFiles W3SVC1: 2019-10-15 15:07:14 :: 1 DEBUG /ECNAD/DebugAttach.aspx – 80 – :: 1 – – 403 0 0 1249 2019-10-15 15:15:46 :: 1 DEBUG /ECNAD/DebugAttach.aspx – 80 – :: 1 – – 403 0 0 39 I would be very happy to receive any help to run this web application locally. ## Sampling of a uniform distribution of fixed-size strings containing no forbidden substrings Suppose the alphabet is $${a, b }$$and you have a forbidden word $$aa$$, Suppose we try to generate a word of length 3. The first two letters are evenly distributed $$ab, ba, bb$$, Therefore, the first letter has the following distribution: $$a$$ with probability $$1/3$$. $$b$$ with probability $$2/3$$, In contrast, the allowed words $$aba, abb, bab, bba, bbb.$$ So the first letter should have the distribution $$a$$ with probability $$2/5$$. $$b$$ with probability $$3/5$$, Here is an algorithm that works. Create a DFA (or UFA) for your language. For every state $$q$$With dynamic programming, you can count how many words are long $$m$$ are accepted when the machine is restarted $$q$$, Let us denote this $$c (q, m)$$, The correct distribution of the first letter $$sigma_1$$ from a word of length $$n$$ is in the language $$Pr ( sigma_1 = sigma) = frac {c ( delta (q_0, sigma), n-1)} {c (q_0, n)}.$$ Quite generally in the face of the first $$ell$$ letters $$sigma_1 ldots sigma_ ell$$The following letter has the distribution $$Pr ( sigma_ { ell + 1} = sigma mid sigma_1 ldots sigma_ ell) = frac {c ( delta (q_0, sigma_1 ldots sigma_ ell sigma), n – ell-1)} {c ( delta (q_0, sigma_1 ldots sigma_ ell), n- ell)}.$$ If you ignore the cost of arithmetic, you can roughly implement this scheme $$O (| Q | n)$$, Where $$Q$$ is the set of states or in $$O (| Sigma | n ^ 2)$$, (The former assuming that $$| Q | = Omega (| Sigma |)$$.) As an example, consider the above counter example. We construct a two-state DFA (we can omit the sink state to get a UFA) $$q_0, q_1$$, The transition function is $$Delta (q_0, a) = q_1$$. $$Delta (q_0, b) = q_0$$. $$Delta (q_1, b) = q_0$$, The relevant values ​​of $$c$$ are $$begin {array} {c | cc} n & c (q_0, n) & c (q_1, n) \ hline 0 & 1 & 1 \ 1 & 2 & 1 \ 2 & 3 & 2 \ 3 & 5 & 3 end {array}$$ These are calculated by the repetitions $$c (q_0, n) = c (q_0, n-1) + c (q_1, n-1)$$ and $$c (q_1, n) = c (q_0, n-1)$$with basic housing $$c (q, 0) = 1$$, Since $$Delta (q_0, a) = q_1$$ and $$Delta (q_0, b) = q_0$$we see that (eg $$n = 3$$) $$Pr ( sigma_1 = a) = c (q_1,2) / c (q_0,3) = 2/5$$ and $$Pr ( sigma_1 = b) = c (q_0,2) / c (q_0,3) = 3/5$$, ## json – D8 JsonApi Post 403 Forbidden error I seem to have trouble creating a node through json api. I am able to fix knots and generate an access token via oauth, but I'm not lucky enough to create something. 
I also enabled the permissions to create a node for my content type under the consumer. Any suggestions on what to try next? ## Brute force Scanner Many automatic scanners bypass locked directory listings by looking for "bruteforce" files. This means that they are looking for additional files whose names are similar to those of the existing files (ie. `filename.js1` and files that are not referenced at all (aka `secret.txt`). If you happen to have a file whose name is on the bruteforced list and which is in an accessible directory, it will be found, regardless of whether the "directory listing" is enabled or not It's worth noting that hackers do the same, so this is a real problem. If something is in a publicly accessible directory, you should generally think that it is found. So if you do not want it to be public, you need to keep it away from public directories – disabling the directory list offers very little security. ## Real weaknesses In the end, this does not seem to be a big problem (and probably is not), but leaving backups of javascript files in public directories is generally a bad idea. When it comes to XSS, an attacker generally has the most success if he can exploit a javascript file hosted on the same domain. This is because this provides the opportunity to bypass a CSP or other "security firewalls". If an older Javascript file contains a vulnerability that was fixed in a later release, and an attacker has found a way to force the user's browser to load the older Javascript file, it may be linked to a more malicious vulnerability. This may seem far-fetched, but how many of the worst security holes happen when many small vulnerabilities are grouped together into one larger one? tl / dr: If something is hosted by your website but has none Reason to be there, then it is a liability. Kill it with prejudice. ## xampp – How do I solve 403 Forbidden Error in Apache? I work server side and had a problem. I use XAMPP and Apache server in my server. First I buy a static IP and open the port for everyone. I can succeed if: "http: // {StaticIP} / api / NewsJson", But if I try "https: // {StaticIP} / api / NewsJson"I take 403 errors in the browser. I search and find a few solutions. First, I change the line "xampp apache conf extra httpd-xampp" Folder. I change the locally granted change requires all granted. ``````ScriptAlias /php-cgi/ "C:/xampp/php/" AllowOverride None Options None Require all granted Require all granted SetHandler cgi-script SetHandler None Require all granted AllowOverride AuthConfig Options +Indexes DirectoryIndexTextColor "#000000" DirectoryIndexBGColor "#f8e8a0" Require all granted ErrorDocument 403 /error/XAMPP_FORBIDDEN.html.var AllowOverride AuthConfig Require all granted ErrorDocument 403 /error/XAMPP_FORBIDDEN.html.var Alias /webalizer "C:/xampp/webalizer/" AllowOverride AuthConfig Require all granted ErrorDocument 403 /error/XAMPP_FORBIDDEN.html.var `````` Then I add this line "xampp apache conf extra httpd-vhosts" Folder. `````` DocumentRoot "C:/xampp/htdocs/api/NewsJson" ServerName 192.168.*.** (My Server IP) AllowOverride All Order allow,deny Allow from all Require all granted `````` And I change mine ".Htaccess" Folder. ``````RewriteEngine On RewriteRule NewsJson.html\$ NewsJson.php (L) `````` If I change it, I have Apache closed and reopened. But I still take 403 banned errors. What can I solve this problem? 
## kubernetes – Forbidden to empty users "" cubic That's the command ``````kubectl --namespace=somenamespace exec -it test sh Error from server (Forbidden): pods "test" is forbidden: User "" cannot create resource `````` There's my kube config `````` user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 env: - name: AWS_PROFILE value: "test" # refers to aws profile test located in ~/.aws/config, command: aws-iam-authenticator args: - "token" - "-i" - "qa" `````` aws config is ``````(profile test) source_profile = sso region = us-east-1 `````` I do not understand why the user is empty "" and I have received a forbidden error ## linux – Replaces Python for function checking for forbidden characters I have "blackbox" with the following python function code (without permission to change it): ``````def exec_ping(): forbidden = ('&', ';', '-', '`', '||', '|') command = input('Enter an IP: ') for i in forbidden: if i in command: print('Invalid characters') exit() os.system('ping ' + command) `````` I would like to execute this function with the following command input: ``````-c 1 localhost; whoami; `````` For this command to execute: ``````ping -c 1 localhost; whoami; `````` How can I bypass the check for forbidden characters? Can I use other characters / encodings? ## sharepoint online – App step Forbidden error message – Update the SP Designer 2013 permission group When I try to run a Designer 2013 workflow with an app step, my log displays the following results: 25.07.2013 16:20 clock HRO ID: i: 0 # .f | Membership | bob@bob.gov 25.07.2013 16:20 clock {"__metadata": {"type": "SP.User"}, "LoginName": "i: 0 # .f | membership | bob@bob.gov"} 25.07.2013 16:20 clock *** Add User Response Code: Forbidden I've configured my site to allow app steps. I can create them in my designer workflows and publish them successfully. I know that the URL I am passing the REST call to is correct. When I paste the URL directly into my browser, a successful result is displayed that lists the actual members of the permission group that I want to update. What should I look for in configurations to fix this? ## sharepoint online – Apply-PnPProvisioningTemplate (403) Forbidden, there is no web named "/SiteURLName/_vti_bin/sites.asmx" #### Stack Exchange network The Stack Exchange network consists of 176 Q & A communities, including Stack Overflow, the largest and most trusted online community where developers can learn, share, and build a career. Visit Stack Exchange ## Forbidden (403) CSRF validation failed. Request canceled. Hello I have fiber and run a home server that points to my URL zyngalu.com I'm learning PHP and have my first input form on my page. I use both Chrome and Firefox to check my website. On average, Chrome emits this error every fifth time I use the form. (see below) I'm not sure if Chrome does this or Apache. Either way, I want to eliminate it. My website does not use cookies …
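Returning to the earlier question on this page about sampling a uniform distribution of fixed-size strings containing no forbidden substrings: the answer there builds a DFA for the allowed language, counts accepted suffixes per state by dynamic programming, and then samples each letter with the stated conditional probabilities. Below is a minimal Python sketch of that scheme for the example alphabet {a, b} with forbidden word "aa"; the function names are mine, and the final frequency check should show the five allowed words of length 3 appearing with roughly equal counts.

```
import random
from collections import Counter

# DFA for strings over {a, b} avoiding the factor "aa".
# States: 0 = last letter was not 'a' (or start), 1 = last letter was 'a'.
ALPHABET = "ab"
START = 0

def step(state, letter):
    # Next state, or None if the forbidden factor "aa" would appear.
    if letter == "a":
        return None if state == 1 else 1
    return 0

def counts(n):
    # c[m][q] = number of accepted suffixes of length m starting from state q.
    c = [{0: 1, 1: 1}]  # every state accepts the empty suffix
    for _ in range(n):
        prev = c[-1]
        c.append({q: sum(prev[step(q, s)] for s in ALPHABET if step(q, s) is not None)
                  for q in (0, 1)})
    return c

def sample(n, rng):
    c = counts(n)
    state, word = START, []
    for m in range(n, 0, -1):
        weights = []
        for s in ALPHABET:
            nxt = step(state, s)
            weights.append(0 if nxt is None else c[m - 1][nxt])
        letter = rng.choices(ALPHABET, weights=weights)[0]
        word.append(letter)
        state = step(state, letter)
    return "".join(word)

rng = random.Random(1)
freq = Counter(sample(3, rng) for _ in range(50000))
print(freq)  # aba, abb, bab, bba, bbb should each appear roughly 10000 times
```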
2019-10-15 23:42:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19075500965118408, "perplexity": 3581.6588688502516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00090.warc.gz"}
https://math.stackexchange.com/questions/2502685/lim-x-rightarrow-0-frac1-cos-xex-1-infty-using-lhopital/2502698
# $\lim_{x\rightarrow 0^+}\frac{(1+\cos x)}{(e^x-1)}= \infty$ using l'Hopital I need to show $$\lim_{x\rightarrow 0^+}\frac{1+\cos x}{e^x-1}=\infty$$ I know that, say, if you let $f(x) = 1 + \cos x$ and $g(x) = \dfrac{1}{e^x-1}$, and then multiply the limits of $f(x)$ and $g(x)$, you get $\frac{2}{0}$. I can't figure out how to make it work for l'Hopital's rule however, i.e. how to rewrite it so that it is in the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$. I also tried multiplying $h(x)$ by the conjugate of $f(x)$, but I don't think this is fruitful. Any hints appreciated. I don't know why you like that form. If you insist, $$\lim_{x\to 0^+} \frac{(\cos x +1)}{e^x-1}$$ can be rewritten to $$\lim_{x\to 0^+} \frac{2+(\cos x-1)}{e^x-1}$$ Then you can use L'Hopital rule with the right part. It seems wired. Recall that $$e^x \sim 1 + x + \text{(high order terms)},$$ for $x \to 0^+$. Then $e^x - 1 \sim x$, and you can solve: $$\lim_{x\rightarrow0+}\frac{(1+\cos x)}{x} = \ldots$$ • Are we justified in lopping off the higher order terms because we are near the origin? Nov 3 '17 at 10:22 • To be more precise, the same job can be done for the numerator. In this case $1+ \cos x \sim 1 + 1 - \frac{x^2}{2} + \text{hot}$. Then, the whole thing reduces to: $$\frac{2 - \frac{x^2}{2} + \text{hot}}{x + \text{hot}}.$$ Then, yes, you are allowed to do this. Notice that $\text{hot} \to 0$ as $x \to 0$. Nov 3 '17 at 10:25
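For completeness, here is the computation the first answer is hinting at, written out (my own elaboration, not from any of the posted answers). Split off the constant part of the numerator: $$\frac{1+\cos x}{e^x-1} = \frac{2}{e^x-1} + \frac{\cos x-1}{e^x-1}.$$ As $x\to 0^+$ the first term tends to $+\infty$, since its numerator is the constant $2$ and its denominator tends to $0$ through positive values. The second term is a genuine $\frac{0}{0}$ form, so l'Hopital applies to it: $$\lim_{x\to 0^+}\frac{\cos x-1}{e^x-1} = \lim_{x\to 0^+}\frac{-\sin x}{e^x} = 0.$$ Adding the two pieces gives $\lim_{x\to 0^+}\frac{1+\cos x}{e^x-1}=+\infty$.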
2022-01-20 13:19:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9703884124755859, "perplexity": 101.83815087546064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00627.warc.gz"}
http://mathoverflow.net/questions/31646/does-algebraic-numbers-coloured-by-degree-form-a-fractal/106823
## Does “Algebraic numbers coloured by degree” form a fractal? This picture from Wikipedia's article on Algebraic numbers shows a visualization of Algebraic numbers coloured by degree. I'm wondering if this is a fractal? - The points in that picture are sized based on how small the integer coefficient of the minimal polynomial are (so, e.g., the point at zero is huge). Naturally, the smaller the points get, the more of them there are, which gives an illusion of self-similar structure--but you could get something similar by looking at the rationals and putting an interval of size, say, $5^{-max(\pm p, q)}$ about each rational $p/q$. The resulting set would have positive Lebesgue measure, hence fractal dimension 1, but its boundary might be interesting. – Charles Staats Jul 13 2010 at 1:50 The similarity of this particular picture to the Mandelbrot set is perhaps misleading. Consider that the algebraic numbers include all possible $r + s i$ where $r,s$ are rational numbers and $i^2 = -1.$ So, while the field has many self-similarity properties built in, it is probably best to think of the algebraic numbers as a sort of fog that is roughly the same everywhere. – Will Jagy Jul 13 2010 at 1:55 Fair enough. Thanks for the clarification. – M.S. Jul 14 2010 at 17:19 John Baez has a page on a similar picture of Dan Christensen, and some feathery patterns lend additional credibility to this: math.ucr.edu/home/baez/roots There are some references at the bottom. – Robert Haraway Jan 3 2011 at 3:14 The notion of having the radius proportional to the degree feels somewhat strange, and does not easily compare to other methods of creating fractals. However, it would be the boundary that is the interesting set here, as it will constitute of a union of circles. Such fractals are not unheard of, see for example the Apollonian gasket, and it does not feel arcane to think that there is some non-obvious self-similarity going on there. Odds are that the boundary has some non-integer Haussdorff dimension. – Per Alexandersson Aug 30 2011 at 9:22 show 1 more comment If you consider the set of roots of polynomials whose coefficients are entirely $1$ or $-1$, and take the topological closure of that set, you get a fractal pattern closely related to the Dragon curve. - The algebraic numbers are countable hence $\dim_{H}A=0$ for each subset. But one defines a fractal by non-intger Hausdorff dimenson. - Not quite. Everyone agrees that the boundary of the Mandelbrot set is a fractal, but it has Hausdorff dimension 2. Of course its topological dimension is 1, so a better definition would be the one of Mandelbrot, namely Hausdorff dimension strictly bigger than topological dimension. But even this definition is not universally agreed upon... – Wolfgang Loehr Sep 10 at 16:31 Many folks I know actually use Minkowski dimension rather than Hausdorff dimension for this. For which countability does not imply dimension zero. – BSteinhurst Sep 11 at 2:12 Dear Wolfgang, you are right it would be better to put it this way, but also by your definition a contbale set is not a fractal. – Jörg Neunhäuserer Sep 20 at 16:40 Dear Bsteinhurst, to use Minkowski dimension is know to be problematic. You would have to say for instance that $\{0\}\cup\{1/n|n\ge 1\}$ is a fractal. – Jörg Neunhäuserer Sep 20 at 16:44 Dear Jorg, in some sense it is. But one can create a non-trivial self-similar fractal with integer Hausdorff dimension as well. So dimensionality alone isn't a good definition for what a fractal is. – BSteinhurst Oct 8 at 22:40
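For anyone who wants to reproduce a picture in the spirit of the one under discussion (a quick sketch of mine, not the code behind the Wikipedia image), one can take all polynomials of each degree with small integer coefficients and scatter-plot their complex roots, coloured by the degree of the defining polynomial. Strictly speaking one would restrict to irreducible polynomials to colour by the true degree of the algebraic number; this sketch skips that refinement, and the coefficient bound is kept small so the computation stays quick.

```
import itertools
import numpy as np
import matplotlib.pyplot as plt

MAX_DEGREE = 4   # colour roots by the degree of the polynomial they come from
COEFF_BOUND = 4  # coefficients range over -4..4

colors = plt.cm.viridis(np.linspace(0, 1, MAX_DEGREE))
fig, ax = plt.subplots(figsize=(8, 8))

for degree in range(1, MAX_DEGREE + 1):
    pts = []
    coeff_range = range(-COEFF_BOUND, COEFF_BOUND + 1)
    for coeffs in itertools.product(coeff_range, repeat=degree + 1):
        if coeffs[0] == 0:            # leading coefficient must be non-zero
            continue
        pts.extend(np.roots(coeffs))  # roots of c0*x^degree + ... + c_degree
    pts = np.asarray(pts)
    ax.scatter(pts.real, pts.imag, s=0.2, color=colors[degree - 1],
               label=f"degree {degree}")

ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.set_aspect("equal")
ax.legend(markerscale=20)
plt.show()
```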
2013-05-25 23:34:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826139509677887, "perplexity": 561.8342101424176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470784/warc/CC-MAIN-20130516121430-00070-ip-10-60-113-184.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/571053/consistency-between-two-outputs-of-a-neural-network/571058
# Consistency between two outputs of a neural network I'm trying to fit a dense neural network based on tabular data input, where the outputs are two separate classification vectors, with one cross-entropy loss function for each. Example: given a few input features, for a customer that visits a travel website with the intention of buying a train ticket, the model would predict both the destination of travel and the traveling class (1st class or 2nd class) that the customer is likely to buy. Problem: it seems as if internally, the network was divided in two at some point in the hidden layers, and each sub-network got specialised in predicting one output vector, ignoring the other. This leads to an overall acceptable accuracy for each output, but the consistency between the two outputs leaves to be desired. For example, for a given entry, the network would predict "London" and "1st Class", because independently, each output makes sense according to the input features, but there isn't a single training point where London and 1st class can be found together, simply because there isn't a 1st class option when travelling to London. The network seems to be completely devoid of any concern for the consistency between the two. Example, if the passenger is an accountant, 35yo and departs from Brussels, the training set gives a clear winner for destination: London and, separately, also for class: 1st, and so this is what the network will tend to predict, despite this combination being totally absent. Would there be any way to amend the network and/or the organisation of the loss functions so that the consistency between the two outputs would be taken into account, and the network would avoid combination of outputs that can't be found in the training set, and favor those that are? More generally, what would be some good approaches to tackle this issue? Note that I would like to avoid resorting to manual rules down the line, if that is possible. • I would put a freeze on the current net work, and cap it with the head, and train the head only to the correspondence. There’s an argument to be made that if you know the correspondence that you only need one output and you can make a simple equation that turns one output into it adjoint. Apr 10 at 13:09 • Thanks. What do you mean by « train the head only to the correspondance » (not clear what the « to the correspondance » bit)? Apr 10 at 14:54 • @SextusEmpiricus presently, there are two loss functions: one for the destination, one for the class, and they are independent of each other. I’m trying to think of a way to incorporate a loss function for the combination of the two, but not sure how to make that differentiable. Apr 11 at 13:49 • @SextusEmpiricus for your first question, the answer is that with the present setup, it can’t, and this is the main issue. Apr 11 at 13:50 Another method would be to build two neural networks. The first NN is trained to predict the destination. For the second NN, include the destination predicted by the first NN as an input feature and train the network to predict the class. The second network should then learn to only predict classes that are options for the predicted destination. Edited in response to @Jivan's comment. There are more complex methods of multi-label classification, but I'd keep it simple if possible, and try either @Dikran's or my approach first. They are both standard ways of implementing multi-label classification (see this Medium post). Dikran's method is a Label Powerset and mine is a Classifier Chain. 
As you've pointed out, there are pros and cons to both these methods. If neither of these produce a good enough result, you could try a variation of the classifier chain, where you build one network to predict one label from the union of destinations and classes. Then train two further networks, one that predicts the destination given a predicted class and the other that predicts the class given a predicted destination. At inference time, you would use the first network to predict either a class or destination, then the appropriate second network predict the other label. • This is sound advice, however on reality both outputs can affect the other. Here the causation is clearly destination to class, because people choose their class based on what’s available with the destination. In the real case, it can go both way (people could chose their class and then decide among suitable destinations). Apr 10 at 14:53 If consistency is a problem I would make it a single classification task where "London first class", "London second class", ..., "Rome first class" and "Rome second class" were distinct classes, rather than make it two distinct classification tasks. You current network architecture is giving the a-priori hint that they are completely distinct classification tasks, but if e.g. some destinations don't have both classes, then there is a dependence between the two sub-classes. Combining the two classification tasks into one would be the easiest way of putting the dependence back into the model. At the moment, I think your model is predicting that the customer would opt for a first class ticket if it were available, which is not an unreasonable answer - it is just generalising the idea that people in relatively well paid occupations (e.g. accountant) tend to travel first-class. You could always just ignore the class output where it is not an option. Does the network really need so many layers? It could be that a single hidden layer may be sufficient for this problem and the layer above that is not actually doing much useful processing, in which case the division of the network may not be that meaningful. • Thanks! This is a sound idea, however I observed that it causes another problem, namely that the network is less able to generalise on e.g. destination when a single destination is dominant but scattered among many classes (there are more than two in the real case). There might be 80% London but scattered across 10 classes, and there is another single city with 20% but with a single class. Ideally then, I’d wish the network to say London with 80% probability. Lastly, this is a toy example. In reality, dozens of features, thousands of possibilities for each output. Apr 10 at 14:49 • @Jivan in that case, why not just sum over the classes that include London as the destination? If there are thousands of classes, is their any heirarchy that you could exploit (e.g. country rather than city of destination) and then have sub-networks for the city? Apr 10 at 16:05 • What do you mean by "sum over the classes that include London"? Apr 10 at 18:36 • In your comment, you wrote "There might be 80% London but scattered across 10 classes," I took that to mean there were 10 classes that included London as the destination. Summing the probabilities of those ten classes would give the probability of London being the destination, marginalising over the other aspects of the classification scheme (i.e. 
Apr 10 at 18:45

### Cost function

In what way would your neural network be able to know that 1st class with destination London is not feasible? How do you teach that to the network? In what way did you 'punish' the network during training for wrong predictions? It is important that the training phase allows the network to learn the desired features.

In your question, you did not say which cost function you used to train the model. It is also not clear what type of output is created by your model and what you would desire from it. Do I guess correctly that the output is just a single class prediction? In that case, which class prediction would you favor in the example from the question? Is 'London 2nd class' a better prediction than 'London 1st class'?

When this cost function only cares about a single error, then it is going to care less about combined errors. That might lead to your problem (I am assuming that this is how your cost function is created, but it is not clear). Predicting London + 1st class will be wrong in the 89302 cases when the true value is London + 2nd class. But the choice to predict 1st class instead of 2nd class might be rewarded in the 48516 + 41411 + 38186 + 35247 + 28512 cases when the true value is Paris/Rome/Berlin/Madrid/Rotterdam + 1st class (I am not sure, but I guess that your cost function is doing this). You can punish the system for making predictions about 1st class when the destination is London, but at the same time you reward 1st class predictions when they occur in other cities. So you are getting London + 1st class as a result.

### Type of output

I mentioned earlier that I am guessing that your model is just giving a single class prediction. I am guessing this based on your situation as well as on the phrase

"For example, for a given entry, the network would predict "London" and "1st Class""

If that is the case then you might consider using a different type of output. Instead of predicting a single class you could have as output a vector of probabilities for all desired combinations of destinations and classes (as well as other aspects that you might have in your model). Then you could value the predictions and perform the training based on a likelihood function of a categorical distribution. When you apply this model (some online shopping tool or some help for an airline company?) then it will not give a single class as output, but instead it could give a ranking of the top destinations.

### Network structure

What kind of dense neural network do you have and how did you train it? It might be imaginable that there should be a node in some of those layers that gets trained to deal with the London + 2nd class case specifically. But how many layers do you have, how many nodes per layer do you have, and how did you do cross-validation? It is imaginable that this error/false prediction might occur, but it is difficult to say why and how exactly it occurs without details.

• Thanks. How would you go about a cost function that takes the combination into account? Would you build a lookup table of the frequency of each combination in the training set, and multiply the final loss/cost by a factor of the inverse of this? Or would there be other approaches? Apr 12 at 9:23
• @Jivan, I was thinking of a cost function that punishes the result if the entire combination is wrong and does not grant half the score if you get half of the combination correct. How to do this exactly I am not sure; it depends on the problem that you have.
Anyway, in this simple example in the question (which may not be your complete problem?) it is not surprising that you get the prediction London + 1st class if your cost function rewards this prediction in the 89302 cases where you guessed London correctly and in the 48516 + 41411 + 38186 + 35247 + 28512 cases where you guessed 1st class correctly. Apr 12 at 9:59
• @Jivan it is not clear how you value the output of the neural network. Do I guess correctly that the output is just a single class prediction? Which class prediction would you favor in the example from the question? Is London 2nd class a better prediction than London 1st class? London 2nd class will be a perfect prediction in 89302 cases but an extremely bad prediction in 48516 + 41411 + 38186 + 35247 + 28512 other cases. London 1st class will not have these perfect predictions but might do better on average. Apr 12 at 10:09
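Tying the label-powerset idea to the feasibility concern discussed above: one option is a single softmax over all (destination, class) pairs in which the pairs that never occur in the training data are masked out, and the per-destination probability is then recovered by summing ("marginalising") over the classes, as suggested in the comments. The sketch below is only illustrative; the array names, sizes and the masking constant are assumptions, not anything given in the question.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical sizes for the sketch
n_features, n_destinations, n_classes = 16, 50, 3
n_pairs = n_destinations * n_classes

# valid[d, c] = 1.0 if destination d and class c occur together in the training set
valid = np.zeros((n_destinations, n_classes), dtype="float32")
# e.g. for d, c in zip(y_dest_train, y_class_train): valid[d, c] = 1.0

mask = tf.constant(valid.reshape(1, n_pairs))   # 1 = feasible pair, 0 = never seen

inp = layers.Input(shape=(n_features,))
h = layers.Dense(64, activation="relu")(inp)
logits = layers.Dense(n_pairs)(h)               # one logit per (destination, class) pair

# Push the logits of unseen pairs to a very large negative value before the softmax,
# so the model cannot put probability mass on combinations such as London + 1st class.
masked_logits = layers.Lambda(lambda z: z + (1.0 - mask) * -1e9)(logits)
pair_probs = layers.Softmax()(masked_logits)

model = Model(inp, pair_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Joint training label for each row: y_pair = y_dest * n_classes + y_class

# Marginal destination probabilities ("sum over the classes that include London"):
# p = model.predict(X)                                        # shape (batch, n_pairs)
# dest_probs = p.reshape(-1, n_destinations, n_classes).sum(axis=2)
```

This also addresses the worry about 80% London being spread over ten classes: the individual pair probabilities may each be small, but their sum over the London pairs still recovers the 80% for the destination.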
2022-08-19 02:31:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6162789463996887, "perplexity": 594.324706898397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00261.warc.gz"}
http://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/23/8/1/?group=0
# Related objects Show commands for: Magma / SageMath ## Decomposition of $S_{8}^{\mathrm{new}}(23)$ into irreducible Hecke orbits magma: S := CuspForms(23,8); magma: N := Newforms(S); sage: N = Newforms(23,8,names="a") Label Dimension Field $q$-expansion of eigenform 23.8.1.a 5 $\Q(\alpha_{ 1 })$ $q$ $\mathstrut+$ $\alpha_{1} q^{2}$ $\mathstrut+$ $\bigl(\frac{5}{3104} \alpha_{1} ^{4}$ $\mathstrut+ \frac{53}{1552} \alpha_{1} ^{3}$ $\mathstrut- \frac{495}{776} \alpha_{1} ^{2}$ $\mathstrut- \frac{3247}{388} \alpha_{1}$ $\mathstrut+ \frac{4622}{97}\bigr)q^{3}$ $\mathstrut+$ $\bigl(\alpha_{1} ^{2}$ $\mathstrut- 128\bigr)q^{4}$ $\mathstrut+$ $\bigl(- \frac{43}{1552} \alpha_{1} ^{4}$ $\mathstrut- \frac{417}{776} \alpha_{1} ^{3}$ $\mathstrut+ \frac{2511}{388} \alpha_{1} ^{2}$ $\mathstrut+ \frac{16129}{194} \alpha_{1}$ $\mathstrut- \frac{34180}{97}\bigr)q^{5}$ $\mathstrut+$ $\bigl(\frac{13}{1552} \alpha_{1} ^{4}$ $\mathstrut- \frac{95}{776} \alpha_{1} ^{3}$ $\mathstrut- \frac{1287}{388} \alpha_{1} ^{2}$ $\mathstrut+ \frac{1219}{194} \alpha_{1}$ $\mathstrut- \frac{1690}{97}\bigr)q^{6}$ $\mathstrut+$ $\bigl(\frac{15}{388} \alpha_{1} ^{4}$ $\mathstrut+ \frac{415}{388} \alpha_{1} ^{3}$ $\mathstrut- \frac{933}{194} \alpha_{1} ^{2}$ $\mathstrut- \frac{20452}{97} \alpha_{1}$ $\mathstrut- \frac{16724}{97}\bigr)q^{7}$ $\mathstrut+$ $\bigl(\alpha_{1} ^{3}$ $\mathstrut- 256 \alpha_{1} \bigr)q^{8}$ $\mathstrut+$ $\bigl(\frac{371}{3104} \alpha_{1} ^{4}$ $\mathstrut+ \frac{2303}{1552} \alpha_{1} ^{3}$ $\mathstrut- \frac{23925}{776} \alpha_{1} ^{2}$ $\mathstrut- \frac{85417}{388} \alpha_{1}$ $\mathstrut+ \frac{31175}{97}\bigr)q^{9}$ $\mathstrut+O(q^{10})$ 23.8.1.b 8 $\Q(\alpha_{ 2 })$ $q + \ldots^\ast$ ${}^\ast$: The Fourier coefficients of this newform are large. They are available for download. Coefficient field Minimal polynomial of $\alpha_j$ over $\Q$ $\Q(\alpha_{ 1 })$ $x ^{5}$ $\mathstrut +\mathstrut 16 x ^{4}$ $\mathstrut -\mathstrut 320 x ^{3}$ $\mathstrut -\mathstrut 3136 x ^{2}$ $\mathstrut +\mathstrut 25680 x$ $\mathstrut +\mathstrut 10816$ $\Q(\alpha_{ 2 })$ $x ^{8}$ $\mathstrut -\mathstrut 832 x ^{6}$ $\mathstrut -\mathstrut 1059 x ^{5}$ $\mathstrut +\mathstrut 203052 x ^{4}$ $\mathstrut +\mathstrut 678328 x ^{3}$ $\mathstrut -\mathstrut 13424272 x ^{2}$ $\mathstrut -\mathstrut 73308944 x$ $\mathstrut -\mathstrut 37372224$
2018-12-17 13:01:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9517471194267273, "perplexity": 118.66710985284404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00384.warc.gz"}
https://www.techwhiff.com/issue/the-following-are-the-ages-years-of-5-people-in-a-room--529484
The following are the ages (years) of 5 people in a room: 14, 24, 24, 20, 24 A person enters the room. The mean age of the 6 people is now 21. What is the age of the person who entered the room? Question: The following are the ages (years) of 5 people in a room: 14, 24, 24, 20, 24 A person enters the room. The mean age of the 6 people is now 21. What is the age of the person who entered the room? In the play what was the relationship between juliet and her father In the play what was the relationship between juliet and her father... I need help figuring this out asap please i need help figuring this out asap please... Pls help solve it like this Pls help solve it like this... Owls and Hawks both eat rodents they're also found in the same habitats since no two populations can occupy exactly the same Niche how can owls and Hawks coexist Owls and Hawks both eat rodents they're also found in the same habitats since no two populations can occupy exactly the same Niche how can owls and Hawks coexist... Can u help me thanks :) \(*-*)/ ​ can u help me thanks :) \(*-*)/ ​... Emily Corporation purchased all of Ace Company's common stock on January 1, 2020, for $1,000,000 cash. The investee's stockholders' equity amounted to$400,000. The excess of $600,000 was due to an unrecorded patent with a six-year life. In 2020, Ace reported net income of$250,000 and paid dividends of $25,000. What is the Equity Investment balance at December 31, 2020? Emily Corporation purchased all of Ace Company's common stock on January 1, 2020, for$1,000,000 cash. The investee's stockholders' equity amounted to $400,000. The excess of$600,000 was due to an unrecorded patent with a six-year life. In 2020, Ace reported net income of $250,000 and paid dividend... 1 answer Lee la carta al presidente Obama escrita por Cecilia Muñoz y León Rodríguez y escoge la respuesta correcta de la pregunta basada en la lectura. Read the letter written to President Obama by Cecilia Muñoz and León Rodríguez and then choose the correct answer for the question that is based on the reading. Los inmigrantes y refugiados han venido a nuestro territorio en búsqueda de oportunidades y libertad desde antes de la fundación de nuestra nación. El proceso de integrarse en una tierra nueva—lo Lee la carta al presidente Obama escrita por Cecilia Muñoz y León Rodríguez y escoge la respuesta correcta de la pregunta basada en la lectura. Read the letter written to President Obama by Cecilia Muñoz and León Rodríguez and then choose the correct answer for the question that is based on th... 1 answer Write a program that reads a stream of integers from the console and stores them in an array. The array is then analyzed to compute the average of all the values in the array and finally all of the values that are above the average should be printed out to the screen. Specifically, you must write three methods: main(), readIntoArray(), and printAboveAverage(). Write a program that reads a stream of integers from the console and stores them in an array. The array is then analyzed to compute the average of all the values in the array and finally all of the values that are above the average should be printed out to the screen. Specifically, you must write th... 1 answer Tengo miedo. Creo que la motocicleta no ____. está segura es segura es seguro está seguro 2. Yo ya ____ para salir. ¿Y Miriam? estoy lista está lista soy lista es lista 3. No me gusta la película. ____ y aburrida. Está muy larga Es muy largo Es muy larga Está muy largo 4. 
¿Dónde ____ nuestros pasaportes? están es está son 5. La habitación ____. es sucio está sucio está sucia es sucia 6. La reunión (meeting) con la agente de viajes ____ a las cuatro de la tarde. son es están está 7. Mi maleta ___ Tengo miedo. Creo que la motocicleta no ____. está segura es segura es seguro está seguro 2. Yo ya ____ para salir. ¿Y Miriam? estoy lista está lista soy lista es lista 3. No me gusta la película. ____ y aburrida. Está muy larga Es muy largo Es muy larga Está muy largo 4. ¿Dónde ____ nuestr... 2 answers In △BCD, BP=15 cm. What the length of BX¯¯¯¯¯¯ ? Enter your answer in the box. cm In △BCD, BP=15 cm. What the length of BX¯¯¯¯¯¯ ? Enter your answer in the box. cm... 1 answer Draw a fraction that is equivalent to 1/6​ draw a fraction that is equivalent to 1/6​... 2 answers How do countries protect their domestic economy from excessive influence by multinational corporations How do countries protect their domestic economy from excessive influence by multinational corporations... 2 answers Please help me answer this one question Please help me answer this one question... 1 answer Is 5 1/4 / 3 1/2 the same as 3 1/2 / 5 1/4? Explain. Is 5 1/4 / 3 1/2 the same as 3 1/2 / 5 1/4? Explain.... 1 answer Hamza went to the convenience store and bought snacks and drinks for his friends. He bought a total of 12 items. Each snack, c, cost$2.50 and each drink, d, cost $2. He spent a total of$28. Write two equations to represent the total number of items and the total cost of the items. Hamza went to the convenience store and bought snacks and drinks for his friends. He bought a total of 12 items. Each snack, c, cost $2.50 and each drink, d, cost$2. He spent a total of \$28. Write two equations to represent the total number of items and the total cost of the items.... 1En esta leyenda, Caonabí demostró su(1 Punto)O valentiaO timidezamortristeza​ 1En esta leyenda, Caonabí demostró su(1 Punto)O valentiaO timidezamortristeza​... A rectangular floor is 15 feet long and 12 feet wide. What is the area of the floor in square yards?Be sure to include the correct unit in your answer. A rectangular floor is 15 feet long and 12 feet wide. What is the area of the floor in square yards?Be sure to include the correct unit in your answer.... Evaluate. Pay close attention to all brackets and signs. |4| - |-5| a. 9 c. 1 b. -9 Evaluate. Pay close attention to all brackets and signs. |4| - |-5| a. 9 c. 1 b. -9... Do stem cells and specialized cells have the same DNA? Do stem cells and specialized cells have the same DNA?... Compare and contrast the three types of RNA's discussed during protein synthesis Compare and contrast the three types of RNA's discussed during protein synthesis... -- 0.013095--
2022-12-03 05:08:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2619345784187317, "perplexity": 2574.494360475739}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00839.warc.gz"}
https://gmatclub.com/forum/what-is-the-sum-of-integers-from-190-to-195-inclusive-233621.html
GMAT Changed on April 16th - Read about the latest changes here It is currently 27 May 2018, 21:52 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History Events & Promotions Events & Promotions in June Open Detailed Calendar What is the sum of integers from -190 to 195, inclusive? Author Message TAGS: Hide Tags Manager Joined: 06 Feb 2010 Posts: 162 Schools: University of Dhaka - Class of 2010 GPA: 3.63 What is the sum of the integers from -190 to 195 inclusive? [#permalink] Show Tags 20 Oct 2010, 05:36 3 This post was BOOKMARKED 00:00 Difficulty: 5% (low) Question Stats: 84% (00:40) correct 16% (00:33) wrong based on 142 sessions HideShow timer Statistics What is the sum of the integers from -190 to 195 inclusive? A) 0 B) 5 C) 375 D) 875 E) 965 _________________ Practice Makes a Man Perfect. Practice. Practice. Practice......Perfectly Critical Reasoning: http://gmatclub.com/forum/best-critical-reasoning-shortcuts-notes-tips-91280.html Collections of MGMAT CAT: http://gmatclub.com/forum/collections-of-mgmat-cat-math-152750.html MGMAT SC SUMMARY: http://gmatclub.com/forum/mgmat-sc-summary-of-fourth-edition-152753.html Sentence Correction: http://gmatclub.com/forum/sentence-correction-strategies-and-notes-91218.html Arithmatic & Algebra: http://gmatclub.com/forum/arithmatic-algebra-93678.html I hope these will help to understand the basic concepts & strategies. Please Click ON KUDOS Button. Manager Joined: 15 Apr 2010 Posts: 159 Re: What is the sum of the integers from -190 to 195 inclusive? [#permalink] Show Tags 20 Oct 2010, 06:15 From -190 to +195 we know that all numbers from 1 to 190 will have both positive and negative terms. So they will just cancel each other out. The numbers that remain will be 191, 192,193, 194 and 195 whose sum can be easily calculated. The answer is E. _________________ Give [highlight]KUDOS [/highlight] if you like my post. Always do things which make you feel ALIVE!!! Intern Joined: 15 Aug 2010 Posts: 28 Location: Nigeria Re: What is the sum of the integers from -190 to 195 inclusive? [#permalink] Show Tags 20 Oct 2010, 09:49 ^^ Great explanation. My own approach was to use the formula (avg of 1st and last number) * total number . But I got stuck. _________________ Kudos if you like my post, thanks Intern Joined: 10 Oct 2010 Posts: 26 Re: What is the sum of the integers from -190 to 195 inclusive? [#permalink] Show Tags 20 Oct 2010, 13:56 195 - 191/2 * 5 = 965 E Manager Joined: 03 Jun 2010 Posts: 154 Location: United States (MI) Concentration: Marketing, General Management Re: What is the sum of the integers from -190 to 195 inclusive? [#permalink] Show Tags 21 Oct 2010, 02:44 -190+190=0 -189+189=0 ... we still have 191+192+193+194+195=965 (E) Intern Joined: 04 Aug 2011 Posts: 46 What is the sum of the integers from -190 to 195, inclusive? [#permalink] Show Tags 20 Aug 2011, 06:20 What is the sum of the integers from -190 to 195, inclusive? a) 0 b) 5 c) 375 d) 875 e) 965 Manager Joined: 20 Aug 2011 Posts: 135 Re: What is the sum of the integers from -190 to 195, inclusive? 
[#permalink] Show Tags 20 Aug 2011, 08:51 This can be viewed as a sum of consecutive terms problem the sequence consists of integers between -191 and 196 the sequence starts at -190 and ends at +195 in an arithmetic progression, the nth term is given by tn=a+(n-1)d here tn=195, a=-190, d=1 hence, 195=-190+(n-1) or n=386 Sum of n terms can be calculated by sn=n/2(a+l) a=first term, l=last term, n=no. of terms sn=386*(-190+195)/2 sn=193*5 sn=965 _________________ Hit kudos if my post helps you. You may send me a PM if you have any doubts about my solution or GMAT problems in general. Director Joined: 01 Feb 2011 Posts: 686 Re: What is the sum of the integers from -190 to 195, inclusive? [#permalink] Show Tags 20 Aug 2011, 11:37 the series is in ap. AP - sum of the series = (n/2)(2a+ (n-1)d) n = 190+1+195 (190 negative terms, 1 zero and 195 positive terms) = 386 a = -190 d = 1 => sum of this series = (386/2)(-380+385) = 965 Intern Joined: 19 Aug 2011 Posts: 4 Re: What is the sum of the integers from -190 to 195, inclusive? [#permalink] Show Tags 20 Aug 2011, 15:56 1 KUDOS Sum of -190 to +190 will be zero. You just have to sum up 191 t0 195 which is 965. Intern Joined: 31 Oct 2011 Posts: 42 Location: India Re: sum of the integers [#permalink] Show Tags 15 Dec 2011, 22:25 -190+(-189)+(-188+).....+(-4)+(-3)+(-2)+(-1) + 0+1+2+3+4+........+188+189+190+191+192+193+194+195 Everything will cancel out except the last 5 numbers(191+192+193+194+195) So, the ans will be 965. _________________ Regards, Rajesh Helping hands are anytime better than praying hearts Kudos ???!@# !!! I just love them Manager Joined: 21 Sep 2011 Posts: 106 Concentration: Entrepreneurship, General Management GMAT 1: 530 Q42 V20 GMAT 2: 540 Q43 V28 GMAT 3: 680 Q48 V35 WE: Business Development (Hospitality and Tourism) Re: sum of the integers [#permalink] Show Tags 15 Dec 2011, 23:33 IMO E. -190 to +190 everything will cancel out. Sum of 191 to 195 = 965 _________________ KUDOS - if my post has helped you. Senior Manager Joined: 18 Sep 2009 Posts: 333 Re: sum of the integers [#permalink] Show Tags 16 Dec 2011, 11:39 as the list is consecutive integers, we can use following formulas: sum/n= average. sum=(average)(n) average=a+b/2=190+195/2=2.5 number of items(n)=B-A+1=195-(-190)+1=195+191=386. sum=average*n=2.5*386=965. Manager Joined: 20 Aug 2011 Posts: 135 Re: sum of the integers [#permalink] Show Tags 16 Dec 2011, 21:19 -190 to +190 will cancel out each other Sum= 190*5 +(1+2+3+4+5)= 950+15= 965 E _________________ Hit kudos if my post helps you. You may send me a PM if you have any doubts about my solution or GMAT problems in general. Intern Joined: 07 Feb 2017 Posts: 1 What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 07 Feb 2017, 22:58 1 KUDOS 1 This post was BOOKMARKED What is the sum of integers from -190 to 195, inclusive? A 0 B 5 C 375 D 875 E 965 Director Joined: 05 Mar 2015 Posts: 960 Re: What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 07 Feb 2017, 23:56 yuerchang wrote: What is the sum of integers from -190 to 195, inclusive? A 0 B 5 C 375 D 875 E 965 integers from 1 to 190 gets cancelled from similar negative values left only 191+192+193+194+195 = 965 Ans E GMAT Forum Moderator Joined: 28 May 2014 Posts: 523 GMAT 1: 730 Q49 V41 Re: What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 08 Feb 2017, 00:54 The series is: -190, -189, -188,........ -2, -1, 0, 1, 2, ........ 
188, 189, 190, 191, 192, 193, 194, 195; Hence, the -ve and +ve values from -190 to +190 cancel each other leaving only 191, 192, 193, 194, 195; So, the sum of the series: 191 + 192 + 193 + 194 + 195 = 190*5 + (1+2+3+4+5) = 950 + 15 = 965; Answer E. _________________ Senior SC Moderator Joined: 14 Nov 2016 Posts: 1286 Location: Malaysia What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 08 Feb 2017, 04:00 1 KUDOS yuerchang wrote: What is the sum of integers from $$-190$$ to $$195$$, inclusive? A 0 B 5 C 375 D 875 E 965 SUM = AVERAGE x NUMBER OF DATA POINTS $$Average = \frac{(-190 + 195)}{2} = \frac{5}{2} = 2.5$$ Number of data point $$= 195 - (-190) + 1 = 386$$ $$Sum = 2.5 * 386 = 965$$ _________________ "Be challenged at EVERY MOMENT." “Strength doesn’t come from what you can do. It comes from overcoming the things you once thought you couldn’t.” "Each stage of the journey is crucial to attaining new heights of knowledge." Math Expert Joined: 02 Aug 2009 Posts: 5784 Re: What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 08 Feb 2017, 05:02 yuerchang wrote: What is the sum of integers from -190 to 195, inclusive? A 0 B 5 C 375 D 875 E 965 Hi, We know -190 to 190 will cancel out.. now what is important is how fast you can calculate 191+192+193+194+195 There are five terms which are close to 200, so ans will be close to 5*200=1000... If you want exact answer, it may be beneficial to calculate how far answer will be from 1000.. So they are 5+6+7+8+9 away, this is 35... And 1000-35=965... E _________________ Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372 Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html GMAT online Tutor Board of Directors Status: QA & VA Forum Moderator Joined: 11 Jun 2011 Posts: 3466 Location: India GPA: 3.5 Re: What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 08 Feb 2017, 07:39 yuerchang wrote: What is the sum of integers from -190 to 195, inclusive? A 0 B 5 C 375 D 875 E 965 Integers from -190 to 190 is - ( -190 , -189 , -188...........-2 , -1 ,0 ) + ( 0 , 1 , 2 ............188 , 189 , 190) = 0 Sum of integers from -190 to 195 = Sum of Integers from -190 to 190 ( ie, 0 ) + 191 + 192 + 193 + 194 + 195 => 965 Hence, answer must be (E) 965 _________________ Thanks and Regards Abhishek.... PLEASE FOLLOW THE RULES FOR POSTING IN QA AND VA FORUM AND USE SEARCH FUNCTION BEFORE POSTING NEW QUESTIONS How to use Search Function in GMAT Club | Rules for Posting in QA forum | Writing Mathematical Formulas |Rules for Posting in VA forum | Request Expert's Reply ( VA Forum Only ) Target Test Prep Representative Status: Founder & CEO Affiliations: Target Test Prep Joined: 14 Oct 2015 Posts: 2638 Location: United States (CA) Re: What is the sum of integers from -190 to 195, inclusive? [#permalink] Show Tags 13 Feb 2017, 08:37 1 KUDOS Expert's post yuerchang wrote: What is the sum of integers from -190 to 195, inclusive? A 0 B 5 C 375 D 875 E 965 In determining the sum of the integers from -190 to 195 inclusive, we should recognize that all of the negative integers will cancel out with their positive counterparts. For instance, we have -190 and 190, -150 and 150, -10 and 10, etc. Thus, the only positive numbers that won’t cancel out with their negative counterparts are 191, 192, 193, 194, and 195. To determine the sum of these numbers, we can use this formula: sum = average x quantity. 
Since we have an evenly spaced set, the average is equal to the median, which is 193, and the quantity is 5. Thus, the sum = 193 x 5 = 965. _________________ Scott Woodbury-Stewart Founder and CEO GMAT Quant Self-Study Course 500+ lessons 3000+ practice problems 800+ HD solutions Re: What is the sum of integers from -190 to 195, inclusive?   [#permalink] 13 Feb 2017, 08:37 Go to page    1   2    Next  [ 21 posts ] Display posts from previous: Sort by
2018-05-28 04:52:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5882932543754578, "perplexity": 3970.19412052211}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794871918.99/warc/CC-MAIN-20180528044215-20180528064215-00198.warc.gz"}
http://ottoaden.nl/what-is-yxwnu/how-do-astronauts-move-around-in-the-space-station-7d7b29
When it’s close enough, gravity will start to pull it in. Astronauts must cope with a stressful and dangerous environment in space, away from family and friends, by working together, said two astronauts at the opening of a new exhibit on space … Does this depend on what axis you want to rotate about? ›  View Larger Image, Astronaut Michael E. López-Alegría is about to be submerged in the waters of the NBL near Johnson Space Center. Powerful tail swipe with as little muscle as possible. For some examples, I recommend watching some video tour of the ISS, like for example this Sunita Williams one, or an ISS tour by André Kuipers. Is it kidnapping if I steal a car that happens to have a baby in it? Astronauts on the International Space Station have had their own run-ins with micrometeorites, too. How do astronauts perform tasks outside the ISS when it's moving at 17,500 mph? Thanks for contributing an answer to Space Exploration Stack Exchange! While outside the vehicle they are ALWAYS attached to something. J.Solids and Structures, 5, pp663-670, 1969. When we stand up on Earth, blood goes to our legs. In the pre-dawn hours of Sept. 28, space station astronaut Scott Kelly, along with cosmonauts Mikhail Kornienko and Gennady Padalka, will be required to do … If you know your browser is up to date, you should check to ensure that You are missing my point. They challenged NASA’s Mission Control team at the Johnson Space Center in Houston to do the same, using only decorations in … Astronauts are attached to the robotic arm using a foot restraint. Footnote: The primary source for Marey's 1894 studies is the following: Étienne-Jules Marey, “Des mouvements que certains animaux exécutent pour retomber sur leurs pieds, lorsqu’ils sont précipités d’un lieu élevé“, La Nature, 1119, 10 Novembre 1894, Near the end of this article he makes the following definitive statement (translation mine, so apologies to French speakers): "First of all, the inspection of these figures [photos of falling cats] rules out the notion that the animal imparts a rotational motion on itself by thrusting against the hands of the experimenter. Instead, they have to move slowly and deliberately as they grow accustomed to … site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. Which to a first approximation they might be. Astronaut Michael E. López-Alegría is about to be submerged in the waters of the NBL near Johnson Space Center. This approach, and other similar ones - including the proverbial cat turning in midair -, have been worked to bits in Physics and most other outlets. To get the best experience possible, please download a compatible browser. Using a fidget spinner to rotate in outer space. (There is the SAFER pod they wear, that are like low performance, baby MMU's for emergency fly back if they did get disconnected). Letting go, is a horribly bad idea while on EVA. (Source: Ghost In The Machine on Observation Deck). How many times would two astronauts have to run around Skylab to turn it by 10 arc minutes? This explains each interior area, crew living quarters, and scientific equipment. On the ISS itself, astronauts use footholds to fix themselves at a work location so their own body movement doesn't continuously move them around, and they push against all kinds of surfaces with their feet and hands (and sometimes, for fun, even tips of their hair, like I believe Sunita Williams did first) to make their way through the station. 
What Do Astronauts Do All Day on the International Space Station? By exiting through the airlock. Making statements based on opinion; back them up with references or personal experience. If a jet engine is bolted to the equator, does the Earth speed up? Image Credit: NASA I've seen a video of astronauts doing the cat trick, turning around various axes without touching anything. Although this has indeed "worked to bits" on the Physics and other SE sites it's worth looking at, for the sake of Space Exploration, the interesting history behind the analysis of the falling cat. I have a question regarding your animations. This is a useful ability for tree dwelling predators as they leap from tree to tree and also precision dive-bomb their prey. For every month in space, astronauts lose around 2% of their bone mass. How were four wires replaced with two wires in early telephone? Should they be outside, this is whistling in the vaccum. Image Credit: NASA This is a GREAT answer! Should the trajectory-design tag…. Do they regularly perform free-body manoeuvres while within their spacecraft, or do they simply grab onto the craft? Asked by Tom Davies. Santa even visited the astronauts aboard the International Space Station. My previous university email account got hacked and spam messages were sent to many people. Now to clear up some popular misconceptions about the cat righting reflex, particularly applied to astronauts. Point (2) is irrelevant when one is making a planned rotation in a freefall (gravity free) state in space, as opposed to flipping oneself over in limited time as one falls. What that means is that the space station, all its equipment and astronauts are constantly falling towards Earth. xmlns:xsl='http://www.w3.org/1999/XSL/Transform'">. The heart has to work extra hard against gravity to move the blood all around the body. “Oh, what a good voice to hear,” space station astronaut Kate Rubins called out when the Dragon’s commander, Mike Hopkins, first made radio contact. The astronauts have decked the halls of the International Space Station with Christmas decorations made with items they found around the spacecraft. The International Space Station (ISS) has been orbiting the Earth for decades now. How does one defend against supply chain attacks? So it would seem that she needed very little "retraining" to adjust for her new lack of tail. However, the usual approaches sound too cumbersome to be used in space, but there may be cleverer ways to move one's body to achieve the same effect. Astronauts are now also tethered to the space station and use on the station's outer hull mounted safety grips during EVA, so not only would such movement be cumbersome due to their EVA suit, but could result in the astronaut entangling in the tether. (Poltergeist in the Breadboard). 
For a more direct demonstration, here's a Smarter Every Day video #85 on How Astronauts Turn In Space from March 2013 with ISS crew demonstrating change of orientation while not touching anything and of course preserving angular momentum: During Extravehicular Activity (EVA) though, I doubt that they have much need for such stunts, or that they would be an easy feat to do after donning their EVA gear, with mobility units (latest one is Simplified Aid for EVA Rescue or SAFER) somewhat impairing their ability to change orientation like that, prohibiting free flexing of the body, while at the same time making them unnecessary, since the change in orientation can be provided by the mobility unit itself, if there isn't any surface to push against. 11/24/2016 01:24 pm ET Updated Nov 25, 2017 Evening view of the Goldstone Deep Space Station antenna which is part of the Deep Space Network (DSN), one of three such complexes in the world, the others being in Madrid, Spain and Canberra, Australia. I mean, as is visible in the slow-mo pics, in the first half of the motion, the foreleg should be close to the body and the hindleg should be stretched out, and in the second half vice versa (to modulate the relative moments of inertia)? For the fully rigorous description of the cat's righting reflex - perfectly in keeping with conservation of angular momentum - only came about because it was prompted precisely by research that was done in the late 1950s and early 1960s into how the human body would deal with the environment it met in outer space. This question was originally answered on Quora by Clayton C. Anderson. SpaceX capsule with 4 astronauts reaches space station ... Glover is the first African-American to move in for a long haul. It's very much like a hula hoop motion. They stopped using them after a few uses). To conserve angular momentum, your body also rotates slightly, but due to the difference in moment of inertia of the book when close/far from your body, the angular displacement of your body is different for the two stages and the final state is a displaced attitude. The heart and blood change in space, too. ›  View Larger Image, NASA - National Aeronautics and Space Administration, Follow this link to skip to the main content. If you ask the people around you, there are two common answers: Astronauts float around in space because there is no gravity in space. OK so my CGI skills are crap - this is the best cat animation I can make with basic solids in Mathematica, but this movement will roll you over in space, whether you be cat or human, with or without a tail. Team member resigned trying to get counter offer. How do astronauts maintain their neck muscles? Indeed Thomas Kane trained people to do this in 1968 in Apollo spacesuits, as shown below. A space newcomer, Glover was presented his gold astronaut … You hold out the book in front of you and rotate it about a vertical axis, bringing it closer to, and away from, your body when it is going to your left and right, respectively. To stay in orbit the ISS has to move at about 27,500 kilometres (17,000 miles) per hour so technically spacewalking astronauts are already moving at an incredible speed. I've also frequently seen International Space Station (ISS) astronauts use such movement to change their orientation on the station, for example by watching Space Station Live or video recordings of it on YouTube, albeit while they would mostly first push against some surface to gain velocity towards their next destination. 
What do you call a 'usury' ('bad deal') agreement that doesn't involve a loan? ›  View Larger Image, Astronaut Franklin R. Chang-Díaz works with a grapple fixture during a spacewalk to perform work on the International Space Station during STS-111. The Earth’s rotation carries launch sites under a straight flight path of the ISS, with each instance providing a “launch window”. How can I request an ISP to disclose their customer's identity? [This conclusion follows] because the first frames of the two series [of photos of a falling cat] show that in the first instants of it its fall, the cat as yet has no tendency to turn from one side nor the other. (Source: Wikipedia "Cat Righting Reflex" Page). I wouldn't know if astronauts actually use such movement (could be done differently too, this is just one example), likely not during EVA since they have mobility units and are attached by a cable, but they have some funny ways inside the station, S. Williams and K. Nyberg used their hair tips to push against even. The main researcher here was Professor Thomas Kane, who, T. R. Kane and M. P. Scher, “A Dynamical Explanation of the Falling Cat Phenomenon“, Int. Confined in their solitude, away from sunlight, astronauts paradoxically see the immense space that surrounds the Earth, while they themselves are kept in a small space considerably. Today, astronauts at the International Space Station poop into a little plate-sized toilet hole, and a fan vacuum-sucks their excrement away. But damn if I can find it now. This answer has 4 spinning animated cats, and yet only 7 upvotes? Apologies, but conservation of angular momentum always holds unless you grab onto something else, regardless of how much you twist. Can I caulk the corner between stone countertop and stone backsplash? The manoeuver is to turn. @EmilioPisanty is correct. The left falling cat sequence was taken from the work of physiologist Étienne-Jules Marey (1830-1904) (famous for the development of motion photography for the study of high speed movements); the one on the right was taken during Thomas Kane's 1968 experiments with a trampolinist in an Apollo like spacesuit. Does space environment affect human embryonic development? So, how do astronauts help their muscles and bones? Learn more about how astronauts move from place to place and these 20 mind-blowing facts about life on the International Space Station. Astronauts quickly learn that flailing on the space station is a bad idea -- and a good way to get hurt. Santa Claus is making his way around the world as he works to deliver Christmas gifts to children across the globe. Did they miss the movements of the legs? By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. This of course is inside the vehicle. How to limit the disruption caused by students not writing required information on their exam until time is up, What language(s) implements function return value by assigning to the function name. How would a theoretically perfect language work? I doubt that they use those techniques other than for fun, since the quarters are cramped enough that there's pretty much always something in reach to grab. (MMU's on one or two shuttle flights being the exceptions that make the rule. 
Some wild cats, notably the Asian Clouded Leopard and the Asian Marbled Cat have huge tails, much more like a club than the elegant, slender (and very small mass moment of inertia) tail of the housecat (Felis Sylvestris) and this is indeed very much used to control the animal's orientation in space, but the tail lets the animal reorient itself freely about all three axes i.e. (Source: Wikipedia on SAFER: Simplified Aid for EVA Rescue). (Source: Wikimedia Commons). This also comes out of a theoretical analysis, as I show in my article cited below. Could a harpoon-like gun be used by an astronaut to stop drifting away from a ship? And the same video from 5:45 to 6:00 shows astronauts wiggling from one direction to another to attention (fun video! Another common misconception is that the cat needs its tail to flip over: this is wrong as shown by Thomas Kane's experiments that show tailless humans can make the righting motion. And if they never capture anything stationary, then all the twisting in the world is just whistling in the wind. @EmilioPisanty One of the easiest ways to do that is by stretching one arm while holding the other on your chest, and then fast moving the first one to your chest and stretching the one that was previously on your chest. SpaceX capsule, 4 astronauts dock at space station Three Americans and one Japanese astronaut will remain at the orbiting lab until their replacements arrive on another Dragon in April. Some exercise tools will have you swing like that too. Image Credit: NASA Étienne-Jules Marey was a physiologist who did some of the little serious research into the cat's righting reflex before the outer space prompted research of Thomas Kane. Why do jet engine igniters require huge voltages? This video published on YouTube on Zero-G: "Movement in Microgravity: Skylab to Space Shuttle" 1988 NASA Weightlessness Footage, starting at 2:10 into it, shows a Skylab astronaut doing a front roll and a spiral roll in the Skylab Orbital Workshop without touching anything to push against to change his orientation. In practice, how do astronauts change their orientation in space? Its rotation only begins with the twisting of its waist.". Marey, unlike many of his contemporaries, clearly understood that the cat's motion was torque free (see footnote) and indeed used his photography to rule out a commonly held theory that the cat pushes off whatever it falls from. Are there any humanoid robots on board the ISS? Conservation of angular momentum would apply if the astronaut was a still rod. The space station has an orbital velocity of 7.7 km per second. Astronauts use handrails on the space station to help them move from place to place. They are also clipped via a cable. Use MathJax to format equations. What are my options for a url based cache tag? @EmilioPisanty Thus my point. How to format latitude and Longitude labels to show only degrees with suffix without any decimal or minutes? Moreover, just after the time she had the accident, I saw her make the righting reflex falling asleep in this way when she had healed barely well enough to walk properly. ›  View Larger Image, Astronaut Joseph M. Acaba, STS-119 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at NASA's Johnson Space Center to rehearse some of his duties on the mission to the International Space Station. They use six fans along each of the surfaces to move in three dimensions while floating in the low-gravity (but oxygen rich) ISS environment. 
Similarly, in the case of an individual astronaut in space or an International Space Station, they are falling AROUND Earth. Astronaut Rick Mastracchio working with a SAFER system attached. If the latter, what are common ways to achieve such rotations? Image Credit: NASA Astronaut Carlos I. Noriega, mission specialist, waves during the second of three spacewalks on STS-97.
2021-06-16 17:49:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21800418198108673, "perplexity": 2581.1043638741016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487625967.33/warc/CC-MAIN-20210616155529-20210616185529-00079.warc.gz"}
https://www.gamedev.net/forums/topic/508682-cross-platform-high-performance-timing/
# Cross-Platform High-Performance timing?

## Recommended Posts

I was wondering if there's a cross-platform wrapper around high-performance timers like QueryPerformanceCounter on Windows, and whatever is used for that on Linux. I did a search but couldn't find any.

##### Share on other sites

I know about Boost::Timer, but I don't know its resolution...

##### Share on other sites

GLFW also comes with some timing functions. I have just looked into its source code and on Windows it seems to use QueryPerformanceCounter if available, with a fall-back to timeGetTime. On Linux and Mac it uses gettimeofday.

##### Share on other sites

Thanks for your replies. Boost::Timer I knew of, but it seems low-resolution. GLFW I also knew of, but for some reason I wasn't using the high-precision timing stuff, only low-precision, so I hadn't realized it had that as well.

##### Share on other sites

Take a look at SDL_GetTicks source code.

##### Share on other sites

Quote:
Take a look at SDL_GetTicks source code.

You mean

void
SDL_StartTicks(void)
{
    /* Set first ticks value */
#ifdef USE_GETTICKCOUNT
    start = GetTickCount();
#else
#if 0 /* Apparently there are problems with QPC on Win2K */
    if (QueryPerformanceFrequency(&hires_ticks_per_second) == TRUE) {
        hires_timer_available = TRUE;
        QueryPerformanceCounter(&hires_start_ticks);
    } else
#endif
    {
        hires_timer_available = FALSE;
        timeBeginPeriod(1); /* use 1 ms timer precision */
        start = timeGetTime();
    }
#endif
}

Uint32
SDL_GetTicks(void)
{
    DWORD now, ticks;
#ifndef USE_GETTICKCOUNT
    LARGE_INTEGER hires_now;
#endif

#ifdef USE_GETTICKCOUNT
    now = GetTickCount();
#else
    if (hires_timer_available) {
        QueryPerformanceCounter(&hires_now);
        hires_now.QuadPart -= hires_start_ticks.QuadPart;
        hires_now.QuadPart *= 1000;
        hires_now.QuadPart /= hires_ticks_per_second.QuadPart;
        return (DWORD) hires_now.QuadPart;
    } else {
        now = timeGetTime();
    }
#endif

    if (now < start) {
        ticks = (TIME_WRAP_VALUE - start) + now;
    } else {
        ticks = (now - start);
    }
    return (ticks);
}

QueryPerformanceCounter is never called. Use QPC on Windows, and gettimeofday on POSIX; it isn't that hard to do it yourself. But beware of QPC!

##### Share on other sites

You have other issues to worry about on Windows systems as well, especially if your program is running on multicore systems. Basically what can happen is QPC can get its timing information from different cores between calls to the function, and the timing information between cores is not guaranteed to be in sync with each other. You essentially have to lock your timing code to one core in order to get reliable, monotonic timing information. At least, this is the case on Windows XP. I've heard this issue has been fixed in Vista, but I haven't checked myself.

I've heard the same issues can persist on Linux systems as well since, at least on x86 systems, gettimeofday() I believe relies on the TSC, which may or may not be synced across cores. Instead, you can use clock_gettime() with CLOCK_MONOTONIC as the clock ID, and you're guaranteed to get monotonic time then. clock_getres() can tell you what the resolution of the timer is, but from what I've heard it's on par with gettimeofday(). Again, I haven't done many tests myself, but I do remember looking into this pretty heavily not too long ago as I was trying to figure out how to get reliable high-performance timing on multiple platforms.

##### Share on other sites

Yes, be careful of what romer is talking about. Pick a thread to run QPC on, and then use SetThreadAffinity to keep it on one core.
Also make sure you have some code in place that will handle things gracefully in case you get a weird result that gives you a negative time delta (which can happen on some CPUs that use clock throttling).

##### Share on other sites

As hinted at above, there are all kinds of pitfalls, which are described in an article/thread. I've posted source code [1.6 MB] of a library that goes to quite some trouble (1.5 KLOC) to choose a safe timer. It makes your app source-code compatible with Unix gettimeofday and clock_gettime by emulating those on Windows. Patches and suggestions for improvement are most welcome.
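Pulling the thread's advice together, here is a minimal cross-platform sketch along those lines (the function name `now_us`, the microsecond unit, and the structure are my own choices, not something from the thread): QueryPerformanceCounter on Windows, clock_gettime(CLOCK_MONOTONIC) elsewhere. It still assumes the multicore caveats above are handled (pin the calling thread, or rely on a platform where the counters are synchronized).

```cpp
#include <cstdint>

#if defined(_WIN32)
#include <windows.h>

// Elapsed microseconds since an arbitrary, fixed origin (monotonic).
inline std::uint64_t now_us() {
    static LARGE_INTEGER freq = [] {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);   // counts per second
        return f;
    }();
    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter);
    // Split into whole seconds and remainder to avoid 64-bit overflow
    // for large counter values.
    std::uint64_t c = static_cast<std::uint64_t>(counter.QuadPart);
    std::uint64_t f = static_cast<std::uint64_t>(freq.QuadPart);
    return (c / f) * 1000000ULL + (c % f) * 1000000ULL / f;
}
#else
#include <time.h>

// Elapsed microseconds since an arbitrary, fixed origin (monotonic).
inline std::uint64_t now_us() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); // not affected by wall-clock adjustments
    return static_cast<std::uint64_t>(ts.tv_sec) * 1000000ULL
           + static_cast<std::uint64_t>(ts.tv_nsec) / 1000ULL;
}
#endif
```

Typical usage is to take two samples and subtract them (`std::uint64_t dt = now_us() - t0;`), clamping or discarding negative or absurd deltas as suggested above.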
2017-10-19 22:15:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2109939157962799, "perplexity": 4336.040471335773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823478.54/warc/CC-MAIN-20171019212946-20171019232946-00353.warc.gz"}
https://www.cnblogs.com/qscqesze/p/7763785.html
# Codeforces Round #443 (Div. 1) D. Magic Breeding (bit manipulation)

## D. Magic Breeding

http://codeforces.com/contest/878/problem/D

## description

Nikita and Sasha play a computer game where you have to breed some magical creatures. Initially, they have k creatures numbered from 1 to k. Creatures have n different characteristics.

Sasha has a spell that allows to create a new creature from two given creatures. Each of its characteristics will be equal to the maximum of the corresponding characteristics of the used creatures. Nikita has a similar spell, but in his spell, each characteristic of the new creature is equal to the minimum of the corresponding characteristics of the used creatures. A new creature gets the smallest unused number.

They use their spells and are interested in some characteristics of their new creatures. Help them find out these characteristics.

## Input

The first line contains integers n, k and q (1 ≤ n ≤ 10^5, 1 ≤ k ≤ 12, 1 ≤ q ≤ 10^5) — number of characteristics, creatures and queries.

Next k lines describe the original creatures. Line i contains n numbers a_{i1}, a_{i2}, ..., a_{in} (1 ≤ a_{ij} ≤ 10^9) — characteristics of the i-th creature.

Each of the next q lines contains a query. The i-th of these lines contains numbers t_i, x_i and y_i (1 ≤ t_i ≤ 3). They denote a query:

- t_i = 1 means that Sasha used his spell on the creatures x_i and y_i.
- t_i = 2 means that Nikita used his spell on the creatures x_i and y_i.
- t_i = 3 means that they want to know the y_i-th characteristic of the x_i-th creature. In this case 1 ≤ y_i ≤ n.

It's guaranteed that all creatures' numbers are valid, that means that they are created before any of the queries involving them.

## Output

For each query with t_i = 3 output the corresponding characteristic.

## Examples

Input

2 2 4
1 2
2 1
1 1 2
2 1 2
3 3 1
3 4 2

Output

2
1

Input

5 3 8
1 2 3 4 5
5 1 2 3 4
4 5 1 2 3
1 1 2
1 2 3
2 4 5
3 6 1
3 6 2
3 6 3
3 6 4
3 6 5

Output

5
2
2
3
4

## Note

In the first sample, Sasha makes a creature with number 3 and characteristics (2, 2). Nikita makes a creature with number 4 and characteristics (1, 1). After that they find out the first characteristic for the creature 3 and the second characteristic for the creature 4.

## Code

```cpp
#include<bits/stdc++.h>
using namespace std;

const int maxn = 1e6+7;
bitset<4096> S[maxn];   // S[i] is indexed by subsets of the k original creatures
int a[12][maxn];        // characteristics of the k original creatures
int n, k, q, tot;

int main(){
    scanf("%d%d%d", &n, &k, &q);
    tot = k;
    for(int i = 0; i < k; i++){
        // initially S[i][mask] = 1 exactly when creature i belongs to mask
        for(int j = 0; j < 4096; j++){
            if(j & (1 << i)) S[i].set(j);
        }
        for(int j = 0; j < n; j++){
            scanf("%d", &a[i][j]);
        }
    }
    for(int qq = 0; qq < q; qq++){
        int op, x, y;
        scanf("%d%d%d", &op, &x, &y);
        x--, y--;
        if(op == 1) S[tot++] = S[x] & S[y];   // Sasha's spell (max)
        if(op == 2) S[tot++] = S[x] | S[y];   // Nikita's spell (min)
        if(op == 3){
            // scan the original values at coordinate y in increasing order;
            // the first accumulated subset b with S[x][b] set gives the answer
            vector<pair<int,int> > Q;
            for(int i = 0; i < k; i++){
                Q.push_back(make_pair(a[i][y], i));
            }
            sort(Q.begin(), Q.end());
            int b = 0;
            for(int i = 0; i < k; i++){
                b |= (1 << Q[i].second);
                if(S[x][b]){
                    cout << Q[i].first << endl;
                    break;
                }
            }
        }
    }
}
```

posted @ 2017-10-31 21:21 qscqesze
2019-07-17 11:40:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31222811341285706, "perplexity": 1202.401976395772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525136.58/warc/CC-MAIN-20190717101524-20190717123524-00010.warc.gz"}
https://www.edaboard.com/threads/error-during-opampmacro-simulation-cadence.394730/
# [SOLVED] Error during opampMacro simulation Cadence

Status: Not open for further replies.

#### melkord

##### Junior Member level 3

I am trying to simulate opampMacro from the Functional library and got this error.

Code:
ERROR (SFE-23): "input.scs" 23: The instance `X1' is referencing an undefined model or subcircuit, `f_oplv1c'. Either include the file containing the definition of `f_oplv1c', or define `f_oplv1c' before running the simulation.

I read here and here about a similar error, but I am not sure where I can add the path. In CIW --> Tools --> Library Path Editor, the functional library is already present. I also know the location of allFunc.scs, which contains the definition for `f_oplv1c'.
2020-09-30 18:01:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36285048723220825, "perplexity": 7267.41373259401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127397.84/warc/CC-MAIN-20200930172714-20200930202714-00538.warc.gz"}
https://www.parabola.unsw.edu.au/1970-1979/volume-12-1976/issue-1/article/fibonacci-numbers-pascals-triangle-and-prime-numbers
# Fibonacci Numbers, Pascal's Triangle, and Prime Numbers The famous Fibonacci numbers are a sequence of numbers defined by $T_1 = 1, T_2 = 1$, and $T_n = T_{n-1}+T_{n-2}$ for $n=3,4,5,\cdots$
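As a quick illustration of the recurrence (this snippet is my own addition, not part of the original article), the first ten terms can be generated directly from the definition:

```cpp
#include <cstdio>

int main() {
    long long T[11];
    T[1] = 1; T[2] = 1;                 // T_1 = 1, T_2 = 1
    for (int n = 3; n <= 10; ++n)
        T[n] = T[n - 1] + T[n - 2];     // T_n = T_{n-1} + T_{n-2}
    for (int n = 1; n <= 10; ++n)
        std::printf("%lld ", T[n]);     // prints: 1 1 2 3 5 8 13 21 34 55
    std::printf("\n");
    return 0;
}
```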
2020-07-03 21:14:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7441756725311279, "perplexity": 296.5744968867147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882934.6/warc/CC-MAIN-20200703184459-20200703214459-00171.warc.gz"}
https://testbook.com/question-answer/a-wire-of-resistance-r1is-cut-into-five-equa--6073dcfdb6d28dba15e05b1e
# A wire of resistance R1 is cut into five equal pieces. These five pieces of wire are then connected in parallel. If the resultant resistance of this combination is R2, then the ratio R1/R2 is _______.

This question was previously asked in the ACC 124 ACT Paper (Held in Feb 2021).

1. 1/25
2. 1/5
3. 5
4. 25

Correct answer: Option 4 : 25

## Detailed Solution

Key Points

• R1 is cut into 5 equal pieces, so the resistance of each piece is R1/5.
• When the 5 pieces are connected in parallel, the net resistance is (R1/5)/5.
• Hence R2 = R1/25.
• Now, R1/R2 = R1/(R1/25) = 25.
• Hence, the answer is 25.

Written out with the parallel-resistance formula: each piece has a resistance of $$\dfrac{R1}{5}$$, and

• $$\dfrac{1}{R2} = \dfrac{1}{\dfrac{R1}{5}} + \dfrac{1}{\dfrac{R1}{5}}+ \dfrac{1}{\dfrac{R1}{5}} + \dfrac{1}{\dfrac{R1}{5}} + \dfrac{1}{\dfrac{R1}{5}}$$
• $$\dfrac{1}{R2} = \dfrac{5}{R1} + \dfrac{5}{R1} + \dfrac{5}{R1} + \dfrac{5}{R1} + \dfrac{5}{R1}$$
• $$\dfrac{1}{R2} = \dfrac{25}{R1}$$
• $$\dfrac{R1}{R2}=25$$
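A quick numeric check of the result (my own sketch; the value chosen for R1 is arbitrary since only the ratio matters):

```cpp
#include <cstdio>

int main() {
    double R1 = 100.0;            // arbitrary starting resistance
    double piece = R1 / 5.0;      // each of the five equal pieces
    double invR2 = 5.0 / piece;   // five equal resistors in parallel: 1/R2 = 5 * (1/piece)
    double R2 = 1.0 / invR2;
    std::printf("R1/R2 = %g\n", R1 / R2);   // prints 25
    return 0;
}
```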
2021-10-20 17:42:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5876954793930054, "perplexity": 6693.392526878693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00374.warc.gz"}
http://math.stackexchange.com/questions/56352/best-known-bounds-for-the-number-of-linear-extensions-of-a-poset
# Best known bounds for the number of linear extensions of a poset

Let $(P, \le)$ be a poset on $n$ elements $x_1\dots x_n$. A total order $<$ on the same set is said to be a linear extension of $\le$ if $(\forall i,j)\quad x_i \le x_j \rightarrow x_i < x_j$.

The problem of counting the number of linear extensions of a given poset is known to be $\#P$-complete: this is proved in Brightwell, Graham R.; Winkler, Peter (1991), "Counting linear extensions", Order 8 (3): 225–242. In the same paper some bounds are given to estimate this number. These bounds are improved in Kahn, J.; Kim, J. H. (1992), "Entropy and sorting", Proceedings of the 24th Annual ACM Symposium on Theory of Computing: 178–187.

Were these bounds improved again? What are the best known bounds for this problem?

- I take it you've looked at the paper I mentioned several days ago, and you have found it isn't useful? – Gerry Myerson Aug 14 '11 at 9:18
- No professor Myerson! I found it very useful. I put a bounty mostly because of the unusually low number of views. And because I wanted to try this feature :) – Jacopo Notarstefano Aug 14 '11 at 11:22
- OK. If you don't get an answer here, maybe you should write to one of the authors of one of the papers to ask the question. – Gerry Myerson Aug 15 '11 at 1:19
- I emailed professor Kahn about this. He said that he thinks arxiv.org/PS_cache/arxiv/pdf/0911/0911.0086v2.pdf is the state of the art right now. – Jacopo Notarstefano Aug 25 '11 at 21:21

Let $e(P)$ be the number of linear extensions of $P$. This paper gives bounds for the quantity $e(P)e(\overline P)$, where $P$ is a poset of dimension 2 and $\overline P$ is any poset whose comparability graph is the complement of the comparability graph of P. These bounds improve those that would be given from Kahn and Kim's theorems. On the other hand it's remarked that this does not give new bounds on the quantity $e(P)$.

- This paper is still useful, since in 1999 it remarks that Kahn and Kim's are the best known bounds for that quantity. Not exactly an answer to my question, but very close. – Jacopo Notarstefano Aug 14 '11 at 21:45
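The definition above lends itself to a brute-force illustration (my own sketch, not from the thread): for a small poset, one can count linear extensions by checking every permutation against the cover relations. This is exponential in $n$, which is consistent with the counting problem being $\#P$-complete in general.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    const int n = 4;
    // Cover relations a < b of an example poset: 0<1, 0<2, 1<3, 2<3 (a "diamond").
    std::vector<std::pair<int,int>> rel = {{0,1},{0,2},{1,3},{2,3}};

    std::vector<int> perm(n), pos(n);
    for (int i = 0; i < n; ++i) perm[i] = i;

    long long count = 0;
    do {
        for (int i = 0; i < n; ++i) pos[perm[i]] = i;   // position of each element in the row
        bool ok = true;
        for (const auto& r : rel)
            if (pos[r.first] > pos[r.second]) { ok = false; break; }  // order must be preserved
        count += ok;
    } while (std::next_permutation(perm.begin(), perm.end()));

    std::printf("linear extensions: %lld\n", count);     // prints 2 for the diamond
    return 0;
}
```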
2015-04-26 08:11:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9048882722854614, "perplexity": 270.8210772192126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654114.44/warc/CC-MAIN-20150417045734-00123-ip-10-235-10-82.ec2.internal.warc.gz"}
https://tutorial.math.lamar.edu/Solutions/CalcII/SurfaceArea/Prob2.aspx
Paul's Online Notes

### Section 2-2 : Surface Area

2. Set up, but do not evaluate, an integral for the surface area of the object obtained by rotating $$y = \sin \left( {2x} \right)$$ , $$\displaystyle 0 \le x \le \frac{\pi }{8}$$ about the $$x$$-axis using,

1. $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dy}}{{dx}}} \right]}^2}} \,dx$$
2. $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dx}}{{dy}}} \right]}^2}} \,dy$$

a $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dy}}{{dx}}} \right]}^2}} \,dx$$

Step 1

We’ll need the derivative of the function first. $\frac{{dy}}{{dx}} = 2\cos \left( {2x} \right)$

Step 2

Plugging this into the formula for $$ds$$ gives, $ds = \sqrt {1 + {{\left[ {\frac{{dy}}{{dx}}} \right]}^2}} \,dx = \sqrt {1 + {{\left[ {2\cos \left( {2x} \right)} \right]}^2}} \,dx = \sqrt {1 + 4{{\cos }^2}\left( {2x} \right)} \,dx$

Step 3

Finally, all we need to do is set up the integral. Also note that we have a $$dx$$ in the formula for $$ds$$ and so we know that we need $$x$$ limits of integration which we’ve been given in the problem statement. $SA = \int_{{}}^{{}}{{2\pi y\,ds}} = \int\limits_{0}^{{\frac{\pi }{8}}}{{2\pi y\,\sqrt {1 + 4{{\cos }^2}\left( {2x} \right)} \,dx}} = \require{bbox} \bbox[2pt,border:1px solid black]{{\int\limits_{0}^{{\frac{\pi }{8}}}{{2\pi \sin \left( {2x} \right)\,\sqrt {1 + 4{{\cos }^2}\left( {2x} \right)} \,dx}}}}$

Be careful with the formula! Remember that the variable in the integral is always opposite the axis of rotation. In this case we rotated about the $$x$$-axis and so we needed a $$y$$ in the integral. Note that with the $$ds$$ we were told to use for this part we had a $$dx$$ in the final integral and that means that all the variables in the integral need to be $$x$$’s. This means that the $$y$$ from the formula needs to be converted into $$x$$’s as well. Luckily this is easy enough to do since we were given the formula for $$y$$ in terms of $$x$$ in the problem statement.

As an aside, note that the $$ds$$ we chose to use here is technically immaterial. Realistically however, one $$ds$$ may be easier than the other to work with. Determining which might be easier comes with experience and in many cases simply trying both to see which is easier.

b $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dx}}{{dy}}} \right]}^2}} \,dy$$

Step 1

In this case we first need to solve the function for $$x$$ so we can compute the derivative in the $$ds$$.
$y = \sin \left( {2x} \right)\hspace{0.5in} \to \hspace{0.5in}\,\,\,x = \frac{1}{2}{\sin ^{ - 1}}\left( y \right)$ The derivative of this is, $\frac{{dx}}{{dy}} = \frac{1}{2}\frac{1}{{\sqrt {1 - {y^2}} }} = \frac{1}{{2\sqrt {1 - {y^2}} }}$

Step 2

Plugging this into the formula for $$ds$$ gives, $ds = \sqrt {1 + {{\left[ {\frac{{dx}}{{dy}}} \right]}^2}} \,dy = \sqrt {1 + {{\left[ {\frac{1}{{2\sqrt {1 - {y^2}} }}} \right]}^2}} \,dy = \sqrt {1 + \frac{1}{{4\left( {1 - {y^2}} \right)}}} \,dy = \sqrt {\frac{{5 - 4{y^2}}}{{4\left( {1 - {y^2}} \right)}}} \,dy$

Step 3

Next, note that the $$ds$$ has a $$dy$$ in it and so we’ll need $$y$$ limits of integration. We are only given $$x$$ limits in the problem statement. However, we can plug these into the function we were given in the problem statement to convert them to $$y$$ limits. Doing this gives, $x = 0:y = \sin \left( 0 \right) = 0\hspace{0.25in}\hspace{0.25in}x = \frac{\pi }{8}:y = \sin \left( {\frac{\pi }{4}} \right) = \frac{{\sqrt 2 }}{2}$ So, the corresponding $$y$$ limits are : $$0 \le y \le \frac{{\sqrt 2 }}{2}$$.

Step 4

Finally, all we need to do is set up the integral. $SA = \int_{{}}^{{}}{{2\pi yds}} = \require{bbox} \bbox[2pt,border:1px solid black]{{\int_{0}^{{\frac{{\sqrt 2 }}{2}}}{{2\pi y\sqrt {\frac{{5 - 4{y^2}}}{{4 - 4{y^2}}}} \,dy}}}}$

Be careful with the formula! Remember that the variable in the integral is always opposite the axis of rotation. In this case we rotated about the $$x$$-axis and so we needed a $$y$$ in the integral.

Also note that the $$ds$$ we chose to use is technically immaterial. Realistically one $$ds$$ may be easier than the other to work with but technically either could be used.
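Since both set-ups describe the same surface, a quick numerical sanity check (my own addition, not part of the original notes) is to evaluate the part (a) integral with Simpson's rule; the part (b) integral should give the same number.

```cpp
#include <cmath>
#include <cstdio>

// Integrand of the part (a) surface-area integral: 2*pi*sin(2x)*sqrt(1 + 4cos^2(2x)).
double f(double x) {
    const double pi = std::acos(-1.0);
    double c = std::cos(2.0 * x);
    return 2.0 * pi * std::sin(2.0 * x) * std::sqrt(1.0 + 4.0 * c * c);
}

int main() {
    const double pi = std::acos(-1.0);
    const double a = 0.0, b = pi / 8.0;
    const int n = 1000;                                  // even number of subintervals
    const double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);       // Simpson's rule weights
    std::printf("SA (part a, numeric) = %.8f\n", sum * h / 3.0);
    return 0;
}
```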
2021-12-04 17:15:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8878635764122009, "perplexity": 273.1590429251354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362999.66/warc/CC-MAIN-20211204154554-20211204184554-00289.warc.gz"}
https://plainmath.net/27139/let-be-the-region-between-the-circles-of-radius-and-radius-centered-at-the
# Let D be the region between the circles of radius 6 and radius 8 centered at the origin

Let D be the region between the circles of radius 6 and radius 8 centered at the origin that lies in the third quadrant. Express D in polar coordinates.

1) $D=\left\{\left(r,\theta \right)\mid 6\le r\le 8,\ 0\le \theta \le \frac{\pi }{2}\right\}$

2) $D=\left\{\left(r,\theta \right)\mid 6\le r\le 8,\ \pi \le \theta \le \frac{3\pi }{2}\right\}$

3) $D=\left\{\left(r,\theta \right)\mid 0\le r\le 8,\ \pi \le \theta \le \frac{3\pi }{2}\right\}$

4) $D=\left\{\left(r,\theta \right)\mid 6\le r\le 8,\ \pi \le \theta \le 2\pi \right\}$

dessinemoie

Given: let D be the region between the circles of radius 6 and 8 centered at the origin that lies in the third quadrant; express D in polar coordinates.

Here the region D is between the circles with radius r=6 and r=8 centered at the origin, that is, $6\le r\le 8$, and it lies in quadrant 3, so $\pi \le \theta \le \frac{3\pi }{2}$.

Therefore, $D=\left\{\left(r,\theta \right)\mid 6\le r\le 8,\ \pi \le \theta \le \frac{3\pi }{2}\right\}$

Therefore the 2nd option is the correct one.
2022-05-28 07:32:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9673869609832764, "perplexity": 311.9347780266093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663013003.96/warc/CC-MAIN-20220528062047-20220528092047-00571.warc.gz"}
https://quant.stackexchange.com/questions/24318/modeling-transaction-cost-with-single-counted-turnover-ratio
# Modeling transaction cost with single-counted turnover ratio

Why do people use a "single-counted" turnover ratio when modeling transaction costs? I read a paper (Factor Investing in the Corporate Bond Market) which uses only the purchase side as the turnover measure, multiplied by a spread assumption. This seems to assume that the sell side does not cost anything.

I couldn't find a definitive reference for this term and it doesn't seem to be widely used. However, I think I can follow the logic: in their set-up the portfolio is rebalanced monthly. So, at the start, positions are taken and costs incurred; since the positions are not liquidated at the end, the costs for this month are only one way. After the first month, position weights are updated, incurring new costs, but again one way because liquidation of these positions does not take place. This process continues, so there is never a need to liquidate and only one trip is made per month.

• This reminds me of why it can be difficult to think about multi-period portfolio optimization. The bonds mature in the future. If you're taking into account transaction costs, then there will be costs of investing in new bonds at that point. The only way I've thought of to simplify it (in the multi-period optimization approach) is to abstract away from investing in individual bonds and think more in terms of strategies. – John Dec 13, 2016 at 15:32

I would say it is because when you multiply the total turnover ratio by the full bid-ask spread you obtain double the transaction costs. The bid-ask spread contains double the transaction costs (it represents the cost of a round trip, i.e. of a consecutive buy and sell order), which is the reason why we often take half of the spread as an indication of the transaction costs. We usually assume that the real transaction cost is $| \text{price} - \text{mid-quote} |$, which should correspond to $\approx 0.5 \times \text{spread}$.

If you only take the purchase side (as in the paper you mentioned), it amounts to taking half of that doubled amount, which is exactly the transaction costs. Another way would be to compute $[0.5 \times \text{spread} \times \text{purchase turnover} ]+ [0.5 \times \text{spread} \times \text{sell turnover} ]$, which is indeed equal to $\text{purchase turnover} \times \text{spread}$ (assuming symmetry of the bid-ask spread and realizing that purchase turnover is equal to sell turnover).
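A quick numeric illustration of the last identity (the numbers here are made up purely for the example): with a quoted spread of 50 bp and a monthly purchase turnover of 20% (so sell turnover is also 20%), the two-sided computation gives $0.5 \times 50 \times 0.20 + 0.5 \times 50 \times 0.20 = 10$ bp per month, which is exactly the single-counted version $\text{purchase turnover} \times \text{spread} = 0.20 \times 50 = 10$ bp.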
2022-08-16 22:30:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4146111011505127, "perplexity": 594.537039911961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572581.94/warc/CC-MAIN-20220816211628-20220817001628-00398.warc.gz"}
https://learn.careers360.com/ncert/question-what-do-you-understand-by-inert-pair-effect/
# 11.29     What do you understand by   (a) inert pair effect

On moving down a group in the periodic table, the tendency of the $s$-orbital electrons to participate in bonding decreases. This effect is known as the inert pair effect. For example, in group 13 elements ($ns^2,np^1$) the stability of the +1 oxidation state is greater than that of the +3 oxidation state, due to the poor shielding of the $ns^2$ electrons by the $d$ and $f$ electrons; as a result, the $ns^2$ electrons are strongly held by the nucleus.
2020-04-07 21:19:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7408831119537354, "perplexity": 1035.640245217272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371805747.72/warc/CC-MAIN-20200407183818-20200407214318-00212.warc.gz"}
http://fivethirtyeight.com/features/a-presidents-economic-decisions-matter-eventually/
A President’s Economic Decisions Matter … Eventually This is In Real Terms, a column analyzing the week in economic news. Comments? Criticisms? Ideas for future columns? Email me, or drop a note in the comments. When Hillary Clinton said last spring that she would put her husband “in charge of revitalizing the economy,” I argued that Bill Clinton doesn’t deserve much credit for the late-1990s boom. That’s not a knock on the 42nd president in particular. It’s just that presidents, in general, have far less control over the economy than the public often thinks. It is impossible to know for sure, but it is likely that the housing boom and bust would have played out much the same way under President Gore as it did under President Bush, and it’s likely that the recovery would have been just as long (and just as disappointing) under President McCain or President Romney as it has been under President Obama. But that doesn’t mean presidents have no power over the economy at all, or that their policies are unimportant. Indeed, presidents, along with Congress, can exert profound influence over the economy, for good and ill. It’s just that their true impact is rarely what gets talked about in party conventions or on the campaign trail. True presidential impacts are often invisible — as much about what doesn’t happen as what does — and become clear only years or even decades after they leave office. “It’s really hard to assign to a president or even more broadly to a political establishment … credit or blame for things,” said Salim Furth, an economist at the conservative Heritage Foundation. “A lot of policies take a long time to take effect.” There are exceptions. In crises, presidential action can have an immediate and measurable effect. Most economists believe the stimulus package that Obama signed early in his administration helped dampen the effects of the recession; critics on the left argue that a larger or better-designed stimulus could have done more. (Assessments of Obama’s other crisis-era initiatives, such as the auto company bailouts and mortgage-assistance programs, have been more mixed.) And many experts (though not all) believe that the broader set of government actions — by both Obama and Bush, as well as the Federal Reserve — helped avert a full-blown depression. (Obama’s 2008 opponent, John McCain, fought the administration’s stimulus package in the Senate, but not because he objected to the idea of a stimulus — he wanted the bill to rely more heavily on tax cuts rather than spending. Many observers suspect that, had McCain become president, he would have proposed a bill closer to the one Obama passed.) “The moment when presidents can make a difference is during emergencies like a global financial crisis,” said Stan Veuger, an economist at the American Enterprise Institute, a conservative think tank. Outside of crises, presidents can’t do much to boost the economy in the short term. But they can hurt it. Furth pointed to President Nixon’s 1971 decision to impose price controls in response to inflation as an example of a policy that had a clear — and immediate — negative effect on the economy, one that took years to fully reverse. Economists on the right and left say there are various decisions — defaulting on the national debt or sharply limiting trade, for example — that could have a similar effect today. Most of the time, however, presidents affect the economy in more subtle, long-run ways. That can make their impact hard to measure. 
The decades-long process of opening up global trade has had a clear, positive impact on the U.S. economy, but the impact of any single trade agreement is generally modest. The entry of women into the workforce in the second half of the 20th century was one of the most important drivers of economic growth during those decades, but it is hard to identify a single policy that led to that shift. Government policies on taxes, health care, education and infrastructure all play important roles in determining the long-run path of the American economy, but it takes years if not decades for their effects to be felt. “Their impact in the short-run is minimal,” said Eugene Steuerle, a tax policy expert at the Urban Institute.

But while presidents can’t control how fast the economy grows, they have more influence over how that growth is divided. Presidents probably can’t do much, for example, to bring back lost manufacturing jobs, but they can try to help the workers who lost those jobs. At this week’s convention in Philadelphia, Democrats promised to raise the minimum wage, guarantee paid leave to new parents and hike taxes on the rich; those policies might have long-term effects on the size of the economy, but they would have the far more immediate effect of redistributing income from wealthier Americans to poorer ones. Republicans, of course, propose a different set of policies — lower taxes, reduced regulation — that would affect distribution in different ways. (Donald Trump also proposes various policies — bringing back manufacturing jobs, reducing immigration — that most economists consider either unrealistic or dangerous.)

In other words, neither Clinton nor Trump can realistically promise to avoid recessions or boost economic growth. Instead, voters should be asking themselves a series of questions: Which candidate do I trust to manage a crisis, or to avoid creating one? Which candidate will set up the economy for success after he or she leaves office? And which candidate’s policies will help the most people succeed in the economy that we have now?

## Minimum wage

Donald Trump wants to raise the federal minimum wage to $10 an hour — maybe. At a press conference on Wednesday, Trump said he’d “like to raise it to at least $10,” but also said “states should really call the shots.” (The federal wage is currently $7.25 an hour.) Trump’s campaign hasn’t clarified his position. But in one (possibly too generous) interpretation, Trump’s policy may be similar to that of many economists: Raise the federal minimum by a modest amount, then let states raise it further if they see fit. As I’ve written before, $15 an hour means something very different in high-cost California than in low-cost Mississippi. Or, as Trump told Bill O’Reilly on Tuesday, “It’s very expensive to live in New York.” (There is also, of course, significant variation in the cost of living within states, which explains why many of the most aggressive minimum-wage increases have come at the city level.)

New research this week supported that position. A preliminary study from economists at the University of Washington found that low-wage workers in Seattle have thrived since the city passed a $15 minimum wage law in 2014. (The wage floor is still being phased in; the $15 minimum won’t kick in for all workers until 2021.) Workers’ success, the report found, is largely because of the booming local economy, which has led to strong hiring and solid wage growth up and down the earnings spectrum.
But the law itself has had a modest positive effect on low-wage workers without seeming to hurt companies or cost jobs. On the other hand, separate research from James Sherk at the conservative Heritage Foundation concluded that a $15 federal minimum wage would eliminate 7 million jobs, with the brunt of the impact falling on the most vulnerable workers. Sherk’s analysis is, by definition, speculative, and other economists might well come up with a smaller estimated impact. But even many liberal economists are skeptical that a $15 minimum makes sense in low-cost parts of the country, where as many as half of workers earn less than $15 an hour. (On the other hand, leaving the decision up to the states often means that minimum wages will often be set based on politics, not economics. Five states, all in the South, have no state minimum wage.)

## The Fed

To hear Republicans tell it at their convention last week, the U.S. economy is a shambles. To hear Democrats tell it, the recovery is strong. Policymakers at the Federal Reserve? They’re somewhere in the middle.

The Fed this week, as expected, decided to leave interest rates unchanged, as it has at every meeting this year. The Fed, which once expected to raise rates four times this year, has become more cautious in the face of a slowing global economy. But in its statement on Wednesday, the Fed said “the near-term risks to the economic outlook have diminished” and hinted that it could raise rates as early as September.

## Number of the week

Newly opened businesses created 889,000 jobs in the final three months of 2015, the Bureau of Labor Statistics reported Wednesday.1 That’s the most since early 2008, when the recession was just getting started. The pickup in entrepreneurial activity is still modest, but if it lasts, it would be an important step forward for the economy. New businesses are the lifeblood of a dynamic economy; they spread innovation, improve productivity and, crucially, play a disproportionate role in creating new jobs. But for all the focus on Silicon Valley, startup activity has been muted in this recovery. Even with the latest rebound, startups account for a significantly smaller share of job creation than they did before the recession. Moreover, entrepreneurial activity was falling even before the recession. The rate at which Americans start new businesses has been falling for more than 30 years — a troubling trend that economists can’t fully explain.

## More from us

On Tuesday, Carl Bialik looked at the diverging paths of the two convention host cities. Philadelphia has begun to emerge from its late-20th century struggles; Cleveland, however, continues to shrink.

The Democratic convention has featured lots of talk about the economy. Catch up on our analysis from our week’s worth of live blogs.

## Elsewhere

No, the government is not cooking the books on the unemployment rate to make Obama look good. Matt O’Brien explains (again) in The Washington Post.

Gideon Lewis-Kraus takes a deep look in The New York Times Magazine at the progressive group trying to put inequality at the center of Clinton’s economic agenda.

Stop worrying about robots taking our jobs, writes Robin Harding in The Financial Times. Worry about climate change instead.

Note: In Real Terms will be taking next week off.

## Footnotes

1. The BLS figures technically don’t count new companies but rather new “establishments” — new business locations, whether or not the parent company is new.
But separate, less timely data from the Census Bureau that does look at companies shows a similar trend. Ben Casselman is a senior editor and the chief economics writer for FiveThirtyEight.
2016-12-10 14:48:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23734423518180847, "perplexity": 3570.5885148385387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543315.68/warc/CC-MAIN-20161202170903-00016-ip-10-31-129-80.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/55954/regression-estimation-difficulties
Regression Estimation difficulties

My regression problem is properly formulated, but is encountering serious computational difficulties.

Dependent: $Y$ = multinomial

Independent: $X_1, \dots, X_{90}$ = linearly independent set of variables. (I verified the independence. After all, I defined these variables.)

Consider design matrix $X$, Hessian $H$, and gradient $G$.

Difficulties:

condition_number($H$) = $10^9$

Variance Inflation Factors (VIFs): all of them $< 5.5$ except for one variable which has $VIF = 15.6$

eigenvalues$(H)$: range from $-10^{10}$ more-or-less smoothly to $-17.3$

This causes parameter estimation to go whacky - any sort of Newton-Raphson approximation encounters numerical problems when computing $H^{-1} \cdot G$.

Any suggestions or ideas?
2020-07-08 07:39:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6740514039993286, "perplexity": 4307.587524631372}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896905.46/warc/CC-MAIN-20200708062424-20200708092424-00360.warc.gz"}
https://cstheory.stackexchange.com/questions/39918/is-sorting-n-real-numbers-in-time-on-sqrt-log-n-and-linear-space-possib/40013
# Is sorting $n$ real numbers in time $O(n \sqrt{\log n})$ and linear space possible?

In the recent preprint https://arxiv.org/abs/1801.00776, it is claimed that $n$ real numbers can be sorted in time $$O(n \sqrt{\log n}),$$ and linear space. The paper seems reasonable, though I am not an expert in sorting algorithms. If correct, this would be significant, I believe, at least theoretically. The presentation of the main argument is somewhat informal and nontraditional, however.

Has anyone noticed/commented on this paper? It seems that the same author, Yijie Han, has published a related result on integer sorting, as discussed in Han's $O(n \log\log n)$ time, linear space, integer sorting algorithm.

• "We assume that a variable $v$ holding a real value has arbitrary precision and $int(v \cdot 2^a)$ for a nonnegative integer $a$ can be computed in constant time." This smells fishy, see computational-geometry.org/mailing-lists/compgeom-announce/… – Sasho Nikolov Jan 6 '18 at 3:12
• Every computable function from reals to integers is constant. – Andrej Bauer Jan 25 '18 at 7:12
• Andrej, that is in a different model of computation. – Kristoffer Arnsfelt Hansen Jan 25 '18 at 8:11
• Aaand now I no longer believe his earlier paper. – Jeffε Jan 30 '18 at 1:52
• what is the connection to $PSPACE\in P$ or $\#P\in FP$? – T.... Jan 25 '18 at 11:06
2020-02-24 23:20:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5804780125617981, "perplexity": 440.34480133668916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00481.warc.gz"}
https://codereview.stackexchange.com/questions/223863/find-the-number-of-all-specific-substrings-in-a-string
# Find the number of all specific substrings in a string

I would like to ask for a code review regarding a concrete exercise. Let's suppose I have to get the number of all specific substrings in a string. We call something specific if any of these conditions is true:

• The string consists of similar characters. For example: "aaaaa"
• The string consists of similar characters except the middle one, which can be anything. For example: "aabaa"

To do this, I first decided to get all combinations into a list. I did it in an O(n³) solution just with 2 basic for loops and using substring() (see more), this way:

```java
private static List<String> allCombinations(String s) {
    List<String> output = new ArrayList<>();
    for (int i = 1; i <= s.length(); i++) {
        for (int j = 0; j <= s.length() - i; j++) {
            output.add(s.substring(j, j + i));
        }
    }
    return output;
}
```

Subsequently, I used this method to count how many of these are special:

```java
static long substrCount(String s) {
    long res = 0L;
    List<String> output = allCombinations(s);
    for (String x : output) {
        if (isSpecial(x)) {
            res++;
        }
    }
    return res;
}
```

isSpecial() looks like this:

```java
private static boolean isSpecial(String input) {
    Set<Character> occurrences = new HashSet<>();
    for (int i = 0; i < input.length(); i++) {
        occurrences.add(input.charAt(i));
    }
    if (occurrences.size() > 2) {
        return false;
    }
    if (occurrences.size() == 1) {
        return true;
    }
    return input.length() % 2 == 1
            && input.charAt(0) == input.charAt(input.length() - 1)
            && input.charAt(input.length() / 2) != input.charAt(0);
}
```

I've got 2 questions:

1. This is a practice exercise with provided tests, and most of the test cases failed due to time complexity problems. How could I reduce the time complexity of my solution?
2. If you could give me any general feedback on what to improve - based on my code - I would be really thankful.

First off, I believe you are confusing the terms "specific" and "special" in the description.

# Method isSpecial

• You could move the if (occurrences.size() > 2) statement into the for loop, because once you know that there are more than 2 different characters, there is no need to continue adding more characters to the Set. Then you can also initialize the HashSet to an initial size of 3, because it never can grow larger than that.
• You don't need the part input.charAt(0) == input.charAt(input.length() - 1) in the final expression, because when the length is more than two and if there are exactly two different characters, it can never be false when input.charAt(input.length() / 2) != input.charAt(0) is true.
• Finally I'd put input.length() into a local variable. That will speed up the for loop by a tiny amount by avoiding the method call, and it will make the final expression a bit shorter and thus better to read.

```java
private static boolean isSpecial(String input) {
    int len = input.length();
    Set<Character> occurrences = new HashSet<>(3);
    for (int i = 0; i < len; i++) {
        occurrences.add(input.charAt(i));
        if (occurrences.size() > 2) {
            return false;
        }
    }
    if (occurrences.size() == 1) {
        return true;
    }
    return len % 2 == 1 && input.charAt(len / 2) != input.charAt(0);
}
```

# Method substrCount

This method can be shortened significantly by using Java 8's Stream API:

```java
static long substrCount(String s) {
    return allCombinations(s).stream().filter(ClassName::isSpecial).count();
}
```

(ClassName is the name of the class isSpecial is in.)
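A quick sanity check that applies to both the original and the revised version (my own worked example, not from the thread): for the input "aabaa", substrCount should return 9: the five single characters, "aa" (twice), "aba", and the full string "aabaa" are the special substrings.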
2021-05-11 23:55:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23913440108299255, "perplexity": 1661.6882336996478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00040.warc.gz"}
http://mathhelpforum.com/advanced-statistics/21832-de-montmort-s-problem-coincidence.html
# Thread: De Montmort's Problem of Coincidence

1. ## De Montmort's Problem of Coincidence

If n objects labelled 1 to n are randomly placed in a row, what is the probability that exactly m of the objects will be correctly placed? The equation is given below:

$c_{m,n} = {1 \over {m!}}\left[ {1 \over{2!}} - {1 \over{3!}} + {1 \over{4!}} - \cdots + { (-1)^{n-m} \over {(n-m)!}}\right]$

First, let's look at the case n=4 and m=4. The equation correctly gives ${1 \over {4!}}$. Having all of the objects in their correct place is only 1 way out of 4! ways to arrange the 4 objects.

Can anyone explain to me the case when m=3 and n=4? By right, it should also return ${1 \over {4!}}$, as if 3 out of 4 objects were in their correct place, then surely the last object must be in its place too. But how does the formula make me see this?

$C_{n,m} = \frac{1}{{m!}}\sum\limits_{k = 2}^{n - m} {\frac{{\left( { - 1} \right)^k }}{{k!}}}$. I know this one: $E(n,m) = \frac{{D(n - m)}}{{\left( {n - m} \right)!(m!)}},\mbox{ where } D(n) = n!\sum\limits_{k = 0}^n {\frac{{\left( { - 1} \right)^k }}{{k!}}}$

Now your given $C_{n,m}$ will give an incorrect answer if $n - m < 2$. But if you change the index to begin at 0: $C_{n,m} = \frac{1}{{m!}}\sum\limits_{k = 0}^{n - m} {\frac{{\left( { - 1} \right)^k }}{{k!}}}$ then $C_{4,3} =0$.
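As a quick check of the re-indexed formula: $C_{4,4} = \frac{1}{4!}\cdot\frac{(-1)^0}{0!} = \frac{1}{4!}$, while $C_{4,3} = \frac{1}{3!}\left(\frac{(-1)^0}{0!} + \frac{(-1)^1}{1!}\right) = \frac{1}{3!}(1-1) = 0$. The zero is exactly what one should expect: "exactly 3 of 4 correctly placed" is impossible, because fixing three objects forces the fourth into its correct place as well.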
2018-02-23 19:18:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8110305666923523, "perplexity": 482.3059250527696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814827.46/warc/CC-MAIN-20180223174348-20180223194348-00489.warc.gz"}
http://learningwitherrors.org/authors/
Learning With Errors

Posts by Preetum Nakkiran

- Constructive Hardness Amplification via Uniform Direct Product (August 24, 2016)
- New Theory Blog (August 13, 2016)
- Simple Lower Bounds for Small-bias Spaces (June 03, 2016)
- Fast Johnson-Lindenstrauss (May 27, 2016)

Posts by Tselil Schramm

- Discrepancy: a constructive proof via random walks in the hypercube (January 03, 2017)
- Discrepancy: definitions and Spencer's six standard deviations (December 26, 2016)
- Intro to the Sum-of-Squares Hierarchy (June 23, 2016)

Posts by Chenyang Yuan

- Deterministic Sparsification (July 06, 2016)

Posts by Pasin Manurangsi

- Pseudo-calibration for Planted Clique Sum-of-Squares Lower Bound (August 12, 2016)
2019-03-20 15:19:20
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3102550506591797, "perplexity": 8413.442053588704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202433.77/warc/CC-MAIN-20190320150106-20190320172106-00144.warc.gz"}
https://mathoverflow.net/questions/292787/is-there-a-local-version-of-near-coherence-of-filters
# Is there a 'local' version of Near Coherence of Filters? The axiom Near Coherence of Filters (NCF) is known to be independent of ZFC. Axiom (NCF I): For any two free ultrafilters $\mathcal D$ and $\mathcal E$ on $\mathbb N$, there exist finite-to-one functions $f,g: \mathbb N \to \mathbb N$ such that $f(\mathcal D) = g(\mathcal E)$. Where we define $f(\mathcal D) = \{A \subset \omega: f^{-1}(A) \in \mathcal D\}$. Equivalently $f(\mathcal D)$ is the unique ultrafilter generated by $\{f(D): D \in \mathcal D\}$ There is an equivalent version of the axiom. Axiom (NCF II): For any two free ultrafilters $\mathcal D$ and $\mathcal E$ on $\mathbb N$, there exists a finite-to-one monotone function $f :\mathbb N \to \mathbb N$ such that $f(\mathcal D) = f(\mathcal E)$. One can ask if the same property holds over an upper-subset of $\mathbb N^*$: First define the preorder $\le$ on $\mathbb N^*$ where $\mathcal F \le \mathcal D$ means that $\mathcal F = f(\mathcal D)$ for some finite-to-one monotone function $f :\mathbb N \to \mathbb N$. Now fix some $\mathcal F \in \mathbb N^*$ and define $(\mathcal F \uparrow) = \{ \mathcal D \in \mathbb N^*: \mathcal F < \mathcal D \}$ Is anything known about the following proposition? Axiom?: For any two free ultrafilters $\mathcal D, \mathcal E \in (\mathcal F \uparrow)$, there exists a finite-to-one monotone function $f :\mathbb N \to \mathbb N$ such that $f(\mathcal D) = f(\mathcal E) \in (\mathcal F \uparrow)$. In particular: 1. Is the proposition consistent? 2. Does it follow from any well-known additional axioms? 3. Does it hold for any known $\mathcal F$ under ZFC? Thanks in advance. • Maybe this is standard notation, but what do you mean by $f(\mathcal{D})$? Do you mean the set of inverse images of elements of $\mathcal{D}$ under $f$, or the set of forward images? – Paul McKenney Feb 14 '18 at 21:38 • The collection of sets whose inverse image is in $\mathcal D$. Equivalently the ultrafilter generated by all the the forward images. – Daron Feb 14 '18 at 23:19 • If the filter $\mathcal F$ is meager, then your Axiom seems to be equivalent to NCF. If $\mathcal F$ is not meager, then everything depends on the choice of $\mathcal F$. For example your Axiom trivially holds if $\mathcal F$ is an ultrafilter. So, maybe you want to ask if NCF is equivalent to your Axiom for any non-meager filter $\mathcal F$? – Taras Banakh Feb 18 '18 at 11:10 • @Daron Observe that non-meager filters for their existence need some strong form of Axiom of Choce. All definable filters (for example, analytic or coanalytic) are meager. – Taras Banakh Feb 18 '18 at 11:13 • Sorry, there was a mistake in the definition of the preorder. The two elements should have been the other way around. I see how this makes the axiom trivial for $\mathcal F$ an ultrafilter (we assume everything is an ultrafilter). I have corrected the mistake now. Is the axiom still trivial? – Daron Feb 18 '18 at 13:13
2021-04-15 08:49:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9403315782546997, "perplexity": 451.2404444401767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038084601.32/warc/CC-MAIN-20210415065312-20210415095312-00576.warc.gz"}
https://infoscience.epfl.ch/record/202306
## Structure and Properties of the Precursor/Successor Complex and Transition State of the CrCl2+/Cr2+ Electron Self-Exchange Reaction via the Inner-Sphere Pathway The electron self-exchange reaction CrCl(OH2)5^2+ + Cr(OH2)6^2+ -> Cr(OH2)6^2+ + CrCl(OH2)5^2+, proceeding via the inner-sphere pathway, was investigated with quantum-chemical methods. Geometry and vibrational frequencies of the precursor/successor (P/S) complex, (H2O)5Cr(III)ClCr(II)(OH2)5^4+ / (H2O)5Cr(II)ClCr(III)(OH2)5^4+, and the transition state (TS), (H2O)5CrClCr(OH2)5^4+ (‡), were computed with density functional theory (DFT) and conductor polarizable continuum model hydration. Consistent data were obtained solely with long-range-corrected functionals, whereby in this study, LC-BOP was used. Bent and linear structures were computed for the TS and P/S. The electronic coupling matrix element (H_ab) and the reorganizational energy (λ) were calculated with multistate extended general multiconfiguration quasi-degenerate second-order perturbation theory. The nuclear tunneling factor (Γ_n), the nuclear frequency factor (ν_n), the electronic frequency factor (ν_el), the electron transmission coefficient (κ_el), and the first-order rate constant (k_et) for the electron-transfer step (the conversion of the precursor complex into the successor complex) were calculated based on the imaginary frequency (ν‡) of the TS, the Gibbs activation energy (ΔG‡), H_ab, and λ. The formation of the precursor complex via water substitution at Cr(OH2)6^2+ was also investigated with DFT and found to be very fast. Thus, the electron-transfer step is rate-determining. For the substitution reaction, only a bent TS structure could be obtained. The overall rate constant (k) was estimated as the product K_A·k_et, whereby K_A is the equilibrium constant for the formation of the ion aggregate of the reactants Cr(OH2)6^2+ and CrCl(OH2)5^2+, Cr(H2O)6·CrCl(OH2)5^4+ (IAR). k calculated for the bent and linear isomers agrees with the experimental value. Published in: Inorganic Chemistry, 53, 18, 9923-9931 Year: 2014 Publisher: Washington, American Chemical Society ISSN: 0020-1669
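For orientation only (these are standard semiclassical electron-transfer expressions, not equations quoted from the paper), the quantities listed in the abstract typically enter the rate as

$$k = K_A\,k_{et}, \qquad k_{et} \approx \Gamma_n\,\kappa_{el}\,\nu_n\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right),$$

with the electronic transmission coefficient $\kappa_{el}$ obtained from $H_{ab}$ and $\lambda$ (e.g., via Landau–Zener-type theory).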
2018-04-20 16:37:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142438530921936, "perplexity": 8627.104348557275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944479.27/warc/CC-MAIN-20180420155332-20180420175332-00238.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-8-rational-functions-8-3-graph-general-rational-functions-8-3-exercises-problem-solving-page-569/31a
## Algebra 2 (1st Edition) Published by McDougal Littell # Chapter 8 Rational Functions - 8.3 Graph General Rational Functions - 8.3 Exercises - Problem Solving - Page 569: 31a #### Answer $l=\frac{100}{r^2\pi}$ #### Work Step by Step The volume of the cylinder is: $V=r^2\pi l$. Here $V=100$, hence $l=\frac{100}{r^2\pi}$.
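As a quick numerical illustration (using an arbitrary radius, not a value from the exercise): for $r = 2$,

$$l = \frac{100}{2^2\pi} = \frac{25}{\pi} \approx 7.96.$$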
2023-02-05 14:09:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7167940735816956, "perplexity": 2964.376893237752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500255.78/warc/CC-MAIN-20230205130241-20230205160241-00401.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=131&t=42294
## Converting to q rev $\Delta S = \frac{q_{rev}}{T}$ Marsenne Cabral 1A Posts: 59 Joined: Fri Sep 28, 2018 12:19 am ### Converting to q rev Even if the reaction you have is irreversible, do you still use change in entropy = qrev/T? Yousif Jafar 1G Posts: 59 Joined: Thu May 10, 2018 3:00 am ### Re: Converting to q rev Unless it is isothermal and irreversible, you use the other formulas, where change in entropy is nRln(T2/T1). Yvonne Du Posts: 64 Joined: Fri Sep 28, 2018 12:23 am ### Re: Converting to q rev Yes, qrev/T would be the formula for an irreversible reaction. ryanhon2H Posts: 60 Joined: Fri Sep 28, 2018 12:28 am ### Re: Converting to q rev Because entropy is a state function, the change will be the same regardless of whether the reaction is reversible or not, since it is not path dependent, so you can use qrev/T, as long as the reaction is isothermal. If it is isobaric (constant pressure), isochoric (constant volume), etc. you would have to use a different equation. 105169446 Posts: 32 Joined: Fri Sep 28, 2018 12:16 am ### Re: Converting to q rev When the change in entropy is reversible, you can also use delta S = nRln(V2/V1) or delta S = nCln(P1/P2) because pressure is inversely related to volume.
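A standard worked example that ties these formulas together (generic textbook numbers, not taken from this thread): for $n = 1$ mol of an ideal gas expanding isothermally and reversibly to twice its volume, $q_{rev} = nRT\ln(V_2/V_1)$, so

$$\Delta S = \frac{q_{rev}}{T} = nR\ln\frac{V_2}{V_1} = (1\ \text{mol})(8.314\ \text{J mol}^{-1}\text{K}^{-1})\ln 2 \approx 5.8\ \text{J/K}.$$

Because entropy is a state function, the same $\Delta S$ applies even if the actual path between those two states is irreversible; only the reversible heat $q_{rev}$ may be used in the formula.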
2020-09-19 06:39:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6419656276702881, "perplexity": 3586.0966572156926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00223.warc.gz"}
https://cs.uwaterloo.ca/events/phd-seminar-algorithms-and-complexity-succinct-color
# PhD Seminar • Algorithms and Complexity: Succinct Color Searching in One Dimension Wednesday, April 4, 2018 — 1:30 PM EDT Hicham El-Zein, PhD candidate David R. Cheriton School of Computer Science We present succinct data structures for one-dimensional color reporting and color counting problems. We are given a set of $n$ points with integer coordinates in the range $[1,m]$ and every point is assigned a color from the set $\{\,1,\ldots,\sigma\,\}$. A color reporting query asks for the list of distinct colors that occur in a query interval $[a,b]$ and a color counting query asks for the number of distinct colors in $[a,b]$. We describe a succinct data structure that answers approximate color counting queries in $O(1)$ time and uses $\mathcal{B}(n,m) + O(n) + o(\mathcal{B}(n,m))$ bits, where $\mathcal{B}(n,m)$ is the minimum number of bits required to represent an arbitrary set of size $n$ from a universe of $m$ elements. Thus we show, somewhat counterintuitively, that it is not necessary to store colors of points in order to answer approximate color counting queries. In the special case when points are in the rank space (i.e., when $n=m$), our data structure needs only $O(n)$ bits. Also, we show that $\Omega(n)$ bits are necessary in that case. Then we turn to succinct data structures for color reporting. We describe a data structure that uses $\mathcal{B}(n,m) + nH_d(S) + o(\mathcal{B}(n,m)) + o(n\lg\sigma)$ bits and answers queries in $O(k+1)$ time, where $k$ is the number of colors in the answer, and $nH_d(S)$ ($d=\log_{\sigma} n$) is the $d$-th order empirical entropy of the color sequence. Finally, we consider succinct color reporting under restricted updates. Our dynamic data structure uses $nH_d(S)+o(n\lg\sigma)$ bits and supports queries in $O(k+1)$ time. Location DC - William G. Davis Computer Research Centre 1304 200 University Avenue West Waterloo, ON N2L 3G1
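To make the query semantics concrete, here is a deliberately naive (non-succinct) baseline in Python. This is my own sketch, not the data structure from the talk, which achieves the succinct space bounds described above:

```python
from bisect import bisect_left, bisect_right

class NaiveColorSearch:
    """Baseline 1-D color reporting/counting: O(n) words of space,
    O(log n + points in range) per query. Points are (coordinate, color) pairs."""

    def __init__(self, points):
        self.points = sorted(points)                 # sort by coordinate
        self.coords = [x for x, _ in self.points]

    def report(self, a, b):
        """Distinct colors occurring at coordinates in [a, b]."""
        lo = bisect_left(self.coords, a)
        hi = bisect_right(self.coords, b)
        return {c for _, c in self.points[lo:hi]}

    def count(self, a, b):
        """Number of distinct colors in [a, b]."""
        return len(self.report(a, b))

ds = NaiveColorSearch([(1, 'red'), (3, 'blue'), (4, 'red'), (9, 'green')])
print(ds.report(2, 8))   # {'blue', 'red'}
print(ds.count(2, 8))    # 2
```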
2022-09-28 15:59:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41655465960502625, "perplexity": 3149.4205623307316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00498.warc.gz"}
https://mathsgrader.com/assessments/wk4c.html
- If you pressed go and nothing happened it means either you got the answer wrong or, sometimes, you might have got the correct answer but Mathsgrader hasn't recognised it. - Common problems in your answers include: unnecessary spaces and capital letters. - If you want to type a fraction like \frac{1}{2}, type 1/2. - If you want to type x^2 type x^2. - If you think you got it right and Mathsgrader didn't mark it, contact ross@mathsgrader.com Wk4 Assessment Tier C Q1 i) cm (2 marks) ii) cm^2 (2 marks) Q2 i) £ (2 marks) ii) £ (2 marks) Q3 a) £ (2 marks) ii) g (2 marks) Q4 cm^2 (2 marks) Q5 a) \le l < (2 marks) b) (2 marks) Q6 mm (3 marks) Q7 £ (3 marks) Q8 Lower bounds (4 d.p.): Upper bounds (4 d.p.): Value of f: to (4 marks)
2020-09-21 09:50:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37108325958251953, "perplexity": 13811.77479594752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00050.warc.gz"}
https://cs.stackexchange.com/questions/129595/what-is-the-difference-between-big-o-omega-and-theta
# What is the difference between Big O, Omega, and Theta? I know that this question is asked a lot of times, but I don't understand, or I think I got lost, when I was reading Introduction to Algorithms. They said, "It is not contradictory, however, to say that the worst-case running time of insertion sort is Omega(n^2), since there exists an input that causes the algorithm to take Omega(n^2) time." I read many articles saying that Omega is the best-case. As I understand it (correct me if I'm wrong), the best-case is Omega notation, and about the statement "since there exists an input that causes the algorithm to take Omega(n^2) time", I don't understand why they called it worst-case and why I can call it Omega(n^2); isn't Big O for the worst-case? Also, I don't understand why they said "Theta notation is a stronger notion than Big O". Why is that? And when should I say an algorithm is Big O of whatever, Theta, or Omega? Because I'm confused and I don't know which one is for what or how to use them. • $\Omega$ expresses a lower bound on a function. In that paragraph the function is $n\mapsto$ the running time of the input of size $n$ that makes the algorithm do the largest number of comparisons. $\Omega$ is not about algorithms, or instances of the problems that they solve. Likewise $O$ expresses an upper bound for a function, not the running time of any algorithm. – plop Aug 26 '20 at 13:52 • When you hear someone saying "This algorithm runs in $O(n)$ operations", it is loose language that omits what is the function that is being talked about. This is OK under the assumption that the reader can infer that they mean the function in the comment above. Those notations, however, can be used for other functions, like $n\mapsto$ the running time for an input of size $n$ that takes the least amount of comparisons. One just needs to say which function is being talked about. – plop Aug 26 '20 at 14:07 Check the definitions, e.g. in Hildebrand's Introduction to asymptotics. In a nutshell, for the usual computer science use for running times (all relevant functions positive), it is said that: • $$f(n) = O(g(n))$$ if there are $$n_0$$ and $$c > 0$$ so that for all $$n \ge n_0$$ it is $$f(n) \le c g(n)$$ • $$f(n) = \Omega(g(n))$$ if there are $$n_0$$ and $$c > 0$$ so that for all $$n \ge n_0$$ it is $$f(n) \ge c g(n)$$ • $$f(n) = \Theta(g(n))$$ if both $$f(n) = O(g(n))$$ and $$f(n) = \Omega(g(n))$$. Note that the last implies two possibly different $$n_0$$ values, and different values for $$c$$ (one each for $$O$$ and $$\Omega$$). Informally, $$O(\cdot)$$ gives an upper bound, $$\Omega(\cdot)$$ gives a lower bound, while $$\Theta(\cdot)$$ gives a sharp bound. Some examples: \begin{align*} n &= O(2^n) \\ (3/2)^n &= \Omega(n^3) \\ n^2 (2 + \sin n) &= \Theta(n^2) \end{align*} There are functions that don't have a simple expression, like: \begin{align*} f(n) &= \begin{cases} n^2 & n \text{ odd} \\ n^5 & n \text{ even} \end{cases} \end{align*} Here clearly $$f(n) = \Omega(n^2)$$ and $$f(n) = O(n^5)$$, both best possible among functions $$n^\alpha$$; there is no simple $$g$$ so that $$f(n) = \Theta(g(n))$$. Lower and upper bounds don't need to be "best possible" in any sense. Often people take great care to get best bounds, but they aren't implied in the notation at all. To use $$\Omega$$ to mean best case and $$O$$ for worst case is misleading at best.
For example, the best case for bubblesort on an array of $$n$$ elements is $$\Theta(n)$$ (bounded below and above by a linear function in the number of elements; when sorting an already sorted array it does one pass over the data), its worst case is $$\Theta(n^2)$$ (if the data are in reverse order). We could say it is $$\Omega(n^{1/2})$$ and $$O(n^3)$$ as well, both valid for best and worst cases.
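To make the bubblesort example above tangible, here is a small Python sketch (my own illustration) that counts comparisons for a sorted input versus a reverse-sorted input; the counts grow roughly linearly and quadratically, respectively:

```python
def bubble_sort_comparisons(data):
    """Bubble sort with early exit; returns the number of comparisons made."""
    a = list(data)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # already sorted: best case, one pass
            break
    return comparisons

for n in (100, 200, 400):
    best = bubble_sort_comparisons(range(n))           # sorted input
    worst = bubble_sort_comparisons(range(n, 0, -1))   # reverse-sorted input
    print(n, best, worst)   # best = n-1; worst = n(n-1)/2
```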
2021-07-24 05:43:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621273875236511, "perplexity": 334.999630220146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00215.warc.gz"}
https://electronics.stackexchange.com/questions/236115/open-collector-comparator-for-voltage-glitch-detection
# Open Collector Comparator for Voltage Glitch Detection I recently ran a round of PCBs which included an experimental circuit to detect short glitches in my input voltages. I attached the schematic for reference: The intent is that C29, in parallel with R27, should hold V- at ~3.1 V even if 3V3 has a short glitch. So a quick glitch on 3V3 should be detected by the comparator and trigger an output which I can read with additional circuitry. After testing a few PCBs and seeing initial input current spikes and some minor smoking from the board, I've narrowed it down to this chip as a possible culprit. I started taking voltage measurements and saw big disparities in V-: 0.96, 2.82, 1.58, 1.32, 3.96?? Seems like this chip is just completely fried, but I'm not sure why this would have happened. Does anything stick out as an obvious mistake? Some thoughts/notes: 1. I noticed the same input current jump when I applied only 3V3 to the board, without any power on 5V. 2. Without desoldering the resistors from the board, I can't accurately measure the individual resistor values because of the rest of the board. Edit 2016-06-03: Think I got to the bottom of it. I think the chips were actually still operational, and the smoking/initial current surge was coming from somewhere else (to be determined where). The actual problem had to do with my biasing circuitry for V-, and the comparator's actual bias currents. The datasheet gives the bias current anywhere from 25 - 400 nA. My "ideal" biasing current would be ~ 310 nA (3.3 V / 1.0634 MΩ). Considering the wide range of possible comparator bias currents, this could definitely explain the difference in V- readings I was seeing. For my next round of PCBs, I'll spec the ideal biasing current to be in the 10-100 uA range so the comparator bias current becomes insignificant. Does this make sense? • "minor smoking" – Spehro Pefhany Jun 3 '16 at 18:52 • Yeah, lol, noticed a brief small stream of smoke from the board when I first plugged in but it stopped quickly after. – Jim Jun 3 '16 at 18:58 Well, you could easily have fried the part when applying 3V3 with no power on the comparator. This is because input protection diodes would route the 3V3 onto the unpowered 5V bus and likely exceed the diode ratings and cause possibly catastrophic device failure. It's probably a good idea to put 100 kΩ in series with the +Vin pin. • That's what I was suspecting, but the datasheet implies it's not a problem: The input common-mode voltage of either input signal voltage should not be allowed to go negative by more than 0.3 V. The upper end of the common-mode voltage range is VCC – 1.5 V, but either or both inputs can go to 30 V without damage. 100k in series with both inputs for better safety? – Sean Houlihane May 24 '16 at 17:22 • Yeah I've just read that so I'm thinking about it..... The problem is that the DS does not explicitly say you can have inputs powered when the device is unpowered and this gives me cause for concern. Trying 100 k in series with both inputs is advisable now, I think. – Andy aka May 24 '16 at 17:27 I noticed the same input current jump when I applied only 3V3 to the board, without any power on 5V. This might be the problem. The data sheet is a little confusing but I suggest you pay attention to this part. The other thing that's more important is the pinout. Figure 2. Three different packages for the TS391. Your schematic shows the TS391IYLT. Make sure you haven't installed the RILT version.
• I tried a fresh board to see if this was the case, and applied 5V to power the entire board; I have a regulator providing 3V3 from 5V. But, upon connecting my power supply I saw an initial current jump to ~2+ amps (where steady state is about 300-400 mA). – Jim May 24 '16 at 19:25 • Also, on one of my boards I had previously (probably) fried, I de-soldered the comparator and saw the voltage at the pad for V- go to the expected 3.1 V. So, it does seem like the chip is giving me an issue. – Jim May 24 '16 at 19:29 • See update regarding package. – Transistor May 24 '16 at 20:35 • Good point @transistor ... I looked at the markings on my package and it does seem like the manufacturer used the right one (marking K510, IYLT, same pinout as ILT). However, I have had issues with my manufacturer using counterfeit or "gray market" parts in the past. I checked an old rev of my board with a nearly identical schematic (except it had a 2-pin jumper as an enable on the output of the comparator to feed an AND gate) and the voltages were fine. I'll keep digging. – Jim May 24 '16 at 21:30
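A quick back-of-the-envelope check of the conclusion in the question's edit (the 310 nA figure is taken from that edit and the 400 nA worst-case input bias current from the datasheet range quoted there; the larger divider currents are hypothetical alternatives, not values from the schematic):

```python
# Rough proxy for how badly the comparator input bias current can skew a
# high-impedance divider: compare it to the divider's standing current.
i_bias_max = 400e-9                          # worst-case input bias current (A)

for i_divider in (310e-9, 3.1e-6, 31e-6):    # 0.31 uA (as built), 3.1 uA, 31 uA
    ratio = i_bias_max / i_divider
    print(f"divider current {i_divider * 1e6:6.2f} uA -> "
          f"worst-case bias/divider ratio {ratio:6.1%}")

# Pushing the divider current into the 10-100 uA range drops the ratio to a
# few percent or less, which is the point of the proposed fix.
```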
2019-10-15 12:21:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39832454919815063, "perplexity": 2542.9142090491882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00025.warc.gz"}
https://www.findfilo.com/math-question-answers/the-general-solution-of-the-differential-equation-bh5
The general solution of the differential equation (y dx - x dy)/y = 0 | Filo Class 12 Math Calculus Differential Equations The general solution of the differential equation $\frac{y\,dx - x\,dy}{y} = 0$ is (A) (B) (C) (D) Solution: $y\,dx - x\,dy = 0$, so (for $x, y \neq 0$) $\frac{dx}{x} = \frac{dy}{y}$. Integrating: $\ln|x| = \ln|y| + \ln|c| = \ln|cy|$, hence $x = cy$, i.e. $y = x/c = c'x$.
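As a quick check of the result (assuming $y \neq 0$): substituting $y = c'x$, so $dy = c'\,dx$, back into the equation gives

$$\frac{y\,dx - x\,dy}{y} = \frac{c'x\,dx - x\,c'\,dx}{c'x} = 0,$$

so $y = c'x$ satisfies the differential equation for every constant $c'$.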
2021-07-27 18:35:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8415014147758484, "perplexity": 13569.187540553829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153474.19/warc/CC-MAIN-20210727170836-20210727200836-00255.warc.gz"}
https://cs.stackexchange.com/questions/148381/how-can-i-do-this-type-of-swap4-opt-between-4-edges-of-a-graph
# How can I do this type of swap (4-opt) between 4 edges of a graph? The double bridge move is a specific type of swap between 4 edges of a graph, also called 4-opt. It consists of removing 2 pairs of edges. Let's call them (I, I+1), (J, J+1) and (P, P+1), (Q, Q+1). The edges are removed and reconnected in this way: (I, J+1), (J, I+1), (P, Q+1) and (Q, P+1), like in the first image below. In this way, the graph that could be seen as circular now has 2 "bridges" (pairs of edges) crossing each other. I need this for the Travelling Salesman Problem. I want to know the best way to do the double bridging, in pseudocode. I have coded the 2-opt already, but this type of 4-opt cannot be reproduced by any sequence of 2-opts. Not unless some sort of reversal is also made in some edges of the graph, and I don't know how to do this. I searched the entire internet including several papers for any explanation regarding double bridge but none have helped me. • Thank you for your edits. I don't understand your question. What do you mean by the "best" way? "Best" by what criteria? It looks like this can be implemented in a totally straightforward way, with $O(1)$ time. What are the criteria you will use to evaluate which is "best", and that you will use to evaluate answers? What's the best approach you've been able to come up with so far, and why have you rejected it? – D.W. Jan 16 at 0:49 • @D.W. By best answer I mean mostly time complexity, if possible considering space complexity as well. O(1) would be perfect but I don't know if it's possible the way I coded it. I'm using C language and an array with a start and a finish node in each position, representing an edge. My approach uses a copy of the path and 2 arrays of successors, to indicate what the next node is, since this isn't a linked list. I update one of them according to what changes I made in the path when double bridging. Then I use the other in conjunction to order the nodes in the new array that represents the path. Jan 17 at 12:42 • I'm not confident I can do O(1) using an array, but even if it's possible to use fewer variables than I'm using it would be great already. By variables I mean the copy of the path and the 2 arrays of successors (I tried with one but couldn't quite figure out how to do it). Jan 17 at 12:46 If the path is stored as a doubly-linked list, you can do it in $$O(1)$$ time in a straightforward way: you have to change around 4 edges, and each change can be done in $$O(1)$$ time. With an array it takes $$O(n)$$ time but is also straightforward to implement.
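For what it's worth, here is a minimal array-based sketch of the move (in Python rather than the question's C, but the indexing carries over directly). Segments A|B|C|D are reconnected as A|D|C|B, which removes exactly the four edges (A_end, B_start), (B_end, C_start), (C_end, D_start), (D_end, A_start) and adds (A_end, D_start), (D_end, C_start), (C_end, B_start), (B_end, A_start), matching the reconnection described in the question with no segment reversed. It runs in O(n) time, as the answer's remark about arrays suggests:

```python
import random

def double_bridge(tour, p1=None, p2=None, p3=None):
    """Double-bridge (4-opt) move on a tour stored as a list of cities.
    Cut positions 0 < p1 < p2 < p3 < len(tour) split the tour into
    segments A, B, C, D; the new tour is A + D + C + B."""
    n = len(tour)
    if p1 is None:
        p1, p2, p3 = sorted(random.sample(range(1, n), 3))
    a, b, c, d = tour[:p1], tour[p1:p2], tour[p2:p3], tour[p3:]
    return a + d + c + b

tour = list(range(12))
print(double_bridge(tour, 3, 6, 9))
# [0, 1, 2, 9, 10, 11, 6, 7, 8, 3, 4, 5]
# removed edges: (2,3), (5,6), (8,9), (11,0); added: (2,9), (11,6), (8,3), (5,0)
```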
2022-01-26 04:15:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5348316431045532, "perplexity": 465.2273344740462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00028.warc.gz"}
https://aas.org/archives/BAAS/v35n5/aas203/866.htm
AAS 203rd Meeting, January 2004 Session 15 Comets, Kuiper Belt and Trans-Neptunian Objects Poster, Monday, January 5, 2004, 9:20am-6:30pm, Grand Hall ## [15.05] Outer Solar System and the Sinusoidal Potential D. F. Bartlett (University of Colorado) At recent meetings of the AAS I have presented posters defending a new, sinusoidal gravitational potential. Here the customary numerator in Newton's law is replaced by GM cos(2π r/λ₀), where λ₀ is a universal constant, 425 pc. Because there are 20 oscillations of the potential between the sun and the center of the Milky Way, galactic tidal forces should be about 120 times as strong as normally believed. Such a large tidal force is needed if the global galactic potential is to explain the surprisingly large modulation in the galactic longitude of the perihelia of comets teased from the Oort cloud. (A modulation with prominent peaks at longitudes of 45, 135, … degrees was first observed by Matese and Whitmire (1996). They now feel that an impactor is the culprit, but it could instead be the sinusoidal potential (Bartlett, AAS-199)). Here I discuss how the same large tidal force might be responsible for two more observations in our solar system. Recently, Shaviv and Veizer (2003) have found a periodicity of about 140 Myrs in the observed isotopic fraction of heavy oxygen (O-18) in terrestrial calcite. They ascribe the period to variations in the cosmic ray rate caused by the revolution of the solar system through 4 rotating spiral arms. I find it rather to be the effect of variations in the strength of the galactic tidal force as the sun rotates in the nearly stationary quadrupole field of the central bar. There is increasing evidence that the Kuiper belt really ends at about 50 AU. (Donnes 1997; Allen, Bernstein, and Malhotra 2001). The cause for this cut-off is unknown, but galactic tidal forces are dismissed. I will show how the new, stronger forces can be effective. Bulletin of the American Astronomical Society, 35#5 © 2003. The American Astronomical Society.
2016-09-28 22:58:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8399127721786499, "perplexity": 2314.782911047353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661768.10/warc/CC-MAIN-20160924173741-00004-ip-10-143-35-109.ec2.internal.warc.gz"}
https://solvedlib.com/n/chemical-connections-26-a-what-is-a-protonophore,10766680
# (Chemical Connections 26 A) What is a protonophore? ###### Question: (Chemical Connections 26 A) What is a protonophore?
2022-07-05 09:00:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18103016912937164, "perplexity": 9666.043070370928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00556.warc.gz"}
https://logosconcarne.com/2022/04/07/quantum-decoherence/
# Quantum Decoherence In the last two posts (Quantum Measurement and Wavefunction Collapse), I've been exploring the notorious problem of measurement in quantum mechanics. This post picks up where I left off, so if you missed those first two, you should go read them now. Here I'm going to venture into what we mean by quantum coherence and the Yin to its Yang, quantum decoherence. I'll start by trying to explain what they are and then what the latter has to do with the measurement problem. The punchline: Not very much. (But not exactly nothing, either.) In order to understand decoherence it's necessary to understand coherence, and to understand that we need to talk about phase. (Don't be fazed, it's not that bad.) The starting point is the wave-like nature of matter. This is most apparent with light, where the wave/particle duality is ancient and relatively easily demonstrated. But ever since Louis deBroglie (pronounced "deh-broy") we view all matter as having this duality. [For more, see What's the Wavelength?]. As an aside, light, especially seen as photons, is matter. Light has energy but is a form of matter. Energy is a property all matter can have. All types of light, from radio waves to x-rays and gamma rays, are matter. What distinguishes them is their energy, which, in light, is expressed as frequency. Radio waves are low frequency (low energy) while x-rays and gamma rays are high frequency (high energy). Frequency, casually speaking, is how often something happens. In physics, frequency refers to regularly occurring phenomena, such as swinging pendulums, vibrating springs, sine waves, or spinning objects. Here we're interested in the last two, in particular the relationship between them: Figure 1. Generating waves with a wheel (red=sine wave, blue=cosine wave). [from Wikipedia] There are three things to note in Figure 1. Firstly, the direct relationship between the generated waves (red and blue) and the spinning wheel (green). This correlation is fundamental. Electrical power from the power companies waves in this manner because it's generated by the spinning wheels of electrical generators. Secondly, note that the peaks and valleys of the two waves don't line up. This is because the red sine wave is a horizontal projection of the rotation while the blue cosine wave is a vertical projection. The vertical axis is 90° relative to the horizontal axis, so the blue wave is 90° out of phase from the red wave. If the peaks and valleys lined up, we'd say the waves are exactly in phase or have a phase angle of 0° relative to each other. On the other hand, if the peaks of one matched the valleys of the other, we'd say their phase angle was 180°, which is as out of phase as it's possible to get. They would be exactly out of phase, the opposite of being exactly in phase. Figure 2. Waves that are exactly (180°) out of phase. Obviously, projecting both waves onto the horizontal axis in the same direction generates identical, in-phase waves. Projecting one onto the horizontal axis in the opposite direction — 180° difference — generates a wave that is 180° out of phase. By varying the projection angles, we vary the phase angle. As an aside, note that only in the 90° case can we say the blue wave is the cosine wave relative to the red sine wave. (In general, a wave generated by circular motion is called a sine wave.) Thirdly, because both waves are generated by the same spinning circle, the distance between their respective peaks is the same for both.
That distance is their wavelength and how frequently those peaks pass a fixed point is their frequency. § Phase is the angular distance between two waves relative to specific matching points on each, for instance their respective peaks or valleys or any other spot we identify. Because of the relationship between sine waves and spinning wheels, phase is also the angular distance between two points on the wheel. If we think of a wheel as a clock, then, relative to 12 o’clock, 3 o’clock is 90° out of phase and 6 o’clock is 180° degrees out of phase. Above I said that 180° is as out of phase as possible — 6 being on the opposite side from 12. While 9 o’clock might seem like it’s 270° out of phase and, thus, more, it’s also -90° degrees, which is the same phase difference (in reverse) as 3 o’clock. To segue into the notion of coherence, consider generating these waves with two spinning wheels, both projected left on the horizontal axis (like the red wave is). If the clocks were in phase — both striking 12 o’clock at the same moment — then the generated waves would also be in phase. To generate the above animation, the blue wave’s clock would have to strike 3 o’clock at the same moment the red wave’s clock struck 12 — the clocks would have to be 90° out of phase. As an aside, we can calculate angular distance in degrees, which are arbitrary units (like inches or pounds), or we can use the natural units of radians where 180° degrees equals pi radians, and a full 360° is 2*pi radians. In physics we generally use radians. (One radian is about 57°. It’s the angle we get by moving along a circle the same distance as its radius.) There is one more very important way we could control the phase between the two waves. Imagine generating two waves from one clock using the same projection. The waves would be exactly in phase but if one wave took a longer path to some comparison point, at that point they would be out of phase. How much out of phase depends on the relative lengths of the two paths. Figure 3. Two waves (red and blue) from two clocks with different frequencies. They are in phase only momentarily at 0.0, +2.0, and +4.0 pi radians. They are (again momentarily) exactly out of phase at +1.0 and +3.0 pi radians. The green line traces the phase difference from 0° to 180° (aka 1.0 pi radians). (Note the scale difference there.) Lastly, if the two clocks ran at different speeds, even if the generated waves were both from the same projection, then the phase angle constantly changes. How fast it changes depends on the clock speeds relative to each other. In the extreme case, think of how a stopped clock is right twice a day compared to a normally running clock. § § Coherence is a broad topic (see its Wiki disambiguation page), but as the Yang to the Yin of decoherence, we can think of it as the property of a wave-like system that allows consistent interference effects. A crucial point is that the phase of a coherent system remains controlled and stable. In a decoherent system the phase is unstable, usually due to the effects of some other uncontrolled system. One example of a coherent system is laser light. It has amazing and useful properties because it is a coherent form of light. Its photons march in lockstep and have consistent phase. In the quantum realm, phase and interference are aspects of the complex numbers used to describe quantum systems. These act like tiny clocks (with the caveat that, in some cases, the clocks are stopped, and we care about where they are stopped). 
This is why quantum math has so many constructs that look like this: $\displaystyle e^{i 2\pi \theta x t}$ This complex exponential describes a "wheel" with frequency θ (theta). The x can stand for a rich variety of things depending on the system we're describing. The t, if included, is the time component. In particular, note the i, the imaginary unit. Here again, complex numbers are important in quantum math. [See Circular Math and its many links (especially Beautiful Math) for more.] For now, suffice to say constructs like this describe waves (which correspond to clocks or spinning wheels). We won't delve further into the mathematics of it. § Quantum coherence is important in two regards: Firstly, it allows interference effects such as seen in two-slit and interferometry experiments. Secondly, as a "stopped clock", it's a critical part of quantum states (such as used in quantum computing). Both of these are worthy of (multiple!) posts on their own. Here I'll only summarize them so we can move on to decoherence and what (little) it has to do with measurement. In two-slit experiments, a matter wave passes through two slits and interferes with itself to produce distinctive interference bands. While there is a classical wave description involving constructive and destructive interference, the quantum version is puzzling because, since we don't really know what a matter wave is, we don't really know what is interfering. The presence of the bands tells us something is, but the phenomenon isn't like the classical version because what we see is based on probabilities, which, remember, are the squares of the wavefunction at each point on the screen or detector. That wavefunction, by the way, contains complex exponentials similar to the equation above, and it's these spinning wheels that interfere. The dark bands, unlike in the classical description, are not cancelations of wave energy but cancelations of probability. Interferometry experiments, typically some form of the Mach–Zehnder interferometer, are more sophisticated and scientifically useful versions of the two-slit experiment. (The latter are mostly useful for just demonstrating the wave-like nature of matter.) In quantum computing, the complex exponential describes the state of a qubit, and here we typically want a "stopped clock" — a fixed state. Recall that a (pure) quantum state is described by a vector with a length of one. The state is defined by where the vector points (see previous post) as well as by its phase. In the Bloch sphere (image at top of this post), phase is represented as rotation around the sphere. (The basis vectors are the vertical axes labeled |0〉 and |1〉. See QM 101: Bloch Sphere for more.) § § This brings us to decoherence. As the name implies, it's the loss of a system's coherence. Two-slit and interferometry experiments depend on the phase of the waves being unperturbed. They require constant coherence. If environmental effects perturb the system enough, the interference effects are lost. In quantum computing, if the phase of the qubit is perturbed, then the state of that qubit is no longer what it should be. This is similar to randomly flipping bits in classical computing — the computation becomes corrupted. Much of the engineering of quantum computers involves preserving the coherence of its qubits. Note that, with regard to individual "particles" in a two-slit or interferometry experiment, altering the phase of their wave doesn't mean it no longer interferes with itself.
It means the interference shifts randomly due to the perturbation. If that happens to all the “particles”, the random nature of environmental influence smears the banding, destroying the overall effect. So, quantum decoherence is the corruption of a quantum system’s phase by mixing it with the random phases from the environment. In turn, the quantum system’s phase is dispersed into the environment (imagine a tiny drop of ink in five gallons of water). If coherence is a property a system can have, decoherence is a process it can experience. § Finally, what does decoherence have to do with measurement? Not much. At least, not much with the measurement itself. But I think it has a lot to do with the divide between a quantum system and the instrument that measures it. In any large system, the phases of the individuals are dispersed, not just into the environment, but among the myriad individuals of the system. (Recall the 10²⁷ singers.) A large system has decohered and, thus, acts not as quantum system but as a classical one. In particular, quantum behaviors such as superposition and interference are no longer possible. (This is why cats are always either alive or not.) Bottom line, when a classical system measures a quantum system, the effect is as if one more singer joined the crowd — that singer’s song is utterly and completely swamped by the “white noise” of the crowd. § § Next time I’ll talk about measurement in some specific contexts, such as the infamous Schrödinger’s Cat experiment. Stay coherent, my friends! Go forth and spread beauty and light. The canonical fool on the hill watching the sunset and the rotation of the planet and thinking what he imagines are large thoughts. View all posts by Wyrd Smythe #### 51 responses to “Quantum Decoherence” • Peter Morgan Your comment that “Electrical power from the power companies waves in this manner because it’s generated by the spinning wheels of electrical generators”, although usually true, put me into curiosity mode. I arrived at https://en.wikipedia.org/wiki/Power_inverter by way of https://en.wikipedia.org/wiki/Electricity_generation#Photovoltaic_effect. Solar energy is generated as DC, which is usually converted to AC without mechanically moving parts, although the physics of a power inverter somehow still has to provide the same mathematical effect. • Wyrd Smythe Indeed. AC is necessary for transmitting the current through the many power transformers along the way. Transformers only work with AC and usually are “tuned” to work most efficiently with AC of the expected frequency. That’s not the only reason we need AC from DC sources. Many home devices require it. Fan and other motors depend on it and so do many clocks (which use it as a timing source). Devices such as TVs require AC power to create the low-voltage DC that runs the device. What’s more, if you have a UPS, you have a system that stores (DC) power in a battery, and, if you lose the incoming AC, the UPS kicks in, inverting that DC into the AC your home devices expect. The concept is fairly old. Search for [inverters for cars] or [inverters for campers] and you’ll get many pages of hits. It has long been possible to buy inverters that connect to your car’s 12VDC battery and produce enough wattage to power a few “household” devices that depend on the voltage (and in such cases as mentioned above, the AC as well). Speaking of which, modern technology often shifts from mechanical to non-mechanical. 
Early inverters often used a tiny spinning or vibrating mechanism to create the AC, but modern ones are usually solid state. • Wyrd Smythe As a further thought to your last bit about the physics of an inverter and my last bit about modern technology replacing mechanical devices with solid state ones, often the circuitry of a modern inverter has something along the lines of a resonant circuit that acts like a harmonic oscillator — effectively an electronic wheel that generates a sine wave. • Measurement Specifics | Logos con carne […] the last three posts (Quantum Measurement, Wavefunction Collapse, and Quantum Decoherence), I’ve explored one of the key conundrums of quantum mechanics, the problem of measurement. […] • Objective Collapse | Logos con carne […] the last four posts (Quantum Measurement, Wavefunction Collapse, Quantum Decoherence, and Measurement Specifics), I’ve explored the conundrum of measurement in quantum mechanics. […] • Wyrd Smythe Here’s a pretty good video about decoherence from Sabine Hossenfelder: • The Power of Qubits | Logos con carne […] about the measurement problem in quantum mechanics (see Quantum Measurement, Wavefunction Collapse, Quantum Decoherence, Measurement Specifics, and Objective […] It turns out that “decoherence” has many uses in the literature, and they aren’t as closely-related as one might think. You’ve captured some of its common usage (in particular, for NMR), but in QIS it’s taken on a different (and I think simpler) usage. If you’ve learned about density matrices, I’d be happy to explain it. • Wyrd Smythe Hello and welcome. Yes, I’ve encountered how “decoherence” almost seems a catchword. I’m especially askance at how “decoherence” supposedly explains how, under the MWI, matter dodges the Pauli Exclusion Principle. I have a rough understanding of density matrices, so by all means, I’d love to hear what you have to say. (As an aside, “entanglement” is another concept that seems used in different ways. Sometimes as what Roger Penrose calls “quantanglement” — a wavefunction describing two particles where a measurement on one affects the other — and also, as far as I can tell, just to mean quantum information being dispersed into the environment. In the latter, a measurement on the particle (or the environment) would not affect the so-called “entangled” parts. Confusing!) Regarding density matrices, the book “The Structure and Interpretation of Quantum Mechanics” by RIG Hughes is what made it all click for me. It turns out to be really simple, though I can’t do it justice in a comment. Here’s an equivalent formulation that doesn’t use density matrices: Take the qubit |0>+|1>. Measure it in the basis {|0>+|1>, |0>-|1>}. Discover you get the first element with 100% chance. It is as though the two “components” of your qubit interfere: https://www.scottaaronson.com/democritus/lec9.html Now entangle it: |00> + |11>. If you measure either individual qubit in the preceding basis, you’ll now get 50-50 results, just as you would for a classical bit: https://www.scottaaronson.com/democritus/lec11.html. The interference is gone! The point is that any *sub*system appears to be classical, even though the overall state is still pure. Therefore, if you can’t access even a single one of the entangled qubits, you can’t exhibit quantum behavior any longer. I would work through that example carefully, because that’s what finally made things clear to me. You’ll find that it’s a bit of a pain, which is why density matrices were invented. 
They're a clever way to encapsulate the same information. Entanglement should mean the same thing in all contexts. The only confusing part is the details of the entangled systems. Entanglement with single qubits is easier to understand, but is in principle not different from entangling with fields (such as the gravitational field, which doesn't have a force carrier in the standard model — though IANAP, so I have limited understanding here). Basically, you have to understand how the individual systems are modeled. Are you familiar with the sense in which both qubits and wavefunctions are vectors in Hilbert spaces? The former is 2-dimensional, and the latter is (uncountably) infinite-dimensional, and so they require different descriptions (since nobody likes writing down tuples with infinitely many elements). • Wyrd Smythe Keeping in mind I'm learning this as I go, so it's highly likely I've misunderstood you… "Take the qubit |0⟩+|1⟩. Measure it in the basis {|0⟩+|1⟩, |0⟩-|1⟩}. Discover you get the first element with 100% chance." The link to Aaronson's text. The section below "Exercise 2 for the Non-Lazy Reader:" — is that the section in question? There aren't coefficients for the |0⟩+|1⟩ state. I assume 1/√2(|0⟩+|1⟩) — the "positive" superposition of |0⟩ and |1⟩ — and certainly if measured in the {|0⟩+|1⟩, |0⟩-|1⟩} basis would produce |0⟩+|1⟩. It has to because Ψ is already in that state. Same as having Ψ=|0⟩ and measuring in the {|0⟩,|1⟩} basis always returning |0⟩. What Aaronson is speaking of in that section is, as I understand it, slightly different. Given a starting state Ψ=|0⟩, applying a 90° rotation by using what looks like a Hadamard gate (except the minus would be in the lower right) does create the state 1/√2(|0⟩+|1⟩) and a further rotation by the same gate does result in a final state of |1⟩. Which, yes, I can see is a demonstration of quantum interference, and that requires a coherent system, but I'm not connecting the dots to how decoherence enters the picture. Except that if the qubit were to decohere, its state would be randomized. No doubt I'm just not getting it. "Now entangle it: |00⟩ + |11⟩. […] The interference is gone!" I feel I'm totally misunderstanding, because this is a very different situation to me. Now we have two qubits that are maximally entangled. Measuring either indeed gives 50/50 odds of |0⟩ or |1⟩. And immediately forces the other qubit to be the same. But I don't see the connection or how this applies to decoherence. Bell pairs do have to remain coherent for the entanglement to survive. If the system decoheres, the entanglement vanishes. This linked page has material about the MWI and decoherence that I want to read carefully when I have a chance. Looks interesting! As far as density matrices go, my understanding is that phase information is carried in the off-diagonal members (usually in the form of complex exponentials). Decoherence is when those vanish or become real numbers. Here's the Wiki page. It's the idea that fermions can't share the same state (but bosons can). It's why we have the periodic table — the electrons of atoms must each have their own state so they form shells. But in the MWI matter overlaps. Infinite worlds overlap. How is that possible if fermions can't share the same state?
“Entanglement should mean the same thing in all contexts.” Perhaps, and again this may be my misunderstanding, but it seems as if there is what, as I mentioned, Roger Penrose calls “quantanglement” — which involves, for instance, Bell pairs that are described by a single inseparable quantum state — and the more common entanglement that seems to just mean “all mixed up.” For instance, as far as I know, photons that bounce off an object, and which are sometimes then said to be “entangled” with it, may carry information away from the object, but measuring those photons doesn’t affect the object. The photons aren’t linked to the object the way Bell pairs are linked. I can’t help but think Penrose came up with the term to distinguish it from the “all mixed up” kind of entanglement. (Like we might say a box of rubber bands is entangled.) “Basically, you have to understand how the individual systems are modeled.” Absolutely agree! I think it’s critical to consider actual physical systems. I’m a hard-core realist, and I want to know what’s really going on! “Are you familiar with the sense in which both qubits and wavefunctions are vectors in Hilbert spaces? The former is 2-dimensional, and the latter is (uncountably) infinite-dimensional,…” Yes, and yes. Finite summations versus integrations. I’ve posted about the Bloch sphere and two-state spin systems (especially photon polarization). (My ultimate goal in studying QM is learning to solve the Schrödinger equation and write some software to create an animation of the two-slit experiment. But learning to actually solve partial differential equations turns out to be very hard (for me). Not sure I’ll reach that goal, but I’m enjoying the learning so far.) Alas, this is a poor medium for communication! Yes, I know what the Pauli exclusion principle is, but I had never heard about a connection to MWI. The whole point of them being “different worlds” is that they don’t belong to a single world, where things like PEP apply. But this conversation may take us too far astray of our original goal. I’m leaving out normalization constants because they’re too hard to type. Sorry! The state |0> will measure as |0>+|1> half the time. So will the state |1>. Naively, one might think that the state |0> + |1> could be treated as a classical combination of those two states, in which case it would *also* measure as |0>+|1> half the time. Of course, it does not — and when you try to see why not, you will find yourself using negative and positive amplitudes that cancel. This can be considered the simplest example of interference. An environment might now entangle with your qubit, producing the Bell state. *Now* if you try to calculate the above odds, you get 50/50 again — just like for a classical particle. Therefore, entangling with an environment destroys interference (and the degree destroyed depends on the maximality of entanglement). If the photon in your example does carry away information about an object, then the photon is entangled with that object in exactly the usual way you think of “entanglement.” Knowing the state of the photon gives you corresponding information about the object in precisely the way a Bell state does (but again, depending on *how* entangled they are). • Wyrd Smythe “Yes, I know what the Pauli exclusion principle is, but I had never heard about a connection to MWI.” Oh, sorry, I misunderstood what you were asking. It’s not something I read, but a question that’s occurred to me.
Under the MWI, the many worlds are supposed to be taken as physically real, but I don’t understand how an infinite number of roughly identical worlds can overlap physically. There may be a different world wavefunctions superposed, but how does a world wavefunction apply to individual electrons? Those have only a certain number of quantum properties, and the PEP applies to that set of properties. So, what extra quantum property gets around the PEP? The only answer I’ve ever found is “decoherence”, but I can’t see how that works. (Maybe Aaronson’s webpage will finally provide an answer.) If two physical systems have decohered, for instance myself and the chair I’m sitting in, the last thing they can do is coincide physically. But, yeah, this is a distraction from the discussion. Just something about the MWI that bugs me. “The state |0⟩ will measure as |0⟩+|1⟩ half the time. So will the state |1⟩.” If measured in the {|0⟩+|1⟩, |0⟩-|1⟩} basis, I agree. (It might be worth mentioning that it’s not possible to actually measure on that basis. We can’t measure a superposition. What’s typically done is applying a rotation operator such that a (|0⟩+|1⟩) state rotates to a |0⟩ state and a (|0⟩-|1⟩) state rotates to |1⟩. Those we can measure.) “Naively, one might think that the state |0⟩ + |1⟩ could be treated as a classical combination of those two states, in which case it would *also* measure as |0⟩+|1⟩ half the time.” The thing is, when treating the system in the {|0⟩+|1⟩, |0⟩-|1⟩} basis, the state (|0⟩+|1⟩) is an eigenstate of that basis, so it will always measure as that state. In the {|0⟩+|1⟩, |0⟩-|1⟩} basis, the (|0⟩+|1⟩) state isn’t a superposition. In that basis, the |0⟩ and |1⟩ states are the superpositions. Which is why you’ll get them half-and-half if measuring (|0⟩+|1⟩) or (|0⟩-|1⟩) states in the {|0⟩, |1⟩} basis. I was going to do the math for the example Aaronson has in Lecture 9 when I got a chance. I’ll put it in a comment once I do. It might help clarify things. Or at least show how I see it, for whatever that may be worth. “Knowing the state of the photon gives you corresponding information about the object in precisely the way a Bell state does.” But does measuring that photon and obtaining some information about the object change the object’s wavefunction? It does with Bell pairs. Ah, but it IS possible to measure in that basis — in fact, this is precisely what is going on in the spin-1/2 case. |0> = |z+>, √2/2(|z+>+|z->)=|x+>, √2/2(|z+>-|z->) =|x->. You are of course correct about being an eigenstate of the basis in the second example. This points to a fact that’s often overlooked in undergrad QM: “superposition” and “interference” are subjective labels (as can be seen in the spin case, where |x+> can be seen as a superposition or not). Yes, measuring the photon changes the wavefunction of the object, just as with any entanglement. Of course, this is what gives rise to “psi-epistemic” interpretations: they say that it is only our *information* that changed, since obviously “real effects” cannot propagate FTL. • Wyrd Smythe Ha! Well, yes, with spin states one can move the entire Stern-Gerlach device or rotate the polarization filter. In which case, it’s still eigenstates, not superpositions. (In quantum computing, as I understand it, qubits can only be measured on the {|0⟩, |1⟩} basis, but I’ve only yet dipped my toe in QC.) I think we’re on the same page here. Superposition is certainly relative to the basis. I’m not sure how much interference is. 
As I understand it, it depends on relative phase between two superpositions. I’ve never been entirely happy with that description due to Aaronson, the first one you linked to. (FWIW, I did the math. See below.) It seems more a special case of interference than the usual cases with two-slit experiments and beam-splitters. Start with a known state and rotate it. Then rotate it again. I think I do see the point being made, and perhaps you’re right I just need to really think about it. (But I find that many times things that are mathematically identical don’t to me seem physically identical, and as I mentioned, I’m a hard-core realist.) I worry that QM sometimes gets too lost in the math. In what way does measuring the photon change the object? I don’t have any problem with the nonlocality of, for instance, bell pairs. Or even Einstein’s spooky wavefunction nonlocal collapse, although that does require explanation. I think time is fundamental, but I can see space as being emergent from something deeper. Three-dimensional distance may be an artifact of classical reality. So, the photon linking “instantly” back to the object, no problem, but in what way can measuring the photon affect the object? The photon carrying away information *must* be entanglement, because that’s the very definition of “carrying information away” (at the quantum level)! It entangled with some particular observable of the object, and information about the photon gives information about that observable. Whether you call this “affecting” the object or not, it’s the same thing that happens in EPR experiments. > it’s still eigenstates, not superpositions. An eigenstate of the spin-x observable is a superposition of spin-z eigenstates. BTW, it’s not the rotation part that is interesting (to me). The way to look at it is: how does a classical mixture of |0> and |1> behave (wrt the Hadamard basis), and “why” does the superposition |0>+|1> behave differently? One answer is “duh, it’s an eigenstate” and the other is “ooh, magic, interference!” It depends on perspective. This is what I mean when I say that calling it interference is subjective. • Wyrd Smythe “The photon carrying away information *must* be entanglement, because that’s the very definition of “carrying information away” (at the quantum level)!” I agree the photon carries away information from the object. No question. “Whether you call this ‘affecting’ the object or not, it’s the same thing that happens in EPR experiments.” I don’t, and I’m not seeing how that’s true. In EPR experiments, two particles are described by a single wavefunction. Fully described by this wavefunction, as I understand it. A measurement of one changes the wavefunction, and this is reflected in both. My understanding also is that such a measurement destroys the entanglement. Now they have separate wavefunctions. A photon bouncing off an object seems asymmetrical to me and not described by the same wavefunction. The photon and the object are linked in having affected each other in the past, and therefore having information about each other, but I don’t seem them as linked after the interaction. So, I don’t see the situation being similar to a Bell test, but perhaps there are dots I’m not connecting (always possible!). “An eigenstate of the spin-x observable is a superposition of spin-z eigenstates.” Yes. I’m sure we’re on the same page here. This tangent was in response to my saying that we can’t measure superpositions, just eigenstates. 
I think you agree but are pointing out that any eigenstate is a superposition when viewed from another basis. I quite agree. Same page? “…how does a classical mixture of |0⟩ and |1⟩ behave (wrt the Hadamard basis),…” I’ve been thrown by what you’re calling “classical” and how it applies to quantum states. Isn’t any quantum state a superposition of a variety of basis states? I have a sense this takes us back to the beginning with decoherence to a non-quantum state, and I’ve encountered density matrices as being a big part of that. I do understand it has to do with the off-diagonal matrix members becoming zero (or real numbers rather than complex?). One more hill to climb! “Classical” just means that you have a coin that is either |0> or |1> with 50% probability. Imagine flipping a classical coin, getting one of those at random (without knowing which), and doing measurements on it. In the Hadamard basis, you will get 50-50. • Wyrd Smythe Forgive me for being obtuse, but are you talking about a system (such as a coin) that physically can only have two states (and no superpositions)? As opposed to a two-level qubit measured in the {|0⟩, |1⟩} basis, which can only have two outcomes (although it actually has R² degrees of freedom)? Starting a new thread because it’s hard to respond to the nested thread above. I just mean a qubit that is known to be in one of the definite states |0> or |1> with 50% probability. We can calculate expectations for measurements on this classically indeterminate quantum state (and they will agree for all observables with the results for one qubit of a Bell pair). Yes, I believe that macroscopic objects can be in superposition. I don’t know how much sense it makes for me to consider myself as being in superposition, however, since I am me. I don’t believe it is meaningful to talk about a “God’s-eye perspective” where that would make sense. In this sense, perhaps I am closest to QBism: QM is a tool for agents to predict things. I’ve heard the term “distributed solipsism” (in a different context) and I kind of like it. • Wyrd Smythe Forgive my obtuseness, but wouldn’t any random qubit, if measured, give |0⟩ or |1⟩ with 50% probability? I’m sure you mean something more specific. Is the qubit meant to be in the |0⟩ or |1⟩ eigenstate with 50% probability — and not in a superposition of them? Clearly, I’m confused! “Distributed solipsism”! In terms of putative other copies of yourself, or in terms of other people in your world? Suppose I give you a qubit that is either |z+> or |z-> with 50% probability, and ask you for the probabilities of spin-x outcomes. What are those odds? Distributed solipsism in terms of all beings. (And no, I don’t know how to define a “being” :)) I think the correct metaphysics is one that can’t be fully pinned down. The interesting bit for me is that my own world is not definite before information reaches me. That’s what I think QM is telling us: that there is genuine openness, and you are at the epicenter of it. I am the point at which things become real in my world — and the same for everyone else. • Wyrd Smythe What’s confusing me isn’t the probabilities of spin-X measurements (which are 50/50) but exactly what’s meant by “a qubit that is either |z+⟩ or |z-⟩ with 50% probability”. I see multiple meanings: The qubit is known to be ½|Z+⟩ and ½|Z-⟩ — specifically in one of those two eigenstates (say because of a previous measurement on spin-Z). The qubit is an unknown random superposition 1/√2(|a⟩±|b⟩).
In which case, 1/√2(|Z+⟩±|Z-⟩) is a valid superposition and satisfies the constraint “either |z+⟩ or |z-⟩ with 50% probability”. In both cases, though, spin-X measurements are 50/50. (The two cases would vary for non-orthogonal tests in that, a |Z+⟩ state is more likely to produce a spin-up measurement the closer the angle of measurement is to the positive Z-axis.) Which is all in the scope of QM, so I’m struggling with how it can be labeled “classical”. Anti-realism, solipsism,… suffice to say our metaphysics are quite different! 😆 I mean the former: one that is known to be in eigenstate |z+> OR in |z-> but you don’t know which, so you assign 50% odds. In that case, measurement of spin-x yields 50% expectation for both |x+> and |x-> . On the other hand, if the qubit is in definite state |z+> + |z->, then it will measure as |x+> 100% of the time (because it IS |x+>). (You may wonder how one can KNOW that it begins in that definite state, and the easiest answer is that you *prepare* it in |x+>, but then the story gets less interesting. Forgetting about *how* you know, the point of this exercise is to show that the classical mixture yields different odds than the superposition, and the reason can be meaningfully described as interference: amplitudes cancelling when you compute using the z basis.) • Wyrd Smythe Okay, I’m clear on it now. I kind of thought that’s what you meant but calling it “classical” really threw me. I reserve the term for the classical world that emerges from QM. (But then, it occurs to me your metaphysics might not include a classical reality. No collapse, no classical world?) FWIW, I posted about this topic: QM 101: Quantum Spin. The other thing that got in my way, I think, was how, as you no doubt know perfectly well, the situation can be formulated with the X-axis as the “fundamental” basis and the other two as superpositions: $\displaystyle|\textrm{up}\rangle=|{0}\rangle=|{X}^{+}\rangle\\[0.5em]|\textrm{down}\rangle=|{1}\rangle=|{X}^{-}\rangle$ So, then the particle in known Z-axis eigenstates is defined being one of: $\displaystyle|{Z}^{+}\rangle=\frac{|{0}\rangle\!+\!|{1}\rangle}{\sqrt{2}}\\[1.0em]|{Z}^{-}\rangle=\frac{|{0}\rangle\!-\!|{1}\rangle}{\sqrt{2}}$ Which is hard for me to think of as “classical” — mental block! 🤷🏼‍♂️ Just noticed this comment on the post you linked: “It does boil down to exactly what causes a branch, doesn’t it. It’s one of many things I keep hoping an expert who has really studied this would clear up. I don’t find “dealer’s choice” very satisfying.” This was precisely what kept me learning for years, and I finally got an answer that satisfies me a couple years ago. Sean Carroll will tell you that branches split “when decoherence happens” (maybe not an exact quote, but close). That sent me down the rabbit hole of learning about decoherence. In this context it turns out to be a synonym for “entanglement that is so complicated that we can no longer practically track or reverse it.” There is no point at which decoherence objectively “happens,” because “practically” is a subjective choice! The reason Carroll et al are happy with it is because it becomes “practically” impossible to track/reverse at a very early stage (nanoseconds or less in a Schrodinger’s Cat setup?). The only other option for an MWI’er (or *any* non-collapser, which is the vast majority) is to let it propagate until it reaches you. 
At that point you have no choice except to say that you are somehow the point at which possibilities become “real.” This is profoundly distasteful to most physicists (though Ed Witten is a notable exception, and I’m glad to have him in my corner :). • Wyrd Smythe I’m happy to discuss the MWI if you want to but should say up front that I’m entirely unsympathetic to the view. I’ve posted about the MWI quite a few times. Most recently a three-part series: MWI: Questions, part1, part 2, and part 3. If you want to go down that particular rabbit hole, we should move to one of those posts. Or better yet, perhaps on this post also from last year: BB #74: Which MWI? It has the virtue of almost no comments on the post so far. (The key difference between our views here being that I don’t believe macro-objects exhibit quantum behavior. I don’t think wavefunctions are meaningful for large objects.) • Wyrd Smythe As an aside,… Assuming the link refers to the section just below Exercise 2 for the Non-Lazy Reader:, the text beginning with ‘This “2-norm bit” that we’ve defined …’, I decided to do, as they say, the math. In part because I was struck by his unitary matrix being almost, but not exactly, a Hadamard gate. Which is defined for two-level qubits as: $\displaystyle{H}=\frac{1}{\sqrt{2}}\begin{bmatrix}{1}&{1}\\[0.7em]{1}&{-1}\end{bmatrix}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\end{bmatrix}$ But Aaronson’s unitary matrix in the text is: $\displaystyle{A}=\frac{1}{\sqrt{2}}\begin{bmatrix}{1}&{-1}\\[0.7em]{1}&{1}\end{bmatrix}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}$ It was only when I sat down and did the math that I realized what the A gate does and why Aaronson used it. Here I’ll show both, starting with the Hadamard gate. If applied to a qubit in the |0⟩ state, we have: $\displaystyle{H}|0\rangle=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{1}\\[0.7em]{0}\end{bmatrix}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}\end{bmatrix}=\frac{1}{\sqrt{2}}\left(|0\rangle\!+\!|1\rangle\right)$ And if applied to a qubit in the |1⟩ state, we have: $\displaystyle{H}|1\rangle=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{0}\\[0.7em]{1}\end{bmatrix}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{-1}{\sqrt{2}}}\end{bmatrix}=\frac{1}{\sqrt{2}}\left(|0\rangle\!-\!|1\rangle\right)$ OTOH, if we use the gate Aaronson uses, we have in the first case: $\displaystyle{A}|0\rangle=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{1}\\[0.7em]{0}\end{bmatrix}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}\end{bmatrix}=\frac{1}{\sqrt{2}}\left(|0\rangle\!+\!|1\rangle\right)$ Which is the same as above. 
For the |1⟩ case, we have: $\displaystyle{A}|1\rangle=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{0}\\[0.7em]{1}\end{bmatrix}=\begin{bmatrix}{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}\end{bmatrix}=\frac{1}{\sqrt{2}}\left(-|0\rangle\!+\!|1\rangle\right)$ And because global phase is irrelevant: $\displaystyle\frac{{e}^{i\pi}}{\sqrt{2}}\left(-|0\rangle\!+\!|1\rangle\right)=\frac{1}{\sqrt{2}}\left(|0\rangle\!-\!|1\rangle\right)$ So, effectively the same state as above. Aaronson moving the minus sign makes no difference in this case. However, they rotate other states differently. In fact, they rotate the |0⟩+|1⟩ and |0⟩-|1⟩ states differently, as shown below. Next step, apply the gates to the superpositions formed in the first step. Again, I’ll do both. Using the Hadamard gate on the |0⟩+|1⟩ superposition: $\displaystyle{H}\frac{\left(|0\rangle\!+\!|1\rangle\right)}{\sqrt{2}}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}\end{bmatrix}=\begin{bmatrix}{\frac{1}{2}+\frac{1}{2}}\\[0.7em]{\frac{1}{2}-\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}{1}\\[0.7em]{0}\end{bmatrix}=|0\rangle$ And on the |0⟩-|1⟩ superposition: $\displaystyle{H}\frac{\left(|0\rangle\!-\!|1\rangle\right)}{\sqrt{2}}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{-1}{\sqrt{2}}}\end{bmatrix}=\begin{bmatrix}{\frac{1}{2}-\frac{1}{2}}\\[0.7em]{\frac{1}{2}+\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}{0}\\[0.7em]{1}\end{bmatrix}=|1\rangle$ So, the Hadamard gate, applied twice, restores the original condition. The diagonal rotation axis of the Hadamard gate swings the |0⟩ and |1⟩ states to the |0⟩+|1⟩ axis and then back again. Things are different when applying the Aaronson gate because it’s rotating the Bloch sphere on a different axis (the Y axis, not a diagonal). For the A|0⟩(|0⟩+|1⟩) superposition: $\displaystyle{A}\frac{\left(|0\rangle\!+\!|1\rangle\right)}{\sqrt{2}}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}\end{bmatrix}=\begin{bmatrix}{\frac{1}{2}-\frac{1}{2}}\\[0.7em]{\frac{1}{2}+\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}{0}\\[0.7em]{1}\end{bmatrix}=|1\rangle$ And for the A|1⟩(|0⟩-|1⟩) superposition: $\displaystyle{A}\frac{\left(|0\rangle\!-\!|1\rangle\right)}{\sqrt{2}}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{\frac{1}{\sqrt{2}}}\\[0.7em]{\frac{-1}{\sqrt{2}}}\end{bmatrix}=\begin{bmatrix}{\frac{1}{2}+\frac{1}{2}}\\[0.7em]{\frac{1}{2}-\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}{1}\\[0.7em]{0}\end{bmatrix}=|0\rangle$ So, two applications of the A gate reverse the |0⟩ and |1⟩ states. Which obviously is Aaronson’s point. Nice! I’m sure what I’ve called the A gate has a formal name in QC, but I didn’t find it in a list of gates. 
I think it’s a straight two-dimensional rotation matrix for 45° along the Y axis: $\displaystyle{R}_{y}(45)=\begin{bmatrix}{\cos 45}&{-\sin 45}\\[0.75 em]{\sin 45}&{\cos 45}\end{bmatrix}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.75 em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}$ Which sort of makes sense, as it would rotate the Bloch sphere on its real plane (the XZ plane). Two 90° rotations would reverse the |0⟩ and |1⟩ states, and since it’s common to see θ/2, I suspect that’s how 90° becomes 45°. The |0⟩ and |1⟩ states are orthogonal, so two 45° rotations conceptually make an orthogonal angle. I was curious if not applying the global phase to the (-|0⟩+|1⟩) superposition made much difference: $\displaystyle{A}\frac{\left(-|0\rangle\!+\!|1\rangle\right)}{\sqrt{2}}=\begin{bmatrix}{\frac{1}{\sqrt{2}}}&{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}&{\frac{1}{\sqrt{2}}}\end{bmatrix}\!\!\begin{bmatrix}{\frac{-1}{\sqrt{2}}}\\[0.7em]{\frac{1}{\sqrt{2}}}\end{bmatrix}=\begin{bmatrix}{-\frac{1}{2}-\frac{1}{2}}\\[0.7em]{-\frac{1}{2}+\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}{-1}\\[0.7em]{0}\end{bmatrix}$ But a touch of global phase makes it the same: $\displaystyle{e}^{i\pi}\begin{bmatrix}{-1}\\[0.7em]{0}\end{bmatrix}=\begin{bmatrix}{1}\\[0.7em]{0}\end{bmatrix}=|0\rangle$ This might be a lot of old hats to you, but it was a fun exercise for me to work through. • Wyrd Smythe BTW, it’s those ½+½ and ½-½ parts that demonstrate the “interference” of the superposition. I had not worked that out before. Nice! At the end, you could also have just factored a -1 out of the whole expression to see quickly that A(-|0>+|1>) = -A(|0>-|1>) = -|0>. In any case, you have more facility manipulating single qubits and visualizing the Bloch sphere than I do 🙂 I really think you owe to yourself to learn at least the basics of density matrices and taking partial traces. As you may have learned already, the density matrix for a pure state |x> is the outer product |x> and |y> by 0.5(|x>+|1>)(<0|++|11>)(<00|+<1|. The off-diagonal terms have gone. This means we should predict it behaves like a classical mixture, which is also what we saw when we tried to measure one qubit in the Hadamard basis. Ugh, I can see my comment got messed up 😦 • Wyrd Smythe It sucks that WP doesn’t allow editing one’s own comments. I’ve messed up plenty! It’s especially annoying when you notice your error just as you click the [Submit] button. Split second too late! ARG! Density matrices are definitely on the list! I do know one can do some interesting things with outer products, such as: $\displaystyle|{0}\rangle\langle{0}|+|{1}\rangle\langle{1}|=\mathbb{I}$ And I’ve seen constructions like this: $\displaystyle\rho={c_1}|{0}\rangle\langle{0}|+{c_2}|{1}\rangle\langle{1}|+ {c_3}|{0}\rangle\langle{1}|+{c_4}|{1}\rangle\langle{0}|$ Which I think is a density matrix? That’s right. The interesting thing is to take the density matrix of the superposed state, then of a classical mixture (= Identity, as you’ve shown), and see how they differ. Then take the density matrix of the Bell state. Then trace over the second qubit, and watch it look exactly like a classical mixture. Since a density matrix captures all information needed to predict measurements in all bases, the fact that it looks the same as a classical mixture tells you that it IS one for all practical purposes. That’s the punchline of (this kind of) decoherence. 
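To make that punchline concrete, here is a small NumPy sketch comparing the density matrix of the superposition, the density matrix of a classical 50/50 mixture, and the reduced (partial-traced) state of one qubit of a Bell pair. Again this is only an illustration with my own variable names, not code from the thread:

```python
# Density matrices: pure superposition vs. classical mixture vs. one half of a Bell pair.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

rho_superposition = np.outer(plus, plus)                                 # |+><+|
rho_mixture = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)    # 50/50 mixture

bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)          # |00>+|11>, normalized
rho_bell = np.outer(bell, bell).reshape(2, 2, 2, 2)
rho_reduced = np.trace(rho_bell, axis1=1, axis2=3)                       # trace out the second qubit

print(rho_superposition)   # [[0.5 0.5] [0.5 0.5]]  -- off-diagonal (phase) terms present
print(rho_mixture)         # [[0.5 0. ] [0.  0.5]]  -- no off-diagonal terms
print(rho_reduced)         # [[0.5 0. ] [0.  0.5]]  -- identical to the classical mixture
```

The reduced state of the Bell-pair qubit comes out identical to the classical mixture, with no off-diagonal terms, which is the sense in which it behaves classically for all practical purposes.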
BTW, FWIW I am a hardcore anti-realist, so it is only natural that we will see some things differently 🙂 • Wyrd Smythe Ha, well that just makes things interesting. Do you have a favored interpretation? (FWIW, I’m sympathetic to some version of OR involving gravity. I think Penrose and others are on to something there.) p.s. That’s probably it for me tonight. Been very interesting! Perhaps more later or on another post. Have a nice evening! I don’t have a favorite interpretation, but I don’t believe in collapse. Instead, definite results are attained in “my world” only when superpositions interact with *me*. The same is presumably true of others (though I am prevented from ever being sure of this), but I only interact with the versions of them in my world. Good night! • Wyrd Smythe Good morning! Ha, yeah, collapse versus no collapse. Pretty much on opposite sides on that point! 😄 So, QM applies to all levels of reality, and wavefunctions are meaningful for things like trees, people, bridges? You, right now, are in superposition with other nearly identical versions of yourself (and ones much less identical)? Should have said no point at which decoherence *objectively* happens, because … • Wyrd Smythe I inserted objectively into your comment so future readers don’t have to notice your later comment. Later today I’ll delete it and my reply here. I’ll leave it long enough for you to see my reply (and complain if you feel I’ve crossed a boundary). Also, I’m sure my decoherence times are off by many orders of magnitude, but I couldn’t be bothered to look up real numbers so I just wrote a small-sounding number to be safe 🙂 • Wyrd Smythe From what I’ve read, I think you’re safely in the ballpark. (Those rapid decoherence times are a big factor in why I don’t believe macro-reality is quantum.) (Hmmm. These two comments make me think I’ll not delete this little sub-thread. I’ve never been entirely comfortable deleting comments, anyway.) This is why it was crucial for me to get a grasp on precisely what decoherence is saying. It says that we lose the ability to exploit superpositions almost instantly in practice, but that they are still there in principle. If you want to lose superpositions in principle, a collapse mechanism (like in GRW) is necessary. My belief is that theories like GRW were created because the straightforward results of unitary evolution are just too weird. Same with modern MWI: so long as Carroll can maintain that worlds split “when decoherence happens,” the observer can be taken out of the equation. I see all such attempts as avoiding the punchline that nature is trying to deliver straight to our faces — though I appreciate that you will disagree! • Wyrd Smythe I’m working my way through Roger Penrose’s The Road to Reality (2004), and I was delighted when I read that he, too, questions unitarity. I’m not the only one who questions the supposed conservation of information. (In part, because our physical conservation laws are based on symmetries, and I’ve never heard of any symmetry that leads to conservation of information.) Unitarity comes, in part, from the linearity of QM, which is another thing I think is worth questioning. Reality seems decidedly nonlinear to me, and most physical laws are nonlinear. I’m not a fan of stochastic objective collapse theories, but I do like the Diósi-Penrose model that brings gravity into the equation. 
Everett himself allowed for the possibility of objective collapse but disdained and handwaved away the idea as being tied to some putative N (of particles). I think it’s likely more complicated than that. The Heisenberg Cut, I believe, will turn out to depend on number of particles, environmental conditions, gravity, and perhaps other factors. Though, obviously, I (and Penrose) could be completely wrong. There’s a space experiment proposed for the ESA that I’d really like see performed. It’s called MAQRO. From their website: The experiment involves “observing free quantum evolution and interference of massive dielectric test particles with radii of about 100nm and masses up to several 10^10 atomic mass units (amu).” It goes on to point out that the current record is about 10^4 AMU, so this would involve a big jump. It might even falsify the notion that quantum mechanics applies to macro-objects. Or not! • Wyrd Smythe “This is why it was crucial for me to get a grasp on precisely what decoherence is saying. It says that we lose the ability to exploit superpositions almost instantly in practice, but that they are still there in principle.” My analogy is a vast stadium filled with trillions of people singing the same song (in very very ragged unison). In come a group of hundreds singing a different song. They disperse into the crowd of trillions still singing their song. Some people around them might take up the song, but the trillions singing a different song utterly overwhelm them. Eventually the new group is absorbed into the trillions and their song is lost. The other picture is that, somehow, this group of hundreds gets the whole stadium of trillions singing their song or some blend of song. I’ve just never bought the idea that a crowd of hundreds would have much effect on a crowd of trillions. It’s throwing ping-pong balls at an ocean liner. I think of amplification as quite distinct from decoherence. We know we can create a device that will kill a cat if a radioactive atom decays, and won’t if not — quite independently of the question of whether these two possibilities can exist in macroscopic superposition. That’s the amplification aspect. Decoherence says “well even if there IS a macroscopic superposition, there’s no way you can exploit it or even demonstrate it, so who cares?” And indeed for all practical purposes, nobody should care. But I still think it’s pointing us to something about the nature of reality. Of course, if we do get evidence for objective collapse, this perspective goes out the window! The more interesting case is if we don’t, since you can’t prove a negative and all that. I suspect we may end up wondering forever. BTW, I just put all my thoughts (on QM and tangentially-related topics) together in one (ugly) site, in case you find yourself bored and wanting to explore some bizarre ideas: https://github.com/monktastic/hackmd/blob/master/README.md • Wyrd Smythe “I think of amplification as quite distinct from decoherence.” Absolutely! I don’t know if you realized this, but this post is #3 of 5. There are two before and two after, all related to this general topic. The next post Measurement Specifics gets into my view about what’s happening when we amplify quantum events to the classical level. In a word: mousetrap! The last post in the series, Objective Collapse gets into my views about objective WF collapse. (The first two posts, Quantum Measurement and Wavefunction Collapse, just set the stage. I assume this post caught your eye because of its title and topic.) 
“The more interesting case is if we don’t, since you can’t prove a negative and all that.” “Interesting”? I think you misspelled “infuriating”! 😁 Yeah, no black swans. Like those poor folks chasing SUSY or Dark Matter. I’m about to take off to visit a friend and won’t be back until late. This will be my last comment for the day. Have a good one! (I’ll check out your page when I get a chance. Kinda booked until Tuesday or Wednesday or so.) • Wyrd Smythe Thanks for letting me know. I’ve fixed the permissions. I skimmed posts 4 and 5 just now. Will leave a few comments on 5. BTW, Penrose discusses the sense in which decoherence is only “for all practical purposes” on p802 of RTR: https://physics.stackexchange.com/a/386163/47309 • Wyrd Smythe I can access your GitHub page now! Caveat: got a lot on my plate right now (including a leaking water pipe and a plumber coming but not until Thursday morning). That Penrose quote is from the beginning of the section FAPP philosophy of environmental decoherence, which is part of the chapter 29, The measurement problem. It’s one of my favorite parts of the book, and I skipped to it soon after I started reading. (I’m linearly in chapter 20 where he’s still talking classical mechanics.) The reason I skipped to 29 was that it seemed a prerequisite for chapter 30, Gravity’s role in quantum state reduction, that I really wanted to read. After the quoted part, Penrose continues: “It would seem to be a strange view of physical reality to regard it to be ‘really’ described by a density matrix. Accordingly, such descriptions are sometimes referred to as FAPP […] The density-matrix description may be thus regarded as a pragmatic convenience: something FAPP, rather than providing a ‘true’ picture of fundamental physical reality. “There might, however, be a level at which the detailed phase relations indeed actually get lost, because of some deep overriding basic principle. Ideas aimed in this direction often appeal to gravity as possibly leading us to such a principle.” He introduces a few ideas that he describes in detail in chapter 30. In particular, he questions unitarity. (The black hole information paradox would be easily solved if information could indeed be lost.) FWIW, Penrose is also a pretty staunch realist.
2023-02-09 12:42:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7168042063713074, "perplexity": 1009.987075567611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00673.warc.gz"}
http://paperity.org/p/79122593/b-s-pi-b-bar-k-interactions-in-finite-volume-and-x-5568
# $$B_s\pi$$ – $$B\bar{K}$$ interactions in finite volume and X(5568) The European Physical Journal C, Feb 2017 The recent observation of X(5568) by the D0 Collaboration has aroused a lot of interest both theoretically and experimentally. In the present work, we first point out that X(5568) and $$D_{s0}^*(2317)$$ cannot simultaneously be of molecular nature, from the perspective of heavy-quark symmetry and chiral symmetry, based on a previous study of the lattice QCD scattering lengths of DK and its coupled channels. Then we compute the discrete energy levels of the $$B_s\pi$$ and $$B\bar{K}$$ system in finite volume using unitary chiral perturbation theory. The comparison with the latest lattice QCD simulation, which disfavors the existence of X(5568), supports our picture where the $$B_s\pi$$ and $$B\bar{K}$$ interactions are weak and X(5568) cannot be a $$B_s\pi$$ and $$B\bar{K}$$ molecular state. In addition, we show that the extended Weinberg compositeness condition also indicates that X(5568) cannot be a molecular state made from $$B_s\pi$$ and $$B\bar{K}$$ interactions. This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1140%2Fepjc%2Fs10052-017-4660-9.pdf Jun-Xu Lu, Xiu-Lei Ren, Li-Sheng Geng. $$B_s\pi$$ – $$B\bar{K}$$ interactions in finite volume and X(5568), The European Physical Journal C, 2017, 94, DOI: 10.1140/epjc/s10052-017-4660-9
2018-11-15 07:10:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.541853129863739, "perplexity": 1128.4580417600864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742567.46/warc/CC-MAIN-20181115054518-20181115080518-00250.warc.gz"}
https://www.physicsforums.com/threads/heat-conduction-in-nuclear-reactor-introductory-question.153114/
Heat conduction in nuclear reactor (introductory question)

1. Jan 26, 2007 *Alice*
1. The problem statement, all variables and given/known data
The elements of a boiling water nuclear reactor consist of long cylindrical rods of uranium dioxide (UO2) of diameter 8 mm surrounded by a thin layer of aluminium cladding. In the reactor core the elements are cooled by boiling water at 285°C with a heat transfer coefficient of 35 kW/m^2 K. If heat is generated uniformly within the rod at a rate of 760 MW/m^3, calculate the temperature of the cladding and the maximum temperature within the rod. The mean thermal conductivity of UO2 is 2.3 W/m K.
2. Relevant equations
first part: model it as a slab with equation Q = h(T*-285) ?
second part: T-T* = (Q/4k)r^2
3. The attempt at a solution
first part: I just assumed that one can model the situation at the wall as a slab with very thin walls, and using the equation above with the values of h = 35 kW/m^2 K and Q = 760 MW/m^3*2*pi*0.004 = 19.1 MW/m^3 does not give the required solution of 328°C. I don't really see what exactly is wrong with this calculation and would therefore appreciate it if anyone could give me a hint.
second part: completely fine
thanks a lot

2. Jan 26, 2007 chanvincent
Hint: First, you have to calculate the rate of heat generated per unit length of the rod... Then, calculate the rate of heat carried away by the water (it depends on the temperature difference between the rod and the water, and on the surface area). At the equilibrium point, the heat generated is equal to the heat carried away... set them equal to get the $$\Delta T$$
Last edited: Jan 26, 2007

3. Jan 27, 2007 *Alice*
Thanks a lot - I now have the answer!
Last edited: Jan 27, 2007

4. Jan 27, 2007 chanvincent
The Q on the LHS is the heat created per unit length, but h*(T-285) is heat carried away per unit area. You have to multiply something on the RHS to make this equation work... can you tell me what you have missed?
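For anyone reading along, a quick numerical check of the balance chanvincent describes (heat generated per unit length equals heat carried away per unit length), using only the numbers given in the problem and the second-part formula Alice quotes. The thin aluminium cladding resistance is neglected here, and this is just a sanity check, not a worked solution:

```python
# Energy balance on a unit length of rod:
#   q''' * pi * R^2  =  h * (2 * pi * R) * (T_clad - T_water)
# and the quoted conduction result T_max - T_clad = q''' * R^2 / (4 k), cladding neglected.
import math

q_gen = 760e6        # volumetric heat generation, W/m^3
R = 0.004            # rod radius, m (8 mm diameter)
h = 35e3             # heat transfer coefficient, W/m^2.K
T_water = 285.0      # boiling water temperature, deg C
k = 2.3              # mean thermal conductivity of UO2, W/m.K

T_clad = T_water + q_gen * R / (2 * h)      # pi*R^2 / (2*pi*R) reduces to R/2
T_max = T_clad + q_gen * R**2 / (4 * k)     # centreline temperature, if the formula applies as quoted

print(f"T_clad ≈ {T_clad:.0f} °C")   # ≈ 328 °C, matching the answer quoted in the thread
print(f"T_max  ≈ {T_max:.0f} °C")    # ≈ 1650 °C at the rod centreline
```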
2016-10-26 19:55:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7739942073822021, "perplexity": 996.7160044486014}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720972.46/warc/CC-MAIN-20161020183840-00459-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/university-calculus-early-transcendentals-3rd-edition/chapter-3-section-3-2-the-derivative-as-a-function-exercises-page-127/45
## University Calculus: Early Transcendentals (3rd Edition)

a) $y=f(x)$ is differentiable on $[-3,0)\cup(0,3]$. b) There are no domain points in $[-3,3]$ where $y=f(x)$ is continuous but not differentiable. c) $y=f(x)$ is neither continuous nor differentiable at $x=0$.

*Some things to remember about differentiability: - If $f(x)$ is differentiable at $x=c$, then $f(x)$ is continuous at $x=c$. (Theorem 1) - $f(x)$ is not differentiable at $x=c$ if the secant lines passing through $x=c$ fail to take up a limiting position or can only take up a vertical tangent. In other words, we can look at differentiability as the ability to draw a tangent line at a point, or the smoothness of the graph.

a) In this exercise, we have a discontinuous curve $y=f(x)$ on the closed interval $[-3,3]$. $f(x)$ has a smooth continuous curve on $[-3,0)$, then the graph jumps to $0$ at $x=0$, and then it jumps again to continue another smooth continuous curve on $(0,3]$. We see that on $[-3,0)\cup(0,3]$, the graph is all continuous and smooth; there are no corners or cusps or any points having vertical tangents. So $y=f(x)$ is differentiable on $[-3,0)\cup(0,3]$. At $x=0$, $f(x)$ is not continuous, meaning that $f(x)$ is also not differentiable at $x=0$.

b) There are no domain points in $[-3,3]$ where $y=f(x)$ is continuous but not differentiable. Wherever this particular $f(x)$ is continuous, it also happens to be differentiable (continuity does not imply differentiability in general; it simply works out that way for this graph).

c) $y=f(x)$ is neither continuous nor differentiable at $x=0$.
2019-12-07 03:43:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387114644050598, "perplexity": 116.49488817886142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540495263.57/warc/CC-MAIN-20191207032404-20191207060404-00195.warc.gz"}
https://www.win-raid.com/t15f37-NVIDIA-Optimized-nForce-Driverpacks-for-Vista-Win-16.html
#246 | RE: NVIDIA: Optimized nForce Driverpacks for Vista/Win7 Thu Jul 30, 2015 6:30 pm (Last edited: Thu Jul 30, 2015 6:32 pm) #248 | RE: NVIDIA: Optimized nForce Driverpacks for Vista/Win7 Thu Jul 30, 2015 7:47 pm (Last edited: Fri Jul 31, 2015 11:13 am) #251 | RE: NVIDIA: Optimized nForce Driverpacks for Vista/Win7 Sat Aug 29, 2015 5:39 pm (Last edited: Sat Aug 29, 2015 6:05 pm)
2017-12-12 21:55:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233470320701599, "perplexity": 4849.286658132352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948519776.34/warc/CC-MAIN-20171212212152-20171212232152-00011.warc.gz"}
https://eng.libretexts.org/Under_Construction/Book%3A_The_Joy_of_Cryptography_(Rosulek)/Chapter_0%3A_Review_of_Concepts_and_Notation/0.3%3A_Probability
# 0.3: Probability

A (discrete) probability distribution $$\mathcal{D}$$ over a set $$X$$ of outcomes is a function $$\mathcal{D}: X\rightarrow [0,1]$$ that satisfies the condition: $\sum_{x\in X} \mathcal{D}(x)=1\nonumber$ We say that $$\mathcal{D}$$ assigns probability $$\mathcal{D}(x)$$ to outcome $$x$$. The set $$X$$ is referred to as the support of $$\mathcal{D}$$. A special distribution is the uniform distribution over a finite set $$X$$, which assigns probability $$1/|X|$$ to every element of $$X$$. Let $$\mathcal{D}$$ be a probability distribution over $$X$$. We write $$Pr_{\mathcal{D}}[A]$$ to denote the probability of an event $$A$$, where probabilities are according to distribution $$\mathcal{D}$$. Typically the distribution $$\mathcal{D}$$ is understood from context, and we omit it from the notation. Formally, an event is a subset of the support $$X$$, but it is typical to write $$Pr[cond]$$ where “cond” is the condition that defines an event $$A = \{x\in X \mid x$$ satisfies condition cond$$\}$$. Interpreting $$A$$ strictly as a set, we have $$Pr_{\mathcal{D}}[A]\stackrel{\text{def}}{=}\sum_{x\in A}\mathcal{D}(x)$$. The conditional probability of $$A$$ given $$B$$ is defined as $$Pr[A|B] \stackrel{\text{def}}{=} Pr[A \cap B]/Pr[B]$$. When $$Pr[B] = 0$$, we let $$Pr[A | B] = 0$$ by convention, to avoid dividing by zero. Below are some convenient facts about probabilities: $Pr[A]=Pr[A|B]Pr[B]+Pr[A|\neg B]Pr[\neg B];\nonumber$ $Pr[A\cup B]\le Pr[A]+Pr[B].\tag{union bound}$

### Precise Terminology

It is common and tempting to use the word “random” when one really means “uniformly at random.” We’ll try to develop the habit of being more precise about this distinction. It is also tempting to describe an outcome as either random or uniform. For example, one might want to say that “the string $$x$$ is random.” But an outcome is not random; the process that generated the outcome is random. After all, there are many ways to come up with the same string $$x$$, and not all of them are random. So randomness is a property of the process and not an inherent property of the result of the process. It’s more precise and a better mental habit to say that an outcome is “sampled or chosen randomly,” and it’s even better to be precise about what the random process was. For example, “the string $$x$$ is chosen uniformly.”

### Notation in Pseudocode

When $$\mathcal{D}$$ is a probability distribution, we write “$$x\leftarrow \mathcal{D}$$” to mean that the value of $$x$$ is sampled according to the distribution $$\mathcal{D}$$. We overload the “$$\leftarrow$$” notation slightly, writing “$$x\leftarrow X$$” when $$X$$ is a finite set to mean that $$x$$ is sampled from the uniform distribution over $$X$$. We will often discuss algorithms that make some random choices. When describing such algorithms, we will use statements like “$$x\leftarrow \mathcal{D}$$” in the pseudocode. If $$\mathcal{A}$$ is an algorithm that takes input and also makes some internal random choices, then we can think of the output of $$\mathcal{A}(x)$$ as a distribution — possibly a different distribution for each input $$x$$. Then we write “$$y \leftarrow \mathcal{A}(x)$$” to mean the natural thing: run $$\mathcal{A}$$ on input $$x$$ and assign the output to $$y$$. The use of the arrow “$$\leftarrow$$” rather than an assignment operator “$$:=$$” is meant to emphasize that, even when $$x$$ is fixed, the output $$\mathcal{A}(x)$$ is a random variable depending on internal random choices made by $$\mathcal{A}$$.

### Asymptotics (Big-O)

Let $$f: \mathbb{N}\rightarrow \mathbb{N}$$ be a function.
We characterize the asymptotic growth of $$f$$ in the following ways: $f(n)\space \text{is}\space O(g(n)) \stackrel{\text{def}}{\Leftrightarrow} \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} < \infty \\ \Leftrightarrow \exists c>0: \text{for all but finitely many }n:\space f(n)<c\cdot g(n)\nonumber$ $f(n)\space \text{is}\space \Omega(g(n)) \stackrel{\text{def}}{\Leftrightarrow} \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} >0 \\ \Leftrightarrow \exists c>0: \text{for all but finitely many }n:\space f(n)>c\cdot g(n)\nonumber$ $f(n)\space \text{is}\space \Theta(g(n)) \stackrel{\text{def}}{\Leftrightarrow} f(n) \space\text{is}\space O(g(n)) \text{ and } f(n)\space \text{is } \Omega(g(n)) \\ \Leftrightarrow 0<\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} <\infty \\ \Leftrightarrow\exists c_1,c_2>0: \text{for all but finitely many }n:\space c_1\cdot g(n)<f(n)<c_2\cdot g(n)\nonumber$
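As a concrete illustration of the sampling notation and of the identity $$Pr[A]=Pr[A|B]Pr[B]+Pr[A|\neg B]Pr[\neg B]$$, here is a short Python sketch; the particular distribution and events are invented purely for the example:

```python
# Sketch: a small distribution D over X = {1, ..., 6}, two events A and B,
# a check of Pr[A] = Pr[A|B]Pr[B] + Pr[A|~B]Pr[~B], and the two sampling notations.
import random

D = {1: 0.1, 2: 0.1, 3: 0.2, 4: 0.2, 5: 0.1, 6: 0.3}   # probabilities sum to 1
A = {2, 4, 6}      # event: outcome is even
B = {4, 5, 6}      # event: outcome is at least 4

def pr(event, given=None):
    """Pr[event | given] under D, with the convention Pr[.|given] = 0 when Pr[given] = 0."""
    if given is None:
        return sum(D[x] for x in event)
    p_given = sum(D[x] for x in given)
    return 0.0 if p_given == 0 else sum(D[x] for x in event & given) / p_given

not_B = set(D) - B
lhs = pr(A)
rhs = pr(A, B) * pr(B) + pr(A, not_B) * pr(not_B)
print(lhs, rhs)    # both ≈ 0.6

# "x <- D": sample according to D.   "y <- X": sample uniformly from the support.
x = random.choices(list(D), weights=list(D.values()))[0]
y = random.choice(list(D))
```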
2021-03-04 00:30:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9588883519172668, "perplexity": 218.27447739755178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367949.58/warc/CC-MAIN-20210303230849-20210304020849-00346.warc.gz"}
https://cp4space.wordpress.com/2013/07/28/further-news/
## Further news As usual, I’ll start this post by mentioning the current state of the bounded gaps between primes project. The current values are $k_0 = 720, H = 5414$, with an unconfirmed result giving a value of H below 5000. It’s surprising how far these sieve methods are successfully being pushed — significantly below Ben Green’s estimate that 10000 would be the limit. Anyway, the 54th International Mathematical Olympiad finished a few days ago. There are no prizes for guessing which country came first (China), closely followed by South Korea. The United Kingdom came first in the EU and ninth in the world, which is the best result since 1996. Congratulations go to Geoff Smith for leading the team, Dominic Yeo for acquiring refreshments (and the other things that deputy leaders do), and especially to the six excellent contestants* who, between them, attained two gold medals, three silvers and a bronze. This is an excellent achievement! * In lexicographical order by surname, they are Andrew Carlotti, Gabriel Gendler, Daniel Hu, Sahl Khan, Warren Li and Matei Mandache. Andrew is now the country’s most prolific IMO contestant, with three gold medals and a bronze. Our other triple gold medallists are John Rickard (c.f. Treefoil) and Simon Norton (co-discoverer of the Harada-Norton sporadic group). The next impending mathematical Olympiad worthy of mention is the Mathematical Olympiad for Girls (or MOG), which is used for finding the UK EGMO team. I’ll be mentioning it closer to the time, combined in an article with the Miracle Octad Generator (purely on the basis that they have the same acronym). I think that my recommendation that medals and certificates be awarded as opposed to gold stickers on returned scripts has been effected, although if this is not the case and you have been affected by this issue, do not hesitate to contact me. In other news, Stuart Gascoigne recently overtook Joseph Myers on the cipher-solving leaderboard. This entry was posted in Uncategorized. Bookmark the permalink. ### 4 Responses to Further news 1. Maria says: Ben Green – isn’t he the one moving from Cambridge to Oxford? ;P • apgoucher says: We still have his doctoral advisor (Gowers), uncle (Imre) and grandfather (Professor Bollobás), though. http://genealogy.math.ndsu.nodak.edu/id.php?id=22719 • James Cranch says: But Oxford has uncle Oliver Riordan. (Then again Cambridge also has uncle Keith Carne and uncle Andrew Thomason). Sheffield has uncle John Haslegrave and maybe a third cousin or two…
2018-06-21 14:28:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5823810696601868, "perplexity": 6311.526875875452}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864172.45/warc/CC-MAIN-20180621133636-20180621153636-00453.warc.gz"}
https://discuss.codechef.com/t/avgarr-editorial/100578
# AVGARR-Editorial

Setter: Jeevan Jyot Singh
Tester: Nishank Suresh, Satyam
Editorialist: Devendra Singh

Simple

# PREREQUISITES:

The mean of an array B of size M is defined as: \texttt{mean}(B) = \dfrac{\sum_{i = 1}^{M} B_i}{M}.

# PROBLEM:

You are given two integers N and X. Output an array A of length N such that:

• -1000 \le A_i \le 1000 for all 1 \le i \le N.
• All A_i are distinct.
• \texttt{mean}(A) = X.

If there are multiple answers, print any. It is guaranteed that under the given constraints at least one array satisfying the given conditions exists. As a reminder, the mean of an array B of size M is defined as: \texttt{mean}(B) = \dfrac{\sum_{i = 1}^{M} B_i}{M}. For example, \texttt{mean}([3, 1, 4, 8]) = \frac{3 + 1 + 4 + 8}{4} = \frac{16}{4} = 4.

# EXPLANATION:

The problem can be divided into two cases:

\textbf{Case 1}: N is even. To achieve an average of X, the sum of all the values of array A must be N\cdot X. We simply create N/2 pairs such that their sum is 2\cdot X each. The total sum of these pairs would be 2\cdot X\cdot N/2 = N\cdot X. Therefore the average of these N values is (N\cdot X)/N = X. One possible way to create these pairs is to pair values around X, i.e. X-1 and X+1, X-2 and X+2, and so on. To print the answer for this case, run a loop from i=1 to i=N/2 and on each iteration print the two values X-i and X+i.

\textbf{Case 2}: N is odd. To achieve an average of X, the sum of all the values of array A must be N\cdot X. We simply create (N-1)/2 pairs such that their sum is 2\cdot X each and append X to the end of the array. The total sum would be 2\cdot X\cdot (N-1)/2 + X = N\cdot X - X + X = N\cdot X. Therefore the average of these N values is (N\cdot X)/N = X. One possible way to create these pairs is to pair values around X, i.e. X-1 and X+1, X-2 and X+2, and so on. To print the answer for this case, run a loop from i=1 to i=(N-1)/2 and on each iteration print the two values X-i and X+i. Then print X at the end.

The value of A_i never exceeds 600 and never drops below -500 in this approach, which satisfies the given constraints.

# TIME COMPLEXITY:

O(N) for each test case.

# SOLUTION:

Setter's solution #ifdef WTSH #include <wtsh.h> #else #include <bits/stdc++.h> using namespace std; #define dbg(...)
#endif #define int long long #define endl "\n" #define sz(w) (int)(w.size()) using pii = pair<int, int>; const long long INF = 1e18; const int N = 1e6 + 5; void solve() { int n, x; cin >> n >> x; vector<int> a; for(int i = 1; i <= n / 2; i++) a.push_back(x - i), a.push_back(x + i); if(n % 2) a.push_back(x); for(int x: a) cout << x << " "; cout << endl; } int32_t main() { ios::sync_with_stdio(0); cin.tie(0); int T; cin >> T; for(int tc = 1; tc <= T; tc++) { // cout << "Case #" << tc << ": "; solve(); } return 0; } Tester-1's Solution(Python) for _ in range(int(input())): n, x = map(int, input().split()) for i in range(n//2): print(x-i-1, x+i+1, end = ' ') if n%2 == 1: print(x) else: print('') Tester-2's Solution #include <bits/stdc++.h> using namespace std; #ifndef ONLINE_JUDGE #define debug(x) cerr<<#x<<" "; _print(x); cerr<<nline; #else #define debug(x); #endif /* ------------------------Input Checker---------------------------------- */ long long readInt(long long l,long long r,char endd){ long long x=0; int cnt=0; int fi=-1; bool is_neg=false; while(true){ char g=getchar(); if(g=='-'){ assert(fi==-1); is_neg=true; continue; } if('0'<=g && g<='9'){ x*=10; x+=g-'0'; if(cnt==0){ fi=g-'0'; } cnt++; assert(fi!=0 || cnt==1); assert(fi!=0 || is_neg==false); assert(!(cnt>19 || ( cnt==19 && fi>1) )); } else if(g==endd){ if(is_neg){ x= -x; } if(!(l <= x && x <= r)) { cerr << l << ' ' << r << ' ' << x << '\n'; assert(1 == 0); } return x; } else { assert(false); } } } string ret=""; int cnt=0; while(true){ char g=getchar(); assert(g!=-1); if(g==endd){ break; } cnt++; ret+=g; } assert(l<=cnt && cnt<=r); return ret; } long long readIntSp(long long l,long long r){ } long long readIntLn(long long l,long long r){ } } } /* ------------------------Main code starts here---------------------------------- */ int MAX=100000; void solve(){ if(n&1){ cout<<x<<" "; n--; } int l=-5,r=2*x+5; for(int i=1;i<=n;i+=2){ cout<<l--<<" "<<r++<<" "; } cout<<"\n"; return; } int main(){ ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif while(test_cases--){ solve(); } assert(getchar()==-1); return 0; } Editorialist's Solution #include "bits/stdc++.h" using namespace std; #define ll long long #define pb push_back #define all(_obj) _obj.begin(), _obj.end() #define F first #define S second #define pll pair<ll, ll> #define vll vector<ll> const int N = 1e5 + 11, mod = 1e9 + 7; ll max(ll a, ll b) { return ((a > b) ? a : b); } ll min(ll a, ll b) { return ((a > b) ? b : a); } void sol(void) { int n, x; cin >> n >> x; if (n & 1) { cout << x << ' '; for (int i = 1; i <= (n / 2); i++) cout << x - i << ' ' << x + i << ' '; cout << '\n'; } else { for (int i = 1; i <= (n / 2); i++) cout << x - i << ' ' << x + i << ' '; cout << '\n'; } return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL), cout.tie(NULL); int test = 1; cin >> test; while (test--) sol(); } In the problem statement you have mentioned an example in which the mean is coming as a decimal number but the editorial part is saying mean is in integer and in whole contest i am struggling to solve only because of this. And now in editorial part @devendra7700 you have change the problem statement example by replacing 5 with 8 and now it is giving the mean value as an integers why u have replace this? https://www.codechef.com/viewsolution/62903077 why did my soln fail ?? Any guide will be appreciated ! my approach is as follows : 1. 
set two pointers, one at 1 and another at 1000
2. make a sliding window of size (n - 1) such that the sum of these consecutive elements in the window is >= (n*x) [will be operating using the above two pointers]
3. Then I will initialize the first n-1 values of my ans array with these values of the sliding window I found.
4. The nth element of the ans array will be (req_sum - tot_sum) where tot_sum = sum of the n-1 elements of the sliding window and req_sum = n*x

Take n=100, x=10: req = 1000, the window sum is (100*99)/2 = 4950, so a[n-1] = 1000 - 4950 = -3950, and a[n-1] < -1000. Every element of your answer array should be in the range -1000 <= a[i] <= 1000.

hey @jatin2929 That was just an example that explains how the mean is calculated. Also, it is clearly mentioned in the problem that X is an integer: "You are given two integers N and X". I think you misread the problem statement.
2 Likes

Can someone tell me for which testcase I am getting a wrong answer? sol

thnx a lot !! got the mistake

https://www.codechef.com/viewsolution/62804926 can someone tell me for which sample testcase my approach would give a wrong answer? I can't seem to find the corner case.

Hey @musharafzm , Your code is failing for the test case
1
1000 14
Here your code prints 77 two times and that is the reason one of the TCs is failing.

https://www.codechef.com/viewsolution/62970920 Can anyone please tell me what's wrong in this code and how I can find out and debug that particular error during a contest?

Hey @codersidhant , Your logic is right but the question also has the condition -1000 <= Ai <= 1000. For the test case
1
7 1000
your code prints an array with elements that do not satisfy the condition. You can debug by giving strict inputs (corner cases) just like this one; always check how your code behaves for the worst-case scenario and you will soon find the error.

@jatin0308_adm But sir, the constraint on X is 0 ≤ X ≤ 100, no? Also, according to the editorial the answer should be 997 998 999 1000 1001 1002 1003, which also does not satisfy the condition.

Hey @codersidhant , yes you are right, x <= 100, my bad. Your code is giving WA on this test case
1
1000 100
printing 10^5, which violates the condition.

@jatin0308_adm Yes, now I got it, thank you very much.

I did not understand the use of "if(n&1)". Is it for odd numbers?

#include<bits/stdc++.h>
using namespace std;
void Solve(vector<int>& v, int n, int x){
    if(n == 0) cout<<0<<endl;
    int res = n*x;
    int coff = 0;
    vector<int> ans;
    if(n>1){
        for(int i = 1; i < n; i++){
            coff += i;
        }
    }
    if(n>0){
        int f = (res-coff)/n;
        int itr = 0;
        while (itr<n) {
            ans.push_back(f);
            f++;
            itr++;
        }
        for(int i = 0; i < n; i++){
            cout<<ans[i]<<" ";
        }
        cout<<"\n";
    }
}
int main(){
    int t;
    cin>>t;
    while (t--) {
        int n,x;
        cin>>n>>x;
        vector<int> v(n);
        Solve(v,n,x);
    }
    return 0;
}

Can anyone explain why I am getting a wrong answer? Thanks in advance. I am using simple math: x + (x+1) + (x+2) + … + (x+n-1) = X;

Yes, "&" is a bitwise AND operator, which checks if a number is odd or even. For example: 3 & 1 = 1 and 4 & 1 = 0.
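For anyone following along, here is a small Python sketch (not the official solution, just an illustration) that puts the editorial's pairing construction and the n & 1 parity check together:

```python
# Build N distinct integers whose mean is X by pairing X-i with X+i,
# appending X itself when N is odd (n & 1 == 1 tests the lowest bit).
def build_array(n, x):
    a = []
    for i in range(1, n // 2 + 1):
        a.extend([x - i, x + i])
    if n & 1:
        a.append(x)
    return a

a = build_array(7, 4)
print(a)                 # [3, 5, 2, 6, 1, 7, 4]
print(sum(a) / len(a))   # 4.0 -> the mean is X
```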
2022-05-27 22:39:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45297396183013916, "perplexity": 5082.829782193646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00773.warc.gz"}
https://atap.gov.au/tools-techniques/distributional-effects/6-commentaries.aspx
# 6. Commentaries ## 6.1 Appendix A Equity analysis in transport practice Many transport plans, strategies and policies articulate equity as a key issue to consider in transport infrastructure investments. Selected examples from different jurisdictions are presented below. For example, in 2003, a draft transport plan from the South Australian Government (SA Government, 2003), noted ‘transport’s contribution to social inclusion through recognition that not all South Australians fare equally and some experience acute and disproportionate disadvantage’. The following groups and issues were identified: • Age (specifically the mobility needs of older people and the young who are especially dependent on public transport and others for transport) • Gender (specifically people who have particular travel needs regarding access to private transport and in patterns of commuting and employment) • People with disabilities • Other socially or economically disadvantaged groups such as indigenous people. Western Australia’s sustainability framework (WA Government, 2003) presented a set of criteria that could be used in the process of sustainability assessment, one of which emphasises increasing ‘access, equity and human rights in the provision of material security and effective choices’. The NSW Government (DIPNR, 2004) has used the objectives of social equity, economic development, environmental protection and financial management to guide transport planning. In relation to transport, these objectives were described as follows: • Social equity reflects access to jobs and services, the affordability of housing and transport, and the provision of transport choice. • Economic development includes creating sustainable jobs, supporting exports, developing regions and minimising the cost of congestion. • Environmental protection includes minimising the environmental impacts of transport on air, water, soils, vegetation and noise. • Financial management includes ensuring taxpayers receive value for money from public investment, and considering inter-generational equity issues such as not overburdening future generations with excessive debts or capital requirements. All these Australian examples emphasise the consideration of equity impacts during strategic planning and decision-making levels. There is scope to further develop and refine methodologies, tools and techniques for equity assessment of transport initiatives. Some progress has been made overseas. The Applied Research Centre in California, for example, has developed guidance for policymakers on preparing an Equity Impact Statement (ARC, 2004). The approach includes identification of the following elements: • Communities of concern (including, for example, gender, income, disability characteristics) • adverse effects (including social, cultural, economic, environmental, individual and cumulative effects) • Key questions that are integrated into the policy-making process that address specific issues such as compliance with legislation, access to livelihood, quality of life and the distribution of the costs and benefits. In Europe, the German transport investment appraisal method is both detailed and explicit in its treatment of distributional effects on different regions within Germany (Bristow and Nellthorp, 2000). A unique feature of the approach is the flexibility to assign extra weight to employment impacts to reflect specific socio-economic conditions within specific regions (Bristow and Nellthorp, 2000). 
These same authors also report that in Finland, distributional effects are assessed and presented as part of a Supplementary Study that is made available to the decision-maker alongside the cost-benefit results and other findings. Bristow and Nellthorp (2000) conclude that in many other EU countries there is little evidence that equity and distributional impacts are given a significant role in the assessment of proposed initiatives and the reporting of results. ## 6.2 Appendix B Techniques to estimate distributional (equity) impacts on the community There are various quantitative and qualitative techniques for considering equity impacts on the community. Some common techniques described in this section include equity indexes and weights, social impact assessment, stated preference surveys and spatial analysis techniques. ### 6.2.1 Equity weights A number of indexes have been developed to measure equity or inequity between groups or populations. This type of analysis is mostly used to estimate income inequity in the population; however, it is also being applied to concepts such as accessibility. The choice of which index to use will depend on the decision-makers’ needs, data availability and the level of development within the community of interest. #### Welfare index Loeis and Richardson (1997) identified a welfare index for use in transport analysis and evaluation. Travel demand estimation and the evaluation of travel proposals often rely on personal or household income as one of the explanatory variables. However, the financial significance of a unit of income varies from person to person and household to household. For example, at the same income level, a small household can buy more for its individual members than a larger household. Consequently, it is by itself not an adequate economic explanatory variable for travel behaviour or evaluation. Loeis and Richardson (1997) developed their ‘Welfare Index’ through practical application of the welfare economics concept of equivalence scales, used in classifying households based on the relative cumulative needs or living costs of their members. Applied in combination with after-tax income, it rates households on relatively uniform standards of financial or welfare capacity. The result is therefore a better explanatory variable for travel behaviour than personal or household income. ##### Equity weights The text in this section is provided as information for the reader only. In line with the advice of the Department of Finance and Administration (2006), the use of equity weights is not recommended by the ATAP (see discussion in chapter 1). Equity weights provide a method of explicitly incorporating concepts of fairness into an economic analysis. Weights express the extent society is prepared to sacrifice efficiency in pursuit of fairness. The greater the equity weight, the more efficiency gain a society is willing to trade-off to achieve improved fairness (Sassi et al, 2001). The underlying assumption to the development and application of such weights is that the concepts of equity and efficiency can be traded off against each other. The application of weights is thus used to effect a balancing of conflicting, but commensurable objectives when making complex resource allocation decisions (Sassi et al, 2001). An example of how to apply equity weights is provided below. Equity weights can be derived from two major sources: the view of a (representative) sample of the population and/or the views of decision-makers (Sassi et al, 2001). 
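To make the arithmetic of equity weighting concrete, a minimal sketch follows. The group weights and benefit changes are invented for illustration only (they simply echo the style of the worked example in Box 1 below) and do not represent ATAP-endorsed values.

```python
# Illustrative only: equity-weighted net benefit across population groups.
# Weights (marginal utility of income) and benefit changes are made-up numbers.
def weighted_net_benefit(groups):
    """groups: iterable of (name, weight, benefit_change) tuples."""
    return sum(weight * delta for _, weight, delta in groups)

groups = [
    ("high income", 0.40, 150.0),   # assumed weight and $ benefit change
    ("low income",  1.25, -40.0),   # assumed weight and $ benefit change
]

unweighted = sum(delta for _, _, delta in groups)  # efficiency-only view
weighted = weighted_net_benefit(groups)            # equity-weighted view

print(f"Unweighted net benefit: {unweighted:+.1f}")       # +110.0
print(f"Equity-weighted net benefit: {weighted:+.1f}")    # +10.0
```

A positive unweighted total can shrink, or even turn negative, once losses to a low-income group are weighted more heavily; this is the efficiency-equity trade-off described in the surrounding text.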
It is important to note that the application of equity weighting can be controversial. Equity weights are subjective and a detailed description of the equity effects should be provided to the decision-maker, who can assess the distributional effects of an initiative. For this reason, equity weights are not often used in practice.

Box 1: How to apply equity weights

Equity weighting is a simple concept. Say that a particular initiative provides benefits for two different population groups A and B. The net benefit is given by: NetB = Wa ΔA + Wb ΔB, where Wa and Wb are the distributional (or equity) weights. In situations where population groups are equal, the weights are set to one. If there are equity differences between population groups (involving, for example, different income distributions) then the 'marginal utility of income' will be different for these two groups. The net benefit in such cases may be defined as: NetB = WH ΔH + WL ΔL, where WH is the marginal utility of income for the high income group and WL is the marginal utility of income for the low income group. If we say that the marginal utility of income is 0.40 for the high income group (i.e. a $1 change in income for this group results in a 0.4 change in economic welfare) and 1.25 for the low income group (i.e. a $1 rise in income causes a 1.25 change in economic welfare), then the net benefit equation becomes: NetB = 0.4 ΔH + 1.25 ΔL (example taken from Sassi et al, 2001).

### 6.2.2 Social Impact Assessment of transport initiatives

Social impacts are the likely consequences for individuals or a community of implementing a particular course of action. It is common practice (and often required by legislation) to undertake a Social Impact Assessment in conjunction with an Environmental Impact Assessment in the process of evaluating major transport initiatives. Social Impact Assessment relates to the identification and assessment of the potential impacts of an initiative on an area and its community. Sinclair Knight Merz (1998) state that a Social Impact Assessment requires:
• A description of the existing and likely future social characteristics of an area
• A description of proposed changes
• An analysis of how these changes will impact on the community at both a broad (regional) level and a local level
• An examination of measures available to ameliorate adverse impacts.

Assessment of social impacts relies on community input to gain an understanding of community concerns, values and aspirations. As such, Social Impact Assessment processes and community consultation are inextricably linked (Sinclair Knight Merz, 1998). The range of social impacts that can result from a transport initiative can be very large. The table below provides some common social impacts of a freeway construction (or extension) initiative.
Table 3: Selection of social impacts to consider when undertaking a road development initiative

| Social impact | Issues to consider |
| --- | --- |
| Displacement or isolation of residents | Adequacy of the compensation and the relocation process, reduced land value, emotional issues including grief |
| Displacement or isolation of commercial and community facilities | Adequacy of the compensation and the relocation process, economic hardships for existing or new businesses, reduced land value, clientele cut off, inaccessibility of services or inconvenience for customers |
| Barrier effects: effects on social interaction | Effects on community cohesion, disruption of friendships or family contact, changes in convenience and travel time |
| Barrier effects: effects on business, recreation or services | Inconvenience, changes to accessibility and travel time |
| Noise effects | Physiological, psychological and social changes due to increased noise levels |
| Safety | Effects on personal, family or child safety on a localised scale, i.e. dependent on proximity to the freeway or changes to traffic conditions in the surrounding area |
| Health effects | Physiological changes resulting from air and water quality |
| Environmental quality effects | Changes in air or water quality as they affect people's lifestyle and enjoyment of their environment, recreation, indoor and outdoor living |
| Land use changes | Changes in zoning from residential to commercial areas or development in a previously undeveloped area, loss of recreational or public space |
| Aesthetics | Changes to visual landscape, physical intrusion, scale, loss of open space, changes in flora or fauna |
| Cultural heritage | Disturbance or destruction of heritage sites |

The data required to facilitate a Social Impact Assessment process are firmly based on community consultation campaigns. Community participation is a major component of Social Impact Assessment. It is useful to begin the participation process early in the planning process and carry on throughout the life of individual initiatives. In many transport agencies, community participation/consultation is also a legislative requirement, meaning that an initiative cannot proceed beyond the planning stage without adequate consultation with the community. The support of the community is also often needed to ensure successful implementation of a transport initiative.

An added complication to impact assessment is that social impacts are classified differently by different practitioners. For example, air pollution is classified as an environmental issue in an Environmental Impact Assessment. A Social Impact Assessment should also include air pollution as a social issue because of its consequences on the health of the community. Air pollution mitigation would also be included in a CBA due to the economic costs of pollution mitigation measures. The practitioner is often faced with a series of complexities inherent in impact assessment statements, which can lead to serious double counting issues in the economic appraisal of initiatives. It is very important to remember that a thorough appraisal should take into account a broad range of social impacts, not just those that are easily quantifiable and monetised such as relocation, pollution mitigation measures and safety, but also those that are more difficult to monetise such as community severance or loss of character or open space.

### 6.2.3 Equity Impact Assessment

Social (equity) Impact Assessment statements consider the winners and losers of the particular initiative investment.
As stated by Levinson (2002), a set of specified (winner and loser) population subgroups would normally be identified. Then the outcomes of the initiative (e.g. travel time and delay, accessibility, consumer surplus, air pollution, noise pollution, accidents) would be assessed for each of these population subgroups. Levinson (2002) provides an Equity Impact Statement checklist as shown below. The checklist includes a range of stratification variables (for example population, gender or spatial extent), specific process requirements (such as the opportunity to participate in decision-making) as well as desired outcome areas (such as mobility, economic, environmental and health outcomes) for transport initiatives.

The checklist crosses the following stratification variables against the process requirement (opportunity to engage in the decision-making process) and the outcome areas (mobility, economic, environmental, health and other outcomes):
• Population
• Spatial (or jurisdictional)
• Temporal
• Modal
• Generational
• Gender
• Racial
• Ability
• Cultural
• Income
Source: Levinson, 2002.

### 6.2.4 Assessing cumulative impacts

The distribution of effects can change over time and through the cumulative effects of successive initiative activities. Transport practitioners involved in equity analysis should therefore be aware of procedures for conducting Cumulative Effects Assessment (CEA) or Cumulative Impact Assessment (CIA). A cumulative impact on a resource is one that results from the incremental impact of an action when added to other past, present and reasonably foreseeable future actions (see below). Cumulative impacts can result from individually minor but collectively significant actions taking place over a period of time. Cumulative impacts may also include the effects of natural processes and events, depending on the specific resource in question (FHWA, undated).

Cumulative impact analysis is resource-specific and generally performed for the environmental resources directly impacted by a government action under study, such as a transportation initiative. However, not all of the resources directly impacted by an initiative will require a cumulative impact analysis. The resources subject to a Cumulative Impact Assessment should be determined on a case-by-case basis early in the process, generally as part of early coordination or scoping (FHWA, undated).

It is generally recognised among practitioners that specific methodologies for the assessment of indirect and cumulative impacts, particularly for predicting reasonably foreseeable impacts, are not as well established or universally accepted as those associated with direct impacts, such as traffic noise analysis or wetland delineation. Determining the most appropriate technique for assessing indirect and cumulative impacts of a specific initiative should include communication with the cooperating agencies and community stakeholders (FHWA, undated).

Figure 2: Cumulative impacts (Source: FHWA, undated)

### 6.2.5 Stated preference surveys

Stated preference surveys are important community consultation tools that are used to inform equity evaluations (e.g. cost-utility analysis). They are particularly useful in situations where empirical information does not exist. For example, stated preference surveys might be used because no data has yet been generated on a new type of travel mode or a special type of pricing instrument with unique characteristics (US EPA, 1998).
In a stated preference approach, it is possible to derive statistical estimates of ‘trade-off’ rates between various alternatives or their attributes by making respondents choose from among them in measured ways that indicate the relative importance of key attributes. These rates can then be assessed in relation to each traveller and their circumstances (US EPA, 1998). The validity of the derived statistical relationships relies on how well the alternatives are portrayed to (and understood by) the respondent, and their comparison with known ‘standards’. While stated preference surveys rely on hypothetical situations, comparison of ‘elasticity’ relationships derived from stated preference with more conventional revealed preference surveys or models have shown corroboration. The results from these surveys should be used with caution, but they offer an important interim tool for agencies to estimate relationships between pricing instruments and travel behaviour response, not just in mode choice but also in relation to destination, time of day, route choice, etc (US EPA, 1998). Stated preference methods were developed by the private market research industry and have been used successfully for many years to aid companies in identifying the critical attributes of their product, and maximising those attributes to gain market share over competitors. Use of the techniques in transport is a fairly recent development; however, there are examples where they have been used to explore time of day choice or assist in the development of a route choice model (US EPA, 1998). ### 6.2.6 Spatial analysis techniques This section discusses the potential of spatially based analysis and micro-simulation modelling to explore distributional or equity issues. #### Spatially based analysis Since transport infrastructure occurs on a spatial scale, it is usually the case that physical or social impacts resulting from transport impacts can also be quantified over a spatial scale. This is most commonly undertaken with Geographic Information Systems (GIS) technology which is now readily available and widely used to quantify various effects; for example, emission of environmental contaminants or noise modelling. Most transport impacts have a geographical component; for example, property prices can be easily represented in geographic form. Once the distributional impact is defined over a geographical scale, relevant socio-economic characteristics need to be transposed onto the geographical representation of the impact. Some of these characteristics will be derived from a community social profile. Due to the aggregate nature of common data sources on population characteristics (such as the census), Statistical Local Area or Local Government Area population characteristics are generally used as a proxy for specific groups being examined. For example, if concern is expressed over impacts on low income or minority populations, the impacts are measured for neighbourhoods that exceed a certain percentage of those population groups, rather than for specific minority persons or households. This provides the decision-maker with a representation of the distributional effects of initiatives on the communities of interest, i.e. the ‘winners’ and ‘losers’. The biggest problem with spatial techniques is that some factors that affect impact distribution are difficult to determine. It is often difficult to identify the geographic location of a population class according to social characteristics. 
An additional complicating factor is that people’s decisions about where they live may be affected by transportation investments. For example, positive externalities such as good public transport or highway access can lead to higher property values and a migration of higher-income people to the area served (FHWA, 2003). ##### Micro-simulation Micro-simulation modelling techniques forecast travel by modelling a set of actual or synthetic individuals or households that represent the population as the basic unit of analysis rather than dealing with population averages by postcode or statistical region. A ‘synthetic’ sample is composed of a hypothetical set of people or households with characteristics that as a whole match the overall population. Results are aggregated only after the individual or household analyses are completed, allowing the user great flexibility in specifying output categories. This is more commonly referred to as sample enumeration or discrete choice analysis. Sample enumeration relies on the modelling of behaviour for a representative sample of the population generally derived from a regional home interview survey or stated preference survey (FHWA, 2003). The benefit of this modelling approach for analysing distribution of impacts is that travel patterns, and therefore the travel benefits of transportation improvements, can be tracked across any population characteristic that is included in the sample of persons modelled. Historically, this has been done by income level, since income is commonly used to predict travel behaviour. However the characteristics of the sample can be broadened to include other attributes (FHWA, 2003). An example of a micro-simulation program from the United States (STEP) program is presented below. ##### STEP: a micro-simulation program STEP is a travel demand analysis package composed of an integrated set of travel demand and activity analysis models, supplemented by a variety of impact analysis capabilities and a simple model of transportation supply. STEP has been used by the US Department of Transport and the US Environmental Protection Authority to analyse travel impacts of pricing scenarios (with the intention to reduce transport emissions) by income group. STEP program models are applied using actual or forecast data on household socioeconomic characteristics, the spatial distribution of population and employment (land use), and transportation system characteristics for the selected analysis year(s). STEP reads through the household sample, attaching level-of-service and land use data to each household record as necessary. For each household, STEP uses its models to predict a daily travel and activity pattern for each individual in the household. Finally, household travel is summed up and household totals are expanded to represent the population as a whole. Testing the effect of a change in conditions or policies is a simple matter of re-analysing the household sample using the new data values, and comparing the results with previous outputs. For example, a new highway or new transit service can be represented by changed travel times and costs for the areas served; a parking price increase can be represented by an increase in out-of pocket costs; an increase in income in a particular area or for a particular population subgroup can be represented by editing the household file to incorporate the revised incomes. 
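A minimal sketch of the sample-enumeration idea described above is given below. The household records, the cost figures, the decision rule and the road charge are all invented for illustration; they are not part of the STEP model itself.

```python
# Illustrative sample enumeration: re-analyse the same household sample under a
# policy change (a flat road charge) and tabulate car use by income group.
# All numbers are hypothetical.
households = [
    # (income_group, expansion_weight, daily_car_cost, daily_transit_cost)
    ("low",    500, 3.8, 4.0),
    ("low",    500, 3.9, 4.2),
    ("middle", 800, 4.0, 5.5),
    ("high",   300, 4.0, 6.5),
]

def chooses_car(car_cost, transit_cost, road_charge=0.0):
    # Toy decision rule: drive only while driving remains the cheaper option.
    return car_cost + road_charge <= transit_cost

def car_share_by_group(road_charge):
    totals, car_users = {}, {}
    for group, weight, car_cost, transit_cost in households:
        totals[group] = totals.get(group, 0) + weight
        if chooses_car(car_cost, transit_cost, road_charge):
            car_users[group] = car_users.get(group, 0) + weight
    return {g: round(car_users.get(g, 0) / totals[g], 2) for g in totals}

print("Base case:        ", car_share_by_group(road_charge=0.0))
print("With $0.50 charge:", car_share_by_group(road_charge=0.5))
```

Re-running the same household sample under different charge levels mirrors the scenario-testing workflow described for STEP.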
The sampling framework preserves the richness of the underlying distribution of population characteristics and permits tabulation by any subgroup with sufficient observations to be statistically significant. For example, the results can be disaggregated by income level and age, which would allow an assessment of effects for, say, various income classes among the retired population. This is a significant advantage over an aggregate model, which uses zonal averages for most socioeconomic and economic data. A possible STEP model structure is illustrated below.

Figure 3: STEP model structure (Source: US EPA, 1998)

## 6.3 Appendix C Community participation processes

There are varying degrees of public participation, from information provision and consultation to substantial support for community initiatives (see figure below). Higher degrees of participation are not necessarily 'better'; different levels are appropriate for different situations and interests (Wilcox, 1994). The most commonly applied form of participation is community consultation.

Figure 4: Levels of community participation (Source: Adapted from Wilcox, 1994)

The desired level of participation needed for an initiative will inform the selection of participatory methods and techniques. Choice of method should directly reflect the type of information needed and the purpose for which it will be used. The following table provides common purposes for which community input is sought and the methods generally effective in achieving the task.

Table 5: Matching participatory instruments to purpose

| Purpose | Participatory approach | Characteristics | Participants |
| --- | --- | --- | --- |
| To gain ideas and input from the public | Public hearing / community meeting | A public hearing is often formal, with statements going into an official record of the meeting. A community meeting will often be an informal gathering where people come to share ideas with local officials. | An open gathering of people from the community who wish to be heard about a topic or issue |
| To gain ideas and input from the public | Focus groups | A small gathering of stakeholders who meet in a confidential setting to discuss an issue or react to a proposal. The assumption is that through discussion, new information will emerge that would not otherwise come to light from individual questioning. These meetings are often facilitated by a trained individual. Local officials may or may not actively participate in the discussion. | Selected stakeholders |
| To complete a specific task with citizen input | Workshop | A meeting focused on a predetermined task to be accomplished. Rather than soliciting general opinion, workshops are intended to focus on specific concerns and produce a predetermined product. The benefit of such meetings is that those most directly affected by an issue are directly involved in addressing it. | Primary stakeholders are often involved because of a high level of interest in the issue. To be most effective in addressing a public issue, the full range of interests should be represented in the workshop |
| To complete a specific task with citizen input | Task force | Purpose is to complete a clearly defined task in the planning process. A task force is often appointed to study a particular issue and offer a report of findings and recommendations to the policy-making body. | A small (usually 8 to 20 people) ad hoc citizen committee |
| To have a discussion about citizen priorities associated with community initiatives | Priority-setting committee | Citizen group appointed to advise local officials regarding citizen ideas and concerns in planning community initiatives. | Participants who are trusted to represent the concerns of citizens and sometimes function as a 'go-between' with residents and local government |
| To discuss citizen priorities associated with community initiatives | Delphi procedure | The objective is to work toward a consensus of opinion that can be used by policymakers for decision making. Successive rounds of presented arguments and counterpoints move the group toward consensus, or at least to clearly established positions and supporting arguments. | A panel of citizens chosen for their knowledge about an issue |
| To quickly and quietly ascertain public sentiment about an issue | Interviews, polls, and surveys | Detailed information can be gathered. While confidential, the information can be informative both in content and overall emotional/political reaction to an issue. | Interested citizens are given a chance to speak directly with someone about their views |
| To gain input about the alternatives and consequences of an issue | Media-based issue balloting | Coupled with a media-based effort to discuss alternatives and consequences of potential solutions, letters to the editor or radio call-in shows are monitored to gain a sense of public reaction. Unscientific and not a reliable indicator of overall community sentiment, it can be a good way to gain a quick reaction to proposals by those most likely to be active on an issue. | Citizens are asked to respond through the local media |
| To give citizens broad decision-making powers | Citizen advisory boards or councils | An advisory board studies an issue and makes recommendations to policy makers. The range of decision-making authority can vary and, in some cases, may be binding. | Appointed representatives of one or more community interests |
| To give citizens broad decision-making powers | Referenda | Direct and binding decision-making authority by the electorate. Protracted campaigning leading to a referendum can become a divisive force. | All eligible voters |
| To stay informed about the needs of certain neighbourhoods or interest groups | Group or neighbourhood planning council | This council serves as advisory to policy makers. Such councils keep decision makers informed about neighbourhood or group concerns, formulate goals and priorities on behalf of the neighbourhood or group, and evaluate plans and programs affecting the neighbourhood or group. | Organised by, and composed entirely of, citizens |

Source: Adapted from Leatherman and Howell, 2000

## 6.4 Appendix D A distributional rules approach

Khisty (1996) attempts to draw analytical conclusions about equity effects by using distributional rules or theories of justice that can be applied depending on the outcomes sought. In this approach, the analyst needs to determine which analytical framework is the most appropriate for the situation under investigation. This involves the application of different equity principles or theories to determine the types of outcomes that are possible or desirable. Theories of justice are used as input in the development of decision-making procedures. There is no one single theory of justice that will satisfy everyone. For example, Khisty (1996) provides the following six theories of justice, chosen because they represent ideas that are either commonly used, understood by society or documented in the literature. To illustrate how theories of justice can be applied, Khisty (1996) developed an example of a hypothetical city showing six alternative bus configurations (1-6), as illustrated below.
The income distribution (expressed from 'low' to 'high') on the route alternatives is then overlaid on the area map. Each alternative satisfies the goals and objectives set forth by the citizens of the city, and in each case the aggregate benefits exceed the aggregate costs.

Figure 5: A hypothetical city showing six bus transit configurations (Source: Khisty, 1996)

There are five major socio-economic groups in the city and their population percentages are indicated in the table below. It is assumed that each group contributes taxes to the city in proportion to their income. The amount indicated under each alternative (1-6) represents units of benefit that each individual would receive.

| Income class | % population | Alt 1 | Alt 2 | Alt 3 | Alt 4 | Alt 5 | Alt 6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| High | 5% | 6 | 9 | 25 | 28 | 35 | 50 |
| Medium high | 10% | 7 | 11 | 22 | 25 | 30 | 40 |
| Medium | 50% | 9 | 11 | 19 | 20 | 25 | 30 |
| Medium low | 25% | 10 | 10 | 16 | 15 | 15 | 10 |
| Low | 10% | 12 | 9 | 13 | 12 | 10 | 6 |
| Total net benefits | | 920 | 1045 | 1825 | 1885 | 2200 | 2450 |
| Average net benefit | | 9.20 | 10.45 | 18.25 | 18.85 | 22.00 | 24.50 |
| Floor | | 6 | 9 | 13 | 12 | 10 | 6 |
| Range | | 6 | 2 | 12 | 16 | 25 | 44 |

Source: Khisty (1996)

Given the details of the initiative, the question is: which of the six alternatives is the most equitable? The answer to this question depends on which distributional rules or equity principles the decision-maker adopts. Khisty (1996) provides the implications for route selection based on each of the six equity principles:

• Equal share distribution (distribution based on an equal share, or as equal as possible, of the benefits among the socioeconomic groups). Alternative 2 is most consistent with this principle, with the minimum range between the highest and the lowest benefit received being 2 units and an average net benefit received of 10.45 units.
• Utilitarian distribution (distribution based on maximising the benefits to the community as a whole). Alternative 6 is most consistent with this principle. While the disparity between high-income and low-income groups is glaring, this alternative has the highest net benefit among all the alternatives.
• Distribution based on maximising the average net benefit with a minimum floor benefit of 10 units (this principle ensures that an attempt to maximise the average benefit is constrained so that certain individuals or groups, particularly those 'at the bottom', receive a certain minimum amount of benefit). Alternative 5 is consistent with this principle. The choice of a minimum floor is a decision that must be made in advance by the decision-maker. This principle also illustrates the nature of an efficiency-equity trade-off; the principle is achieved with a reduction in total net benefits of 250 units compared with the maximum efficiency alternative.
• Distribution based on maximising the average net benefit with a benefit range constraint not exceeding 16 units (this principle ensures that an attempt to maximise the average benefit does not allow differences in benefit between the rich and the poor segments of the society to exceed a certain amount). Alternative 4 is consistent with this principle. As above, an efficiency-equity trade-off is apparent. In this case, 565 units of net benefit need to be traded off.
• Distribution based on the egalitarian principle (this principle of ethical conduct attempts to reduce any existing social or economic inequalities among individuals and groups in the community). Alternative 1 distributes higher benefits to the lower end of the income distribution and is therefore consistent with the egalitarian principle.
Although this alternative has the lowest total benefit of all alternatives, it probably benefits income groups that are truly in need of public transportation. • Rawls’ theory of justice (distribution based on maximising benefits to the lowest income group). Alternative 3 is consistent with this principle. It also has the highest floor among the alternatives, but indicates a need of 625 units of net benefit to be traded for the desired equity outcome. Which distribution theory to use will depend on the policy-maker and the characteristics of the community that is represented. Invariably, when people are affected by the choice of distribution rules, or when they are offered several rules from which to choose, they tend to prefer the rule that favours them. Preferences are a function of culture, political affiliations, gender, economic standing and so forth (Khisty, 1996). Khisty (1996) suggests that citizens are generally not bothered by ethical theories as much as they are concerned with their own welfare in terms of ‘quality of life’. Therefore, Khisty defines ‘quality of life’ as the essence of the collective economic, social and physical conditions of people in a community. It is important to recognise that these are highly subjective choices. They involve trade-offs between, on the one hand, the efficiency focus of increasing the net benefits to society as a whole and, on the other hand, striving for more equitable outcomes. For transport and infrastructure planners and analysts, it is also essential to note that in Australia the taxation and welfare system is the prime policy tool for addressing issues of inequality. ## 6.5 Appendix E Equity Considerations in road pricing This section provides a discussion of equity issues associated with road pricing. The purpose is to illustrate how equity considerations are a key component of transport policy decision making. An example of providing for equity in road pricing is provided from the European Communities’ AFFORD Project. Over recent years, the concept of road pricing has been gaining momentum due to concerns about road capacity and congestion management. However, there is still a great deal of controversy surrounding the wider introduction and application of road pricing. As stated by Stough et al (2004), there are misunderstandings over what road pricing seeks to do, concerns over how the revenues will be spent and issues relating to welfare distribution (equity) consequences. Road pricing is intended to improve transport efficiency by rationing road capacity. In terms of reducing travel demand and making traffic flow more efficient, it does not matter how road pricing revenue is allocated. From an overall economic perspective, the revenue must be used to benefit society and the greater the benefit the more economically efficient the program. There is no requirement, however that the money be allocated in any particular way (Litman, 1999). The major equity consideration of road pricing concerns the distribution of road pricing revenue. Two components of equity that need to be considered regarding road pricing are horizontal equity and vertical equity. Many people instinctively feel that horizontal equity implies that revenues should be dedicated to road improvements or to provide other benefits to people who pay the fee. However, horizontal equity is complicated by the existence of external costs – those that are borne by non-vehicle users (see table below). 
So horizontal equity is only fulfilled when revenue is returned to vehicle users as a class, but only after external costs are compensated. Since most estimates of motor vehicle external costs are larger than the expected revenue of road pricing proposals, the horizontal equity justification for returning revenues to drivers is reduced or eliminated (Litman, 1999).

The vertical equity component is more complex. Vertical equity requires that disadvantaged people receive more public resources (per capita or per unit of service) than those who have a relative advantage, to accommodate their greater need. So revenues must benefit low-income drivers as a class at least as much as the costs they bear, and disadvantaged residents (including non-drivers) must benefit overall. Litman (1999) explains that vertical equity can be defined with respect to the ability to drive. As a class, non-drivers tend to be economically or socially disadvantaged. Road pricing has the potential to benefit non-drivers overall by increasing the use of alternative travel modes. Vertical equity considerations justify using road pricing revenue in a broad range of ways, including the support of alternative transport programs, reduction in taxes, or funding of public services that benefit disadvantaged populations.

The table below illustrates an approach developed by Litman (1999) to assess the distribution of road pricing revenues to four classes of people based on horizontal and vertical equity considerations.

Table 7: Road pricing revenue distribution equity analysis

| Class | Description | Horizontal equity | Vertical equity |
| --- | --- | --- | --- |
| Non-drivers | People who cannot drive, usually due to age, disability, or low income. Non-drivers use automobiles as passengers, but their overall use of congested roads is typically low. | Although this group would pay little in road pricing, they deserve a share of revenue if it is considered compensation for existing external impacts of driving. | Non-drivers include many people who are economically, physically and socially disadvantaged; therefore, maximum use of road pricing revenues to benefit this group is justified. |
| Low-income drivers | People who can drive and have access to an automobile, but whose travel decisions are significantly affected by vehicle expenses. They will frequently be tolled off by road pricing. | This group pays a relatively small share of road pricing fees, but incurs costs from travel charges that provide a large portion of congestion reduction benefits. They deserve a share of toll revenues in compensation. | This group is, by definition, disadvantaged, so use of road pricing revenues to benefit this group is justified. |
| Middle-income drivers | People who can drive and have access to an automobile, but whose travel decisions are only moderately affected by vehicle expenses. They will sometimes be tolled off the roadway and their net benefits of travel are reduced by road pricing. | These drivers pay a large portion of total road pricing and lose net benefits. They deserve to benefit from road pricing revenues on the basis of horizontal equity, but only after all external costs are compensated. | Since this group is not disadvantaged, there is no vertical equity justification for using road pricing revenue to benefit them. |
| Upper-income drivers | People who can drive and have access to an automobile, but whose travel decisions are not affected by vehicle expenses. They benefit overall from road pricing due to reduced congestion. | These people enjoy net benefits from reduced congestion. They deserve a share of the revenue only after external costs are compensated. | Since this group is not disadvantaged, there is no vertical equity justification for using road pricing revenue to benefit them. |

Source: Litman, 1999

### 6.5.1 Road use charges: an example from the AFFORD project

The European Commission undertook a study of marginal cost transport pricing in three European cities – Helsinki, Oslo and Edinburgh – as part of the 'Acceptability of Fiscal and Financial Measures and Organisational Requirements for Demand Management' (AFFORD) study (Fridstrom et al, 2000). The study distinguished between 'first-best' and 'second-best' road pricing policy packages. The first-best solution involves charging the user the true cost, i.e. the marginal cost of road use determined by the level of congestion, environmental and accident costs. The second-best pricing package was based on the use of a package of policy instruments that are available for use by transport authorities (e.g. time-differentiated cordon toll rates or time-differentiated parking charges) (Fridstrom et al, 2000).

The study concluded that inequity within a population increased when road pricing is implemented (based on a Gini coefficient defined in terms of household income per consumption unit before and after revenue redistribution). However, in most cases the changes to income distribution appeared to be relatively moderate. Fridstrom et al (2000) noted that if revenue is redistributed proportionately by personal income, given as a percentage point relief in the income tax rate, it does nothing to correct the initial, adverse equity effects between people in the different income brackets. It does, however, reverse the potentially unpopular transfer of funds from private consumers to the public treasury. However, if the same absolute amount of money is redistributed to each adult individual (a 'poll transfer' or 'flat distribution'), income inequity in the population improves considerably. According to model simulations, this is because the out-of-pocket expenditure on road charges represents a higher share of household income in low income groups than among the more affluent. Both of these scenarios represent clear trade-offs between equity and efficiency: equity can be improved by redistribution, but only at the expense of the efficiency gains from the road pricing strategy.

Fridstrom et al (2000) suggest that in principle it is possible to conceive of a road-pricing scheme with revenue redistribution that enhances economic efficiency as well as equity. It will usually be sufficient to redistribute a certain component of the revenue generated in a progressive manner, in order to keep the less affluent households at least equally well off. The main reason why such road pricing schemes do not lead to any deterioration in income distribution is that the more affluent people, exhibiting higher rates of car ownership and use, tend – in general – to incur higher road pricing expenditure.

### 6.5.2 Non-pricing mechanisms for providing equity in road use

While road pricing is one method of rationing road capacity, there are other transport demand management mechanisms that do not involve pricing. These include priority measures such as high occupancy vehicle lanes and alternative rationing schemes. Travel behaviour change initiatives are another non-pricing mechanism to improve equity by encouraging more efficient modes of transport and better access for people without a vehicle.
These measures are aimed at reducing total vehicle traffic and encouraging the use of efficient modes. Many of these strategies support equity objectives by improving travel choices/alternatives or affordability, especially for low income or mobility-disadvantaged groups (Litman, 2000). Australia currently has a number of high occupancy vehicle lanes, commonly referred to as ‘transit lanes’. Transit lanes provide travel priority by allowing specified users (usually two or more people per private vehicle and public transport vehicles) exclusive use of part of the roadway to travel through congested sections of road. Transit lanes provide a high degree of horizontal equity (because they do not discriminate in regard to who can participate). This option benefits all existing users, especially public transport users by reducing travel times. Road rationing schemes designate a certain percentage of the travelling population to use a road link on certain days or times of day. Those who have not been designated to use the road link at a particular time may still do so upon payment of a toll. Rationing schemes have been applied in many countries, for example in Athens and several Brazilian cities, with varied results. In these cities, access prohibitions have led to increased multiple car ownership and average fleet age, and after some years they lose their effectiveness (Viegas, 2001). Because of these results Viegas (2001) suggests that the ‘ration’ should be attributed to individuals, not to vehicles, so it is useable for driving and for riding on public transport (this also serves as an incentive to shift to public transport). Attributing the ration to individuals instead of vehicles prevents misuse of the system by those who own more than one car (Viegas, 2001). Nevertheless, rationing schemes are associated with high administration costs and are open to abuse by both users and administrators. These are ‘second-best’ options because of administrative, spatial or other deficiencies. However, under certain scenarios, they provide a valid response to tackling complex equity issues.
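As a small numerical companion to the AFFORD discussion in section 6.5.1, the sketch below compares a flat 'poll transfer' of road-charge revenue with a redistribution proportional to income, using a Gini coefficient as the inequality measure. All incomes and charges are hypothetical.

```python
# Illustrative only: how two revenue-recycling rules affect income inequality.
def gini(values):
    """Gini coefficient of a list of non-negative incomes."""
    values = sorted(values)
    n = len(values)
    weighted_sum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted_sum) / (n * sum(values)) - (n + 1) / n

incomes = [20_000, 30_000, 45_000, 70_000, 120_000]   # hypothetical households
charges = [900, 1_000, 1_200, 1_500, 2_000]           # hypothetical road charges paid
revenue = sum(charges)

after_charge = [y - c for y, c in zip(incomes, charges)]

# (a) 'poll transfer': every household receives the same rebate
flat = [y + revenue / len(incomes) for y in after_charge]

# (b) proportional rebate: revenue returned in proportion to original income
proportional = [y + revenue * inc / sum(incomes)
                for y, inc in zip(after_charge, incomes)]

print(f"Gini, original incomes:      {gini(incomes):.4f}")
print(f"Gini, charged, flat rebate:  {gini(flat):.4f}")
print(f"Gini, charged, prop. rebate: {gini(proportional):.4f}")
```

With regressive charges like these, the flat rebate produces a lower Gini coefficient than the proportional rebate, mirroring the qualitative pattern reported by Fridstrom et al (2000).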
2018-02-25 09:15:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2593521773815155, "perplexity": 2545.6558784235854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816351.97/warc/CC-MAIN-20180225090753-20180225110753-00410.warc.gz"}
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/ef/language-reference/navigate-entity-sql
# NAVIGATE (Entity SQL)

Navigates over the relationship established between entities.

## Syntax

navigate(instance-expression, [relationship-type], [to-end [, from-end] ])

## Arguments

instance-expression: An instance of an entity.
relationship-type: The type name of the relationship, from the conceptual schema definition language (CSDL) file. The relationship-type is qualified as <namespace>.<relationship type name>.
to: The end of the relationship.
from: The beginning of the relationship.

## Return Value

If the cardinality of the to end is 1, the return value will be Ref<T>. If the cardinality of the to end is n, the return value will be Collection<Ref<T>>.

## Remarks

Relationships are first-class constructs in the Entity Data Model (EDM). Relationships can be established between two or more entity types, and users can navigate over the relationship from one end (entity) to another. from and to are conditionally optional when there is no ambiguity in name resolution within the relationship. NAVIGATE is valid in O and C space.

The general form of a navigation construct is the following:

navigate(instance-expression, relationship-type, [ to-end [, from-end ] ] )

For example:

Select o.Id, navigate(o, OrderCustomer, Customer, Order) From LOB.Orders as o

Here, OrderCustomer is the relationship, and Customer and Order are the to-end (customer) and from-end (order) of the relationship. If OrderCustomer were an n:1 relationship, the result type of the navigate expression would be Ref<Customer>. The simpler form of this expression is the following:

Select o.Id, navigate(o, OrderCustomer) From LOB.Orders as o

Similarly, the navigate expression in a query of the following form would produce a Collection<Ref<Order>>:

Select c.Id, navigate(c, OrderCustomer, Order, Customer) From LOB.Customers as c

The instance-expression must be an entity/ref type.

## Example

The following Entity SQL query uses the NAVIGATE operator to navigate over the relationship established between the Address and SalesOrderHeader entity types. The query is based on the AdventureWorks Sales Model. To compile and run this query, follow these steps:
1. Follow the procedure in How to: Execute a Query that Returns StructuralType Results.
2. Pass the following query as an argument to the ExecuteStructuralTypeQuery method:

SELECT address.AddressID, (SELECT VALUE DEREF(soh)
2019-08-18 09:59:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22785325348377228, "perplexity": 9644.140470926626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313747.38/warc/CC-MAIN-20190818083417-20190818105417-00526.warc.gz"}
https://www.biostars.org/p/243104/
TCGA/GDC FPKM vs FPKM-UQ

Asked 4.6 years ago by igor (12k)

GDC provides RNA-seq quantification in multiple forms: For mRNA-Seq data, the GDC generates gene level and exon level quantification in Fragments Per Kilobase of transcript per Million mapped reads (FPKM). To facilitate cross-sample comparison and differential expression analysis, the GDC also provides Upper Quartile normalized FPKM (UQ-FPKM) values and raw mapping count.

I tried downloading both FPKM and FPKM-UQ data for the TCGA-GBM dataset. The distributions of FPKM-UQ values look more comparable across samples than for FPKM values, which makes sense. The sums of FPKM values for each sample range from 200k to 318k, so the highest sample has about 60% more. For FPKM-UQ, the sums range from 4x10^9 to 9x10^9, so the highest sample is more than double the lowest. UQ normalization actually increases that difference. Does that imply that the total number of transcripts is 2x more in some samples compared to others?

gdc rna-seq • 6.4k views

Comment: Hi Igor! Did you ever find a plausible answer to this issue of yours? I know it has been a long time, but I am facing the same problem and I cannot find any good source about this. Cheers

Answer (solo7773, 3.0 years ago):

First of all, let's find out how FPKM and FPKM-UQ are calculated (https://docs.gdc.cancer.gov/Encyclopedia/pages/HTSeq-FPKM-UQ/):

FPKM = [RMg * 10^9] / [RMt * L]
RMg: the number of reads mapped to the gene
RMt: the total number of reads mapped to protein-coding sequences in the alignment
L: the length of the gene in base pairs

FPKM-UQ = [RMg * 10^9] / [RM75 * L]
RMg: the number of reads mapped to the gene
RM75: the number of reads mapped to the 75th percentile gene in the alignment
L: the length of the gene in base pairs

Here we can see the only difference is the divisor, which is RMt for FPKM and RM75 for FPKM-UQ.

To gabriel.rosser: the factor is 10^9 in both, not changed. In both the FPKM matrix and the FPKM-UQ matrix, every column (all genes of a sample) is divided by a constant factor (either RMt or RM75). Therefore, in the quotient matrix, the column values are the same, which is consistent with gabriel.rosser's explanation as well.

To igor's question: RM75 can be much smaller than RMt because RM75 is only the reads mapped to the 75th percentile gene within a sample. Imagine a numerical vector of length 100 where the first 75 elements have value 1 and elements 76 to 100 have value 1,000,000. Applied to our case, that means RM75 is 1 while RMt (the sum) is over 25,000,000. As a result, FPKM and FPKM-UQ can be dramatically different. So when the genes of one sample are divided by a small RM75 but the genes of another sample are divided by a big RM75, and you then sum within each sample and compare the sums between samples, you will see what you've seen.

Answer (3.0 years ago):

This blog post and this discussion are helpful when considering the difference. Summarising the former: to compute FPKM (or RPKM) from raw counts, first divide by the total read count, then by a constant factor, then by gene size. Typically, the total read count is just the sum of all the reads. However, in the FPKM-UQ data, the total read count is estimated as the 75th percentile read count. This will be a lot smaller than the sum of the reads and more robust to outliers(?) Given the factor of 10^6 difference, I also suspect they've changed the constant factor.
Having said that, I can't reproduce the results, because the FPKM values come from a different pipeline to the HT-Seq raw counts - so performing the aforementioned steps on the counts data does not reproduce the FPKM values. However, dividing the FPKM matrix by the FPKM-UQ matrix returns values that are constant down the columns (i.e. a single value per sample for all genes). This is consistent with my explanation.
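To see the divisor difference in numbers, here is a minimal NumPy sketch using made-up counts and gene lengths rather than real GDC data; excluding zero-count genes before taking the 75th percentile is an assumption about the exact GDC recipe, while the rest follows the formulas quoted above.

```python
import numpy as np

# Toy counts for 6 protein-coding genes in one sample, plus gene lengths (bp).
# Purely illustrative values, not real GDC data.
counts  = np.array([   5,   10,   20,   40,   80, 4000], dtype=float)
lengths = np.array([1500, 2000, 1200, 3000, 2500, 1800], dtype=float)

rmt  = counts.sum()                            # divisor for FPKM
rm75 = np.percentile(counts[counts > 0], 75)   # divisor for FPKM-UQ (assumed recipe)

fpkm    = counts * 1e9 / (rmt  * lengths)
fpkm_uq = counts * 1e9 / (rm75 * lengths)

# Both vectors are the same counts/length profile scaled by one per-sample
# constant, so their ratio is the same for every gene in the sample ...
print(np.unique(np.round(fpkm / fpkm_uq, 12)))  # single value: rm75 / rmt

# ... but the sums respond very differently: a handful of very highly
# expressed genes dominates RMt while barely moving RM75, so FPKM-UQ sums
# can diverge between samples even when FPKM sums look similar.
print(fpkm.sum(), fpkm_uq.sum())
```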
2021-10-20 21:13:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5286226868629456, "perplexity": 2620.9905185854336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00643.warc.gz"}
https://connolly.tech/alevelnotes/maths/4.2-normaldistribution/
# Normal Distribution

• Half the data is on the left of the peak, half on the right
• Mean, median and mode are all the same value
• Can't find a value for $P(X=n)$ for any single $n$: probabilities are areas under the curve, and the area over a single point is zero
• Curve has points of inflection one standard deviation from the mean
• This means the curve changes concavity
• Standard Deviations:
• within $1\times\sigma$ of the mean: 68% of the data
• within $2\times\sigma$: 95% of the data
• within $3\times\sigma$: 99.7% of the data

## Syntax

$X \sim N(\mu, \sigma^2)$

• $\mu$ represents the center (the mean)
• $\sigma^2$ is the variance, the square of the standard deviation $\sigma$
• Not to be confused with the binomial notation $X \sim B(n,p)$, where $n$ is the number of trials and $p$ the probability of success

## Example

1. Diameters of a rivet modelled by $X \sim N(8, 0.2^2)$
a) Find $P(X>8)$: 50%
b) Find $P(7.8 < X < 8.2)$: within 1 sd, so 68%
2. Criteria for joining Mensa is an IQ of at least 131. Assuming that IQ has the distribution $X \sim N(100, 15^2)$ for a population:
a) What percentage of people are eligible? $P(X \ge 131)$ = 1.9%
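These figures are easy to sanity-check numerically. A minimal sketch using SciPy (assuming scipy is installed):

```python
from scipy.stats import norm

# Mensa example: X ~ N(100, 15^2); scipy is parameterised by the standard deviation.
iq = norm(loc=100, scale=15)
print(1 - iq.cdf(131))                  # P(X >= 131) ~ 0.0194, i.e. about 1.9%

# Rivet example: X ~ N(8, 0.2^2)
rivet = norm(loc=8, scale=0.2)
print(1 - rivet.cdf(8))                 # P(X > 8) = 0.5
print(rivet.cdf(8.2) - rivet.cdf(7.8))  # within 1 sd ~ 0.683

# The 68 / 95 / 99.7 rule on a standard normal
std = norm(0, 1)
for k in (1, 2, 3):
    print(k, std.cdf(k) - std.cdf(-k))  # ~ 0.683, 0.954, 0.997
```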
2020-08-07 20:40:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7781084775924683, "perplexity": 951.7123870931566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737225.57/warc/CC-MAIN-20200807202502-20200807232502-00385.warc.gz"}
https://www.featool.com/model-showcase/02_heat_transfer_03_heat_transfer3/
# Shrink Fitting of an Assembly

## Model Data

Type: heat transfer
Physics Modes: heat transfer
Keywords: cooling, multi domain

FEATool supports modeling heat transfer through both conduction, that is heat transported by a diffusion process, and convection, in which heat is transported through a fluid by a velocity field. The heat transfer physics mode supports both of these processes, and defines the following equation

$$\rho C_p\frac{\partial T}{\partial t} + \nabla\cdot(-k\nabla T) = Q - \rho C_p\mathbf{u}\cdot\nabla T$$

where $\rho$ is the density, $C_p$ the heat capacity, $k$ the thermal conductivity, $Q$ a heat source term, and $\mathbf{u}$ a vector valued convective velocity field.

This example models heat conduction in the form of transient cooling for shrink fitting of a two part assembly. A tungsten rod heated to 84 °C is inserted into a chilled steel frame part at -10 °C. The time when the maximum temperature has cooled to 70 °C should be determined. The assembly is cooled due to convection through a surrounding medium kept at T_inf = 17 °C, with a heat transfer coefficient of h = 750 W/(m² K). The surrounding cooling medium is not modeled directly, and the convective term is therefore omitted, but its effect is incorporated into the model through natural convection boundary conditions.

This model is available as an automated tutorial by selecting Model Examples and Tutorials… > Heat Transfer > Shrink Fitting of an Assembly from the File menu. Or alternatively, follow the linked step-by-step instructions.
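The tutorial itself solves the full finite-element conduction problem on the assembly geometry. As a rough sanity check of the time scale only, a lumped-capacitance (Newton cooling) estimate for the rod alone can be sketched in a few lines; the rod dimensions and tungsten property values below are assumptions for illustration, and the chilled steel frame is ignored entirely, so this is not the tutorial's result.

```python
import numpy as np

# Lumped-capacitance estimate: T(t) = T_inf + (T0 - T_inf) * exp(-h*A*t / (rho*cp*V))
h      = 750.0     # heat transfer coefficient, W/(m^2 K)  (from the model)
T_inf  = 17.0      # surrounding medium temperature, deg C (from the model)
T0     = 84.0      # initial rod temperature, deg C        (from the model)
T_goal = 70.0      # target temperature, deg C

rho, cp = 19_300.0, 134.0   # tungsten density (kg/m^3) and heat capacity (J/(kg K)), assumed
r, L    = 0.01, 0.10        # assumed rod radius and length, m

V = np.pi * r**2 * L                        # rod volume
A = 2 * np.pi * r * L + 2 * np.pi * r**2    # exposed surface area

tau  = rho * cp * V / (h * A)                            # thermal time constant, s
t_70 = tau * np.log((T0 - T_inf) / (T_goal - T_inf))     # time to cool to T_goal
print(f"time constant ~ {tau:.1f} s, rod alone reaches 70 C after ~ {t_70:.1f} s")
```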
2019-04-24 09:50:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5778911113739014, "perplexity": 1215.5505200517698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578640839.82/warc/CC-MAIN-20190424094510-20190424120510-00407.warc.gz"}
https://www.nag.co.uk/numeric/fl/nagdoc_latest/html/e04/e04wcf.html
# NAG Library Routine Document

## 1 Purpose

e04wcf is used to initialize the routine e04wdf.

## 2 Specification

Fortran Interface

Subroutine e04wcf (iw, leniw, rw, lenrw, ifail)
Integer, Intent (In) :: leniw, lenrw
Integer, Intent (Inout) :: ifail
Integer, Intent (Out) :: iw(leniw)
Real (Kind=nag_wp), Intent (Out) :: rw(lenrw)

#include <nagmk26.h>
void e04wcf_ (Integer iw[], const Integer *leniw, double rw[], const Integer *lenrw, Integer *ifail)

## 3 Description

e04wcf initializes the arrays iw and rw for the routine e04wdf.

## 4 References

None.

## 5 Arguments

1: $\mathbf{iw}\left({\mathbf{leniw}}\right)$ – Integer array, Communication Array

2: $\mathbf{leniw}$ – Integer, Input
On entry: the dimension of the array iw as declared in the (sub)program from which e04wcf is called.
Constraint: ${\mathbf{leniw}}\ge 600$, see routine e04wdf.

3: $\mathbf{rw}\left({\mathbf{lenrw}}\right)$ – Real (Kind=nag_wp) array, Communication Array

4: $\mathbf{lenrw}$ – Integer, Input
On entry: the dimension of the array rw as declared in the (sub)program from which e04wcf is called.
Constraint: ${\mathbf{lenrw}}\ge 600$, see routine e04wdf.

5: $\mathbf{ifail}$ – Integer, Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6 Error Indicators and Warnings

If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf). Errors or warnings detected by the routine:

${\mathbf{ifail}}=1$
One or more of the communication array lengths leniw or lenrw is less than $600$.

${\mathbf{ifail}}=-99$
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.

${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly. See Section 3.8 in How to Use the NAG Library and its Documentation for further information.

${\mathbf{ifail}}=-999$
Dynamic memory allocation failed. See Section 3.7 in How to Use the NAG Library and its Documentation for further information.

## 7 Accuracy

Not applicable.

## 8 Parallelism and Performance

e04wcf is not threaded in any implementation.
2019-03-25 10:34:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862968325614929, "perplexity": 6663.644490159141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203865.15/warc/CC-MAIN-20190325092147-20190325114147-00442.warc.gz"}
https://www.embeddedrelated.com/showarticle/1334.php
# UML Statechart tip: Handling errors when entering a state

This is my second post with advice and tips on designing software with UML statecharts. My first entry is here.

It has been nearly 20 years since I first studied UML statecharts. Since that initial exposure (thank you Samek!), I have applied event driven active object statechart designs to numerous projects [3]. Nothing has abated my preference for this pattern in my firmware and embedded software projects. Through the years I have taken note of a handful of common challenges when creating UML statechart based designs. This post tackles the question: how does an engineer handle synchronous errors while entering a state?

First some context. The UML specification ([1], 14.2.3.4.5 Entering a State) describes the behavior when a state is entered. Nowhere within that description does the UML specification describe error handling, nor does it describe any possible state transitions as an immediate result of entering the state. As Samek [6] notes, "The UML does not allow transitions in entry or exit actions."

Let's dive into an example. Our hypothetical product is required to maintain an internal audit log which is stored as a file in a filesystem. Additionally, the product is required to transmit the audit log to the appropriate backend server when certain events take place. Due to various restrictions, the software is only allowed to transmit 1 KiB of the log file at a time. An initial statechart to accommodate this requirement may appear similar to the following figure:

Given the above design, a software engineer may ask: "What happens if opening the audit file fails? How does the state machine design accommodate this failure?" As always with software design, there are many possibilities. One option, perhaps all too commonly followed, is to simply ignore the error. For obvious reasons we will skip analysis of that option. The following design options are explored below:

• Explicit Transitions
• Failure-Event Self-Posting
• Asynchronous Service

## Explicit Transitions

This option requires the developer to "Explicitly code two transitions with complementary guards and with different target states." - Samek [6]. This approach is the preferred solution for many firmware projects with a small state-space. Taking this option, our preliminary statechart is modified to the following:

Benefits of this approach include:

• Avoids error handling in the enter-state handler for the State-of-TransmittingAuditFile.
• The event handling code is clear and easy to understand.

Drawbacks of this approach include:

• The need for an additional cache or intermediary to store the file handle for future use by the destination state.
• If additional event handlers in other states require a similar transition, then the code will potentially violate the DRY principle as developers copy and paste the transition code to other states.
• Additionally, the firmware may increase in code size if this pattern is needed in multiple states.
• In large projects with dozens if not hundreds of states and events, we are increasing the likelihood of overlooking this pattern of event handling, especially during maintenance.
• However, this concern may be mitigated through templates or macros or other helper functions to contain this common logic.

Despite the disadvantages, this approach is the least complicated and adheres nicely to the UML requirements. I personally use this approach primarily in smaller projects where I do not expect requirements for multiple transitions to the same destination state.
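As a minimal, language-agnostic sketch of the explicit-transitions idea (the article's context is C/C++ firmware with a statechart framework; the state names, event string, and tiny dispatch method below are invented purely for illustration):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    TRANSMITTING_AUDIT_FILE = auto()
    AUDIT_FILE_ACCESS_ERROR = auto()

class AuditActiveObject:
    """Toy active object; names are hypothetical, not from the article."""

    def __init__(self, audit_path="audit.log"):
        self.audit_path = audit_path
        self.state = State.IDLE
        self.audit_file = None  # the intermediary storage for the handle noted above

    def dispatch(self, event):
        if self.state is State.IDLE and event == "TRANSMIT_REQUESTED":
            # Two explicit transitions with complementary guards: the guard is
            # simply whether the synchronous open succeeds or fails.
            try:
                self.audit_file = open(self.audit_path, "rb")
                self.state = State.TRANSMITTING_AUDIT_FILE
            except OSError:
                self.state = State.AUDIT_FILE_ACCESS_ERROR
        # Every other event that must reach TRANSMITTING_AUDIT_FILE repeats this
        # open-then-branch pattern, which is the copy/paste (DRY) concern above.

ao = AuditActiveObject()
ao.dispatch("TRANSMIT_REQUESTED")
print(ao.state)  # TRANSMITTING_AUDIT_FILE or AUDIT_FILE_ACCESS_ERROR
```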
## Failure-Event Self-Posting

In this option our enter-state handler properly confirms the success or failure of the file open function. If the operation fails, the handler self-posts a failure event to the active object's corresponding event message queue, enabling a state transition as a result of the failure event.

It is critical to note that this option should only be considered if the underlying framework or queue allows for a "high priority" or LIFO (Last In First Out) posting of an event. Examples include QActive::postLIFO(), FreeRTOS xQueueSendToFront(), or even, from the first major RTOS I used, pSOS's q_urgent(). A negative example would be a state machine based on Qt's QStateMachine [5], which would not enable this concept.

Why does this option require an underlying LIFO event queue? Our firmware designs typically handle many sources of asynchronous events, any of which may have already been posted to the event queue before this state is entered. If the newly entered state processes any of those events before processing the self-posted error event, then the state may accidentally process those events in an undefined state. Undefined behavior must be avoided. Given this information and modifying our example state machine we find:

Benefits of this approach include:

• All logic related to opening the file and handling the error is fully contained in a single state.
• Maintenance mistakes are reduced.
• No intermediary API/storage is needed for the file's handle.
• When multiple events across multiple states need to transition to this state, the code size will be smaller than with the "Explicit Transitions" approach.

Drawbacks of this approach include:

• Not all statechart frameworks or underlying event queues support a LIFO event.
• This pattern would be discouraged by strict adherents to the UML statechart design.
• The pattern falls apart as soon as multiple error conditions may be generated during the entry-state handler.

This approach is probably the least preferred of the options presented. However, I have personally used this approach in mid-sized projects where the underlying framework supports a LIFO event queue, where multiple states and events need to transition to the same destination state, where I want to avoid maintenance issues involved as the firmware team size grows, and where firmware code size is constrained.

## Asynchronous service

In the asynchronous service solution the firmware implements a separate asynchronous service for the purpose of transforming our synchronous file open method into an asynchronous operation. Along with this new service, this solution requires a more complex statechart design involving an additional intermediate state to initiate the asynchronous request and await its response. In some systems a common thread pool may already exist to enable equivalent behavior. Modifying our example state machine design to this solution, the design might now appear as shown in the following figure:

Benefits of this approach include:

• Logic is fully contained and all transitions to the required behavior use the same composite destination state.
• The asynchronous service creates clear success or failure events which may then create appropriate explicit transitions.
• The asynchronous service could be extended to other needs and could, in some systems, be the equivalent of a thread pool.

Drawbacks of this approach include:

• An additional asynchronous service must be implemented.
• More complex. Really, we are just trying to open a file!
And yet, this is often the difference between naïve bug-prone software and robust commercially successful software. Despite the increased complexity, this approach tends to be my preferred solution in larger firmware projects where multiple states and multiple events drive the state machine to the same destination state and where team size exceeds 8-10 software engineers.

I hope this was a useful post to all concerned. If interested in reading more on this topic, check out Samek's book [6] and my first related post. What challenges have you faced with UML statechart design? Let us know in the comments!

References

• [1] The UML spec: https://www.omg.org/spec/UML/2.5.1
• [2] https://www.w3.org/TR/scxml/
• [3] Related presentation: https://covemountainsoftware.files.wordpress.com/2...
• [4] GPL or commercial solution: https://www.state-machine.com/
• [5] Qt's statemachine: https://doc.qt.io/qt-5/statemachine-api.html
• [6] "Practical UML Statecharts in C/C++", 2nd Edition, by Miro Samek, https://amzn.to/2uaSFH7
• Previous post by Matthew Eshleman: The Hardest Bug I Never Solved

Comment (March 13, 2020): At the end of the article, you conclude with "and where team size exceeds 8-10 software engineers." I'm curious why you add team size as a consideration.

Comment (March 13, 2020): Thank you for the question. The larger the team size becomes, the more concerned I become about maintenance issues, i.e. mistakes as multiple people maintain/modify/amend the code. So, for example, if we stick with "explicit transitions", then everyone needs to remember to write code like this:

case NEW_EVENT_SIGNAL:
    //this signal requires a transition to our audit functionality
    if (fopen(..))
    {
        rtn = TransitionTo( TransmittingAuditFile );
    }
    else
    {
        rtn = TransitionTo( AuditFileAccessError );
    }
    break;

Instead, we end up with more maintenance-resilient code with the recommended approach:

case NEW_EVENT_SIGNAL:
    //this signal requires a transition to our audit functionality
    rtn = TransitionTo( TransmitAuditFile );
    break;

Additionally, a larger team size most likely indicates a larger, more complex project where the "Failure-Event Self-Posting" approach may not be a pattern we want to encourage. Hope that helps! Matthew

Comment (April 3, 2020): Nice articles. I have a question: in your experience, how do you handle interrupts with a state machine?

Comment (April 3, 2020): There are two typical ways:

1. The interrupt creates an appropriate event which is published or pushed onto an event queue, which feeds a state machine in a separate thread context (an active object). i.e. the interrupt becomes a source of events for the active object.
2. Or the interrupt creates an event which is processed immediately by the state machine in the interrupt context. In this case the state machine in question must be entirely owned by the interrupt in question and only process events in the ISR context.

I normally do the first, but have certainly implemented number two as well. Hope that helps! Matthew
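To make option 1 from that reply concrete, here is a minimal Python sketch in which a plain queue and thread stand in for an RTOS event queue and an active object task; the event names and the fake_isr function are invented for illustration and are not from the article.

```python
import queue
import threading

# Event queue feeding the active object; an ISR would post here and never block.
events = queue.Queue()

def fake_isr(signal):
    """Stand-in for an interrupt handler: create an event and post it."""
    events.put(signal)        # on real firmware: a non-blocking post from ISR context

def active_object():
    """Dispatch loop running in its own thread context."""
    while True:
        event = events.get()  # block until an event arrives
        if event == "STOP":
            break
        print(f"state machine dispatching {event}")

worker = threading.Thread(target=active_object)
worker.start()
fake_isr("BUTTON_PRESSED")    # events originate from "interrupts"
fake_isr("TIMER_EXPIRED")
fake_isr("STOP")
worker.join()
```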
2020-09-21 13:58:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26957178115844727, "perplexity": 3388.2650234221855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201699.38/warc/CC-MAIN-20200921112601-20200921142601-00101.warc.gz"}