url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://cdmhub.org/wiki
|
## Main Page
### What are wiki pages?
Wiki pages are user-written articles on a range of subjects. Any contributor or a group of contributors can create (and own) new articles, and there can be multiple articles on the same wiki, each written by a different author.
### Who can make a wiki page?
Anyone with an account can create a new article. When creating a new article, the initial contributor can choose to have a defined list of authors, all of whom can edit the page, or have an open, wiki-like format where anyone can contribute.
## Recent articles
### Analyzing Linear Viscoelastic Behaviors
Created on 19 May 2020
Contents 1 Scope of Linear Viscoelastic Behaviors 1.1 “Viscoelastic” 1.2 “Linear” vs. Nonlinear 2 Linear Viscoelastic Behaviors in Simple Shear 2.1 Constitutive Equations for Transient (Time-Dependent) Behaviors 2.1.1 In the Form of Relaxation Modulus $$G(t)$$ 2.1.2...
### Template: Tutorial template
Created on 21 Jan 2020
Problem Description Provide a detailed description of the problem. Possibly with a picture. (Image(ProblemPicture.jpg) failed - File not found) where ProblemPicture.jpg should be attached. Software Used Provide which code, which version, and also which interface are used for the solution....
|
2021-05-15 05:35:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20885711908340454, "perplexity": 3916.398569626873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00176.warc.gz"}
|
https://en.wikipedia.org/wiki/Talk:Bounded_inverse_theorem
|
# Talk:Bounded inverse theorem
• The completion of the example space X is l-infinity. However, in this case, the map T is not onto (and thus not bijective). So, for example, the sequence ${\displaystyle a_{n}=1}$ is in l-infty, but is not in the range of T.
|
2018-05-26 14:18:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882268309593201, "perplexity": 505.3131137401704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.75/warc/CC-MAIN-20180526131802-20180526151802-00101.warc.gz"}
|
http://www.verycomputer.com/18_0a8cd5d68aa19cf8_1.htm
|
Putting together TeX boxes
Putting together TeX boxes
Hi,
I am facted with the following problem: I have to vertical boxes
(\vbox) of different height. No I want to glue them together in a
horizontal box (\hbox) such that the top lienes of the two boxes are
on the same line (see figure below). Who can I do this, preferably in
plain TeX (but*is OK too)?
Figure
+----------------------------+ +-----------------+
| Box 1 | | |
+----------------------------+ | Box 2 |
| |
+-----------------+
Claude
-----------------------------------------------------------------------------
Claude G. Diderich PGP V2.3 public key available
Swiss Federal Institute of Technology, Lausanne -----------------------------
Department of Computer Science Fields of interest:
Computer Science Theory Laboratory - Complexity theory
CH-1015 Lausanne (Switzerland - Europe) - Combinatorial optimization
Phone: (021)/693-52-86 - Parallel computations
-----------------------------------------------------------------------------
Putting together TeX boxes
Some time ago I posted the following question to the TeX net.
Quote:> I am faced with the following problem: I have to vertical boxes
> (\vbox) of different height. No I want to glue them together in a
> horizontal box (\hbox) such that the top lienes of the two boxes are
> on the same line (see figure below). Who can I do this, preferably in
> plain TeX (but*is OK too)?
> Figure
> +----------------------------+ +-----------------+
> | Box 1 | | |
> +----------------------------+ | Box 2 |
> | |
> +-----------------+
Here are some of the repies I received:
---------------------------------------------------------------------------
If you want an easy*solution, try
\parbox[t]{3in}{ First box here } \hspace{ a little space if needed }%
\parbox[t]{3in}{ Second box here }\\
---------------------------------------------------------------------------
Did you try \hbox{\vtop{stuff}\vtop{stuff}} ?
\vbox aligns with the bottom baseline rather than the top.
---------------------------------------------------------------------------
\line{\vtop{ ... Box 1 ...}\hfill\vtop{... Box 2 ...}}
should do what you want: if the boxes are \vbox's their bottoms
are lined up, but if they are \vtop's their tops are aligned.
---------------------------------------------------------------------------
The best thing to do is use \vtop rather than \vbox. \vtop is exactly
like \vbox, except the reference point is always one line from the top
of the box (off hand, I think it's really at the reference point of the
first box it encloses). When I have this problem, I usually have text
in the \vboxes, so \vtop does what I want.
The second best thing is to use \setbox and fiddle with heights:
% first use the scratch boxes \box0 and \box2 to hold the
% contents of the two boxes in your figure
\setbox0=\vbox{<whatever's in Box 1>}
\setbox2=\vbox{<whatever's in Box 2>}
% then fiddle with the height of box 2
\skip2=\ht 2 % skip 2 has the height of box 2
\ht 2=\ht 0 % set height of box 2 to be the same as box 1
% and finally adjust the depth of box 2 to take up the
% slack. We can't \advance \dp2 by \skip2, incidentally
\dp 2=\skip2
% you need to be careful that you don't do anything with
% \box0, \box2, or \skip2 while this is going on. If in
% doubt, define registers with \newbox and \newskip
% oh -- assemble in an \hbox
\hbox{\box 0 \hskip 4 pt \box 2}
Hope this helps.
---------------------------------------------------------------------------
In fact only the last reply solved my problem. In fact the two boxes I want
to align contain figures. Therefore aligning on the top baseline doesn't
change the problem.
Claude
-----------------------------------------------------------------------------
Claude G. Diderich PGP V2.3 public key available
Swiss Federal Institute of Technology, Lausanne -----------------------------
Department of Computer Science Fields of interest:
Computer Science Theory Laboratory - Complexity theory
CH-1015 Lausanne (Switzerland - Europe) - Combinatorial optimization
Phone: (021)/693-52-86 - Parallel computations
-----------------------------------------------------------------------------
In summary: I seek a style file for use with TeX which will allow me to
put incidental text in a framed box, perhaps with a slightly shaded
background, and allow the box to float to an appropriate position close
to the text it complements. If in addition the boxes could be numbered
then you would make my day.
Dear all,
I am currently in the process of writing up my thesis.
I would like to include some little pieces of additional
information which is aside to the main text. For example,
it would be nice to have a little biography of James Clerk Maxwell
beside his equations.
I particularly like the way that this sort of thing has been done by
Aki and Richards (seismologists know who these are) but if you read
the NewScientist you will also be familiar with the little boxes
containing extra information.
I could probably just about write a tex style file to do this, but I'd
rather not re-invent the wheel and so I am hoping someone out there will
be able to help me. If someone has a partially working style file I
would be happy to try help make it complete.
Many thanks,
University of Edinburgh | mathematicians and all those who make
Dept of Geology and Geophysics | empty prophecies. The danger already
http://www.glg.ed.ac.uk/~ajsw | exists that mathematicians have made a
phone +44 131 650 8533 | covenant with the devil to darken the
fax +44 131 668 3184 | spirit and confine man in the bonds of
| Hell." -- St. Augustine
|
2020-02-24 01:49:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366829991340637, "perplexity": 12480.151643021163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00158.warc.gz"}
|
https://www.physicsforums.com/threads/would-a-magnetic-charge-have-the-same-strength-as-a-electric-charge.683014/
|
# Would a magnetic charge have the same strength as a electric charge?
1. Apr 3, 2013
### Thesnake22
If magnetic charges existed, would the strength of the field be the same as a electric charge? Would you be able to plug it in to the equation of coulomb's law? If so, what would the constant be? The same?
2. Apr 3, 2013
### Vodkacannon
Well if they don't exist, then who's to say that they would have the same strength as the E.M.F.?
3. Apr 4, 2013
### mickybob
You would need to use the permeability of free space rather than the permittivity, but otherwise yes.
4. Apr 4, 2013
### vanhees71
Using quantum theory, Dirac has shown that the existence of a magnetic monopole implies the quantization of electrical charges. This would be great, because there is no explanation for a quantization of charges from any fundamental principle within the standard model of elementary particles yet (despite the fact that the charge pattern is restricted by the demand of an anomaly free chiral gauge group for the electroweak sector). Dirac's analysis shows that the strength of the magnetic monopole would be given by the then quantized electric charge of elementary particles. This rule reads (in Gaussian units)
$$e g_n =\frac{n}{2} \hbar c,$$
where $e$ is the elementary electric charge and $g_n$ possible values for the magnetic charge with $n \in \mathbb{Z}$.
|
2017-12-16 19:22:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7805162668228149, "perplexity": 371.37875416729565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588420.68/warc/CC-MAIN-20171216181940-20171216203940-00510.warc.gz"}
|
http://khea.arqp.pw/one-sample-permutation-test.html
|
# One Sample Permutation Test
Fill in all requested information, using one line per sample. Five elements have 120; six elements have 720, and so on. 2 Permutation tests and combination based tests 3. The mathematical and statistical foundations for understanding permutation tests are laid out. 1), describing the key concepts at length, before turning to permutation tests (§2. That is, we have ktreatments in either b blocks from a RCBD or bsubjects from a SRMD. Permutations for One-Sample or Paired Two-Sample Tests Wilcoxon Signed Rank Tests. heavily on the performance of the underlying CI tests. Figure 2 shows the distribution of over the same 1000 randomizations. Permutation Combination questions, practice tests, sample problems, question bank : Ascent - MBA TANCET, XAT Classes of which one can seat 5 and the other only 4. In this calculator, the degree of freedom for one sample and two sample t-tests are calculated based on number of elements in sequences. Under the null hypothesis, there is no difference in the populations. Finally, permutation or (re-)randomization tests are used for hypothesis testing, where the reference distribution is obtained by calculating all possible values of the test statistic, or a subset thereof, under rearrangements of the labels on the observed data points. We are performing one in this example. The following function performs a Mantel test between two similarity matrices and computes the p value using permutation tests. 1-sample t-test on the di erences:mass di erences are iid sample from normal distribution, unknown variance, zero mean. Introductory permutation problems. permutation (x) ¶ Randomly permute a sequence, or return a permuted range. Multivariate permutation tests for two sample testing in presence of nondetects with application to microarray data Two-sample tests and one-way MANOVA for. heavily on the performance of the underlying CI tests. -Good for small datasets. The term permutation tests refers to rearrangements of the data. 1 General Aspects 48 3. Description. kim at duke. This is because when sample sizes are very small, the discreteness of the permutation distribution makes only certain p-values achievable. Lab Use Only Sample ID County Crop Code(s) (See back of form. The function performs an ANOVA like permutation test for Constrained Correspondence Analysis (), Redundancy Analysis () or Constrained Analysis of Principal Coordinates () to assess the significance of constraints. Introductory permutation problems. 3 However, four of the algorithms (OPDN, OPDN-Alt, Bebb-Sim, and PROCSS) easily can be modified to. The overview and steps of such a test are:. One Sample Permutation t-test Description. Permutation Tests. mat and design. First: The first thing to decide in doing a permutation test for a one-way ANOVA is the 'metric' you are going to use to judge differences. To ensure stability of the results, the number of permutations should be large. Method 2: simulation-based permutation test I This can evaluate evidence for/against a null hypothesis. Second, permute the data and compute the test statistic for each data permutation, which in turn creates the so-called reference distribution [1]. Another alternative is a permutation test, or a bootstrap. Keith Dunker 1, Slobodan Vucetic 2 1 Center for Computational Biology and Bioinformatics, Indiana University,. The null hypothesis of the test specifies that the permutations are all equally likely. Introduction. It supports one- and two-tailed tests, and returns a p-value, the observed difference, and the effect size. to each sample, we will have. Theory of Permutation Tests for One-Sample Problems. Some connections between permutation tests and t-tests and their relevance for adaptive designs Ekkehard Glimm 1, Michael Proschan. For example, you can change the significance level or conduct the test without assuming equal variances. They describe permutations as n distinct objects taken r at a time. In Section 4 we look at permutation tests for two-sample data. The null hypothesis of this test is that both samples come from the same distribution. In this paper, we propose a new type of permutation tests for testing the difference between two population means: the split sample permutation t-tests. Here computation is performed on MNE sample dataset between 40 and 60 ms. To test this hypothesis, you clone 100 cells. Permutation tests in this book will use the coin package, with either of two functions, independence_test and symmetry_test. It is useful to transform the paired data into their pairwise differences and sums,. For example we could just use the difference in the sample means as one test statistic. However, in real data where the tests are often correlated (like neuroimaging data), the Bonferroni correction can give overly-conservative results. The second is to measure the probability that a dependency. One-sample t-test (testing against a known mean μ 0): where is the sample mean, σ is the sample standard deviation and n is the sample size. With permutations, every little detail matters. He found out that he has lost the assignment questions. Based on these technical arguments, the ideas are broadly applicable and generalizations have been made to the k-sample problem of comparing general parameters, the two-sample U-statistics, and d-dimensional multivariate cases and multiple testing. If x is a multi-dimensional array, it is only shuffled along its first index. These tests do not assume random sampling from well-defined populations. The test above is usually called a chi-squared test of homogeneity. Multivariate analysis of variance (MANOVA) is simply an ANOVA with several dependent variables. We are performing one in this example. Medical University of Vienna, Vienna, Austria. Model selection based on permutation tests consistently produces networks with higher BIC and BDEu scores for both small and moderately large sample sizes. If you do require a 'randomness' test of the permutations wrt themselves, I think that you're going to have to redefine randomness to something specific to your problem. Horizontal Line Test. 3 of the book, we describe how to carry out a 2 group permutation test in SAS as well as with the coin package in R. The paired sample t-test, sometimes called the dependent sample t-test, is a statistical procedure used to determine whether the mean difference between two sets of observations is zero. Permutation tests for a single sample based on means were described by Fisher (1935). Suppose we test additive e ects of 8 SNPs, one at a time, and we want to know if the most signi cant association is real. Second, permute the data and compute the test statistic for each data permutation, which in turn creates the so-called reference distribution [1]. test, which of course performs one-sample and two-sample t-tests. ${z = \frac{(p - P)}{\sigma}}$ where P is the hypothesized value of population proportion in the null hypothesis, p is the sample proportion, and ${\sigma}$ is the standard deviation of the sampling distribution. First think about the two-sample t-test. We will call the permutation method using test statistic T 1 the regular permutation and the method using test statistic T 2 the studentized permu-tation. So using the permutation test seems to give us the best of both worlds. the only line of the output file contains one integer - A[i] too. The mean. When k is small, we can consider all possible permutations; otherwise, a large number of random permutations, say B , can be used. % % In: % sample1 - vector of measurements representing one sample % sample2 - vector of measurements representing a second sample % permutations - the number of permutations % % Optional (name-value pairs): % sidedness - whether to test one. If we assume both samples come from the same approximately normal distribution, we can use math formulas based on probability theory t. This article provides a good general overview of permutation feature importance, its theoretical basis, and its applications in machine learning: Permutation feature importance. Permutation tests (also called exact tests, randomization tests, or re-randomization tests) are nonparametric test procedures to test the null hypothesis that two different groups come from the same distribution. 4 Rank tests versus permutation tests There are some similarities and some differences between the two kinds of nonparametric tests. Permutation tests in this book will use the coin package, with either of two functions, independence_test and symmetry_test. A one sample z test is one of the most basic types of hypothesis test. Permutation tests with ANOVA have an advantage over traditional non-parametric techniques which are often not very powerful (with the exception of Kruskal-Wallis). Confidence Intervals Based on Permutation Tests Based on the relationship between hypothesis tests and confidence intervals, it is possible to construct a two-sided or one-sided $$(1-\alpha)100\%$$ confidence interval for the mean $$\mu$$ based on the one-sample permutation test by finding the values of $$\mu_0$$ that correspond to obtaining a. where n is the sample size, d is the effect size, and type indicates a two-sample t-test, one-sample t-test or paired t-test. In this paper we propose an approximate permutation test for a. Choose from 109 different sets of Probability with Combinations and Permutations flashcards on Quizlet. One fundamental difference is that exact tests exhaust all possible outcomes while resampling simulates a large number of possible. In R, a permutation of order n is one possible rearrangement of the integers 1 through n inclusive. Previous message (by thread): [FieldTrip] One-sample cluster based permutation t-test ERP data. In this case, the test is usually called a chi-squared test of goodness-of-fit. This is usually written n P k. One simple way to run our test is to imagine all possible rearrangements of the data between pre-test and post-test scores, keeping the pairs of scores together. Main difference: randomization tests consider every possible permutation of the labels, permutation tests take a random sample of permutations of the labels. As a result, modern statistics needs permutation testing for complex data with low sample size and many variables, especially in observational studies. In the Listening section, one sample question with an audio file is offered for each test item. For any one SNP the z-statistic from a logistic. permutation sample is obtained by assigning one subject to the experimental treatment and the remaining ones (m j) to the standard treatment, within each observed stratum of – 318 – m j +1 subjects. As is well known (Romano [23]), the permutation test possesses a certain. Permutation tests with ANOVA have an advantage over traditional non-parametric techniques which are often not very powerful (with the exception of Kruskal-Wallis). The above description applies easily to the case of a one-way Anova or a t test, where it is obvious how permutations should be done. ) of the sample. Results: Both simulated and real data examples are used for illustration and comparison. COURSE OBJECTIVE This full-day or half-day course is designed to introduce participants to Bootstrapping and Bootstrapping methods. In this case, the test is usually called a chi-squared test of goodness-of-fit. The Wilcoxon sum rank test is more powerful than a t test statistic for moderate and large sample sizes for heavier tailed distributions. The ideas are broadly applicable and special attention is given to the. The numbers are shown below. The testing pow-ers obtained without permutation tests were typically lower than those obtained with permutation tests for all methods when the sample size is small (100 and below). 2 Power Functions of Permutation Tests 93. Stochastic Ordering and ANOVA: performs multivariate two-sample permutation tests for continuous data based on Student's t Nonparametric One-way ANOVA. 1), describing the key concepts at length, before turning to permutation tests (§2. p-values are exact and not asymptotic. For any one SNP the z-statistic from a logistic. For other tests, permutation is necessary to obtain any significance values at all (e. A paired test using data x and nonNULL y is equivalent to a one-sample test using data x-y. Basic Inference - Proportions and Means. 3 However, four of the algorithms (OPDN, OPDN-Alt, Bebb-Sim, and PROCSS) easily can be modified to. These tests do not assume random sampling from well-defined populations. Note: The function y = f(x) is a function if it passes the vertical line test. In other words, if the null hypothesis is true, a permutation within any pair of scores is as likely as the reverse. In this post, we will take a look at the later. Moreover, the one-sample t-tests appear more powerful than the two-sample t-tests because of a positive correlation between the control and treated samples on the same array. This function can perform the test on one variable or simultaneously on multiple variables. Permutation Test. The function performs an ANOVA like permutation test for Constrained Correspondence Analysis (), Redundancy Analysis () or Constrained Analysis of Principal Coordinates () to assess the significance of constraints. If we observe only one sample, but we wish to test whether the categories occur in some pre-specified proportions, a similar test (and the same R function) may be applied. 3 However, four of the algorithms (OPDN, OPDN-Alt, Bebb-Sim, and PROCSS) easily can be modified to. 10 3 Permutation test, Monte Carlo p-value The Multtest Procedure Model Information Test for continuous variables Mean t-test Degrees of Freedom Method Pooled Tails for continuous tests Two-tailed. The Kruskal-Wallis statistic. That it, its significance level is exactly what we assign it to be. The permutation test compares values across groups, and can also be used to compare ranks or counts. It covers all forms of test item types for all levels (the number of questions is different from the number of test items in an actual test). In the video, you learned that permutation sampling is a great way to simulate the hypothesis that two variables have identical probability distributions. The number of independent ways a dynamic system can move without breaking any limitations applied on them is the number of degrees of freedom. For example, you might want to know how your sample mean compares to the population mean. Also, similar to a result for two-sample tests, the F statistic can be rewritten F = SST=(k ¡1) (C ¡SST)=(N ¡k); which is an increasing function of SST, so that the permutation F test can be based on SST or just SSX, a weighted sum of squared sample means. However, you need to remember that no “little trick” will replace the sample size to achieve the optimum power of the experiment. Finally, we ask the question, "Is S obs very different from the other S π values?". ttest_ind¶ scipy. test(n1 = , n2= , d = , sig. Alice, Bob and Charlie is different from Charlie, Bob and Alice (insert. Given independent samples from P and Q, two-sample permutation tests allow one to construct exact level tests when the null hypothesis is P=Q. edu Fri Oct 5 19:58:39 CEST 2018. , problem solving and data sufficiency. You can delete/downvote my answer if you deem it unfit. Our algorithm combines depth-first search and backtracking. the sample size is less than 50 observations) and tol is not given, the scores are mapped into \{1,…,N\}, see pperm for the details. Examples of Univariate Multi-Sample Problems. About two thirds of the species grow in the nival zone (above 3,000m above sea level) now while about one third do not. test, which of course performs one-sample and two-sample t-tests. 'Student's' t Test is one of the most commonly used techniques for testing a hypothesis on the basis of a difference between sample means. 9251 and under the randomization approach the probability of observing a difference this large or larger is 0. Einsporn and Desale Habtzghi University of Akron Abstract: This paper presents a permutation test for the incomplete pairs setting. -Wide variety of statistics (but needs pivotality). Rather than referring to a distribution (e. That is to say, ANOVA tests for the difference in means between two or more groups, while MANOVA tests for the difference in two or more vectors of means. Both can only be applied to a comparison situation (e. The following Matlab project contains the source code and Matlab examples used for one sample paired samples permutation t test with correction for multiple comparisons. Of course permutation is very much helpful to prepare the sc. For large samples, the power of the permutation test using the difference in sample means is equal to the t-test [1] for normally-distributed alternates. In R, a permutation of order n is one possible rearrangement of the integers 1 through n inclusive. Now let's look at a second simple example which is also a classic permutation test. See Example 16. Permutations and Combinations Aptitude Questions Candidates need to check the basic info that we are providing in this section that is Permutations and Combinations Aptitude Multiple Choice Questions and Answers. Comput Stat Data Anal 2009;53(12):4290-4300. This is done by generating the reference distribution by Monte Carlo sampling, which takes a small (relative to the total number of permutations) random sample of the possible replicates. Generally speaking, there are two kinds of permutation tests that we will use. Now there are 200 cells composed of 100 pairs of identical clones. [FieldTrip] One-sample cluster based permutation t-test ERP data Eelke Spaak e. Pievani 2, R. Permutations for One-Sample or Paired Two-Sample Tests Wilcoxon Signed Rank Tests. John, one of the students in the class, is studying for the final exams now. Permutation procedures are available for a variety of tests, as described below. For one-sample or paired two-sample tests, in particular, for Wilcoxon signed rank tests, the permutations are really subsets. It is given here. Introduction: Permutations and Combination: Permutations: Permutations are the different arrangements of a given number of things by taking some or all at a time. You can delete/downvote my answer if you deem it unfit. Permutation Analysis in Factorial Designs. Examples of Nonparametric Combination. - [Instructor] So when we count things, it's a permutation if one order of the arrangement counts separately from another order of the same arrangement. Fully enumerating a permutation test requires calculating the test statistic appropriate for the hypotheses being tested for every possible two-sample 1 Permutation tests were advocated by one of the fathers of modern statistics, Sir R. To conduct a randomization test, first specify the test statistic of interest, e. To carry out the permutation methods, first use equation (1) to compute the test statistic T0 a from the observed samples [1], where a=1,2. Fisher (1935a) was the first co propose a permutation test that employ- ed a reference set of test statistic values dependent on the actual observa- tions, rather than their ranks (Kennedy, 1995). One simple way to run our test is to imagine all possible rearrangements of the data between pre-test and post-test scores, keeping the pairs of scores together. The following function performs a Mantel test between two similarity matrices and computes the p value using permutation tests. Now draw the numbers one at a time, recording the order in which the numbers were selected. The null hypothesis of this test is that both samples come from the same distribution. A comparison between a permutation test and the usual t-test for this problem. However, permutation tests can be used to test significance on sample statistics that do not have well known distributions like the t-distribution. For example, you might want to know how your sample mean compares to the population mean. National Institute of Allergy and Infectious Diseases, Bethesda, Maryland. However, we argue that the permutation tests have generally been misused across all disciplines and in this paper, we formally examine this problem in great detail. permutation¶ numpy. Question: Which Of The Following About Permutation Tests Are True? (Permutation Tests Have Similar Power To A Parametric Test When Sample Sizes Are Small. Permutation Test VS Bootstrap Hypothesis Testing •Accuracy: In the two-sample problem, 𝑆𝐿𝑒𝑟 is the exact probability of obtaining a test statistic as extreme as the one observed. We explore why the methods fail to appropriately control the false-positive risk. You might pick the maximum difference in the sample means, the variance of the sample means, the standard F-statistic, and so on. Permutation and Combination Questions with Answers: Ques. This is a two-sided test for the null hypothesis that 2 independent samples have identical average (expected) values. The script creates an output file in tab-separated format where each row is a different group comparison. One-sample t-test. Another alternative is a permutation test, or a bootstrap. What is the Permutation Formula, Examples of Permutation Word Problems involving n things taken r at a time, How to solve Permutation Problems with Repeated Symbols, How to solve Permutation Problems with restrictions or special conditions, items together or not together or are restricted to the ends, how to differentiate between permutations and combinations, examples with step by step solutions. An ordered arrangement of sample data or sample points is called as a permutation. , 500 or 1,000). Under very weak assumptions for comparing estimators, we provide a general test procedure whereby the asymptotic validity of the permutation test holds while retaining the exact rejection probability $\alpha$ in finite samples when the underlying distributions are identical. If real values x or y are passed to this function the following applies: if exact is true (i. Power Report for T-Test This report gives the power of the paired-sample T-Test when it is assumed that the population mean difference. Thus, the assignment of values to one population or the other is regarded as one arbitrary permutation. One typical use of validation is model selection. The paired sample t-test, sometimes called the dependent sample t-test, is a statistical procedure used to determine whether the mean difference between two sets of observations is zero. 105 IBPS Clerk for just Rs. The typically small size of the one sample makes a permutation test the appropriate statistical test to use when making the comparison (other statistical tests are precluded from use under these conditions because the distributional assumptions they rely upon are violated by small sample sizes), but the often large size of the other sample makes a permutation test computationally very difficult to implement quickly enough to be a viable method of comparison. What resampling does is to take randomly drawn (sub)samples of the sample and calculate the statistic from that (sub)sample. The variable Trt is specified in the CLASS statement so that permutations are done for the groups formed by different levels of the variable. Introduction. You can adapt permutation tests to many different ANOVA designs. A concise way to say this is that the distribution of the data under the null hypothesis satisfies exchangeability. Permutation tests for a single sample based on means were described by Fisher (1935). However, if the permutation test agrees with the parametric test, one may have a greater degree of con dence in the estimates and con dence intervals constructed using the parametric method. actual size equals desired size) only if the pairwise differences have a distribution that is continuous and symmetric around zero. if the t-statistic is used, the test assumes either exchangability or a sufficiently large sample size. The confidence interval (also called margin of error) is the plus-or-minus figure usually reported in newspaper or television opinion poll results. 2 Statistical testing by permutation The role of a statistical test is to decide whether some parameter of the reference population may take a value assumed by hypothesis, given the fact that the corresponding statistic, whose value i s estimated from a sample of objects, may have a somewhat different value. For example, if G1={1,2,3} and G2={4,5}, then a valid permutation is G1={3,2,1} and G2={5,4}. hi, Library DAAG has onet. One of the problems with this approach is that the false alarm (FA) rate of these parametric statistical tests (the probability of falsely rejecting the null hypothesis) often cannot be controlled (Eklund et al. 2 Permutation tests and combination based tests 3. Three commonly used test statistics, the sample mean, SAM statistic and Student's t-statistic, are considered. Use permutations to count the number of ways an event can happen, as applied in Ex. 2 Theory of One-Dimensional Permutation Tests 2. That is to say, ANOVA tests for the difference in means between two or more groups, while MANOVA tests for the difference in two or more vectors of means. Estimating the precision of sample statistics (medians, variances, percentiles) by using subsets of available data (jackknifing) or drawing randomly with replacement from a set of data points (bootstrapping). The conduct of a randomization or permutation test for the equality of two population means is as follows. Here computation is performed on MNE sample dataset between 40 and 60 ms. actual size equals desired size) only if the pairwise differences have a distribution that is continuous and symmetric around zero. Prepare Cogat Test. On the Theory of Rank Order Tests for Location in the Multivariate One Sample Problem Sen, Pranab Kumar and Puri, Madan Lal, The Annals of Mathematical Statistics, 1967; A Non-Parametric Test of Independence Hoeffding, Wassily, The Annals of Mathematical Statistics, 1948 + See more. In this article, I'll show you how to create and manipulate mathematical permutations using the R language. However, as the sample size increases, the testing powers were similar irrespective of using permutation tests. We will see that some applications are naturally called re-randomization , as that is how the problem is approached. As is well known (Romano [23]), the permutation test possesses a certain. Lecture 1: Random number generation, permutation test, and the bootstrap one has to rely on other methods such as Welch Two Sample t-test data: x and y. Second Language Learners who have taken the Advanced Placement (AP) test, the SAT II Achievement test, or other advanced Spanish test should consult with their advisors. While permutation tests can also be used when random sampling was used, they require a different sort of justification (see Ernst 2004). To generate a set of feature scores requires that you have an already trained model, as well as a test dataset. Obtaining the null distribution. Remember: Choose either test A or B for each sample. This test uses the density of the running variable to examine if there is a disproportionate mass of individuals on one side of the threshold, which represents an alternative implication of the identi cation assumption. Translation: n refers to the number of objects from which the permutation is formed; and r refers to the number of objects used to form the permutation. If you sample 5 men and 5 women at random, you might get something like this: Men: 140 180 188 210 190. Theory of Permutation Tests for One-Sample Problems. For example, if G1={1,2,3} and G2={4,5}, then a valid permutation is G1={3,2,1} and G2={5,4}. That is to say, ANOVA tests for the difference in means between two or more groups, while MANOVA tests for the difference in two or more vectors of means. permutation) is an equivalent test to the one-sample t-test. One simple way to run our test is to imagine all possible rearrangements of the data between pre-test and post-test scores, keeping the pairs of scores together. A permutation test computes the sampling distribution for any test statistic, under the ‘strong null hypothesis’ that a set of genetic variants has absolutely no eect on the outcome. The total number of permutations of a set of elements can be expressed as 3! or n factorial, where n represents the number of elements in the set. This article describes the formula syntax and usage of the PERMUT function in Microsoft Excel. Then, a kernel two-sample test, which has been studied extensively in prior work, can be applied to a permuted and an unpermuted. So you compute power retrospectively to see if the test was powerful enough or not. , the difference between arithmetic means. Like bootstrapping, a permutation test builds - rather than assumes - sampling distribution (called the "permutation distribution") by resampling the observed data. It supports one- and two-tailed tests, and returns a p-value, the observed difference, and the effect size. We show one such adaptation, sample size change, in a two-stage adaptive t-test setting. Of course it wasn’t powerful enough – that’s why the result isn’t significant. In the resampling technique, only a small fraction of pos-sible permutations are generated and the statistical sig-nificance is approximately computed. To test this hypothesis, you clone 100 cells. The coin package provides the ability to perform a wide variety of re-randomization or permutation based statistical tests. On the one hand, the p-values of a permutation test are exact conditional probabilities (up to computational limits) for all sample sizes Permutation tests do not make any e ort to estimate the common distribution F; it is treated as a nuisance parameter In contrast, a bootstrap test estimates Fusing the empirical. The mathematical and statistical foundations for understanding permutation tests are laid out. One fundamental difference is that exact tests exhaust all possible outcomes while resampling simulates a large number of possible. I have to perform a permutation test without replacement. approximate permutation test or random permutation tests. The overview and steps of such a test are:. In addition there is a categorical column added in which it is indicated by a '. permutation (x) ¶ Randomly permute a sequence, or return a permuted range. The sampling distribution of the test statistic under the null hypothesis is. The basic principle is that to test differences between two groups assigned at random we can determine the exact distribution of a test statistic (such as a difference in means) under the null hypothesis by calculating the value of the test statistic for all. Here computation is performed on MNE sample dataset between 40 and 60 ms. • The quantile test • Permutation tests — test the mean for non-normal distributions Comparing Three or More Groups • One- and two-factor ANOVA • Nonparametric Kruskal-Wallis test • Multiple comparison tests: who’s different? • Permutation one-factor test: never worry about a normal distribution again! Contingency Tables. With permutations, every little detail matters. Software for the multiple response permutation tests was available previously in the SAS® Supplemental Library as PROC MRPP. Permutation tests were first introduced by Fisher (1935) and Bizhannia et al. If interested in proportions rather than location shift (median), McNemar’s test. When they refer to permutations, statisticians use a specific terminology. Very early in the book he gives example code to implement a Permutation Test on one of his datasets (included…. BOOSTRAP POWER OF THE ONE-SAMPLE PERMUTATION TEST We first introduce the permutation test, then define the power of the test and show that it tends to 1 as, under suitable conditions, the critical value of the permutation test converges to a constant and the test statistic tends to +f. In some cases, repetition of the same element is allowed in the permutation. The null hypothesis is that the ratings are uninfluenced by reported gender—any particular student would assign the same rating regardless of instructor gender. So lets go through some examples of using power. To conduct a randomization test, first specify the test statistic of interest, e. the permutation test can be used for any linear model. Opdyke DataMineIt Marblehead, MA While the distribution-free nature of permutation tests makes them the most appropriate method for. Four elements have 4! permutations, or 1 x 2 x 3 x 4 = 24. Maindonald References. The test is based on a t-statistic and can be applied to situations in which a one sample or paired sample/repeated measures t-test is appropriate. the permutation test can be used for any linear model. PERMUTATION TESTING TO THE RESCUE! This framework already incorporates multiple comparison corrections! Unlike Bonferroni, permutation testing: 1. Let's talk about permutation tests and why we might want to do them. The one sample t test compares the mean of your sample data to a known value. Then we repeat the process for every possible permutation of the sample. One of the most well known is the classic permutation test dated back to Fisher. The overview and steps of such a test are:. You have a small sample size. In this paper, ranked set two-sample permutation test of comparing two-independent groups in terms of some measure of location is presented. As is well known (Romano [23]), the permutation test possesses a certain. The first one is to assess the probability that the difference of a statistic between two distributions is explained by chance. It covers all forms of test item types for all levels (the number of questions is different from the number of test items in an actual test). flip each pair the other wa y with probability 50%) If it is a regression, and if the Y points are randomly associated with. Generally speaking, there are two kinds of permutation tests that we will use. Permutation tests also get referred to as "Exact Hypothesis Tests", and serve as an alternative approach to large-sample. The typically small size of the one sample makes a permutation test the appropriate statistical test to use when making the comparison (other statistical tests are precluded from use under these conditions because the distributional assumptions they rely upon are violated by small sample sizes), but the often large size of the other sample makes a permutation test computationally very difficult to implement quickly enough to be a viable method of comparison. Description. Permutation tests belong to a wider class of methods called randomization tests. In a paired sample t-test, each subject or entity is measured twice, resulting in pairs of observations. the population), then you are violating the independence assumption of the Wilcoxon Rank Sum Test; in fact the Wilcoxon Rank Sum Test is really testing whether the two data sets come from the same population, which in this case would clearly be true since one of the sets is the population from. One of the objectives of the present study was to develop methods for ascertaining the null distributions of global, voxel, and cluster statistics by permutation procedures and to crossvalidate these permutation tests by comparison to the corresponding tests derived from normal theory. The parametric t-test should not be used with highly skewed data. But they were not pleased so much because they needed time consuming calculations. Permutation- and Rank-Based Methods Yibi Huang I Two-sample data I two-sample t tests and Welch t-tests (Review) I permutation test I (Wilcoxon) rank-sum test (aka. Theory of Permutation Tests for Multi-Sample Problems. [FieldTrip] One-sample t-test with cluster-based permutation test Seung Goo Kim, Ph. The paired sample t-test, sometimes called the dependent sample t-test, is a statistical procedure used to determine whether the mean difference between two sets of observations is zero. Two-Sample Unpaired T-test. Use permutations to count the number of ways an event can happen, as applied in Ex. Perform an asymptotic two-sample Kolmogorov-Smirnov-test of the null hypothesis that x and y are drawn from the same distribution against the alternative hypothesis that they come from different distributions. Learn and practice questions on permutations and combinations. The term permutation tests refers to rearrangements of the data. So using the permutation test seems to give us the best of both worlds. 1 Definition and Algorithm for the Conditional Power. Now let's look at a second simple example which is also a classic permutation test. The permutation distribution results from taking all possible samples of n2 values from the total of n values. Introduction. Question: Which Of The Following About Permutation Tests Are True? (Permutation Tests Have Similar Power To A Parametric Test When Sample Sizes Are Small. then the permutation test T. Permutation procedures are available for a variety of tests, as described below. Example 2: Permutation. First: The first thing to decide in doing a permutation test for a one-way ANOVA is the 'metric' you are going to use to judge differences. Choose from 109 different sets of Probability with Combinations and Permutations flashcards on Quizlet. the sample size is less than 50 observations) and tol is not given, the scores are mapped into \{1,\dots,N\}, see pperm for the details. This is one of the common stumbling blocks-in order to make sense of your sample and have the one sample z test give you the right information you must make sure you. The ideas are broadly applicable and special attention is given to the. With permutations, every little detail matters. When the permutation is repeated, the results might vary greatly. Once you have your design files run:. The number of independent ways a dynamic system can move without breaking any limitations applied on them is the number of degrees of freedom. ), these methods repeatedly sample (resample) the original data to build new distributions to test some analysis outcome.
|
2019-11-14 20:14:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7477243542671204, "perplexity": 766.2969530050457}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00098.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-r-review-exercises-page-75/56
|
## College Algebra (11th Edition)
$8r^2+26rs-99s^2$
Using the $(a+b)(c+d)=ac+ad+bc+bd$ or the FOIL Method and the laws of exponents, the given expression, $(2r+11s)(4r-9s) ,$ simplifies to \begin{array}{l}\require{cancel} 2r(4r)+2r(-9s)+11s(4r)+11s(-9s) \\\\= 8r^2-18rs+44rs-99s^2 \\\\= 8r^2+26rs-99s^2 .\end{array}
|
2018-11-20 07:42:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991305470466614, "perplexity": 10439.85934308173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746301.92/warc/CC-MAIN-20181120071442-20181120093442-00134.warc.gz"}
|
https://tech.zsoldier.com/2011/03/
|
## Posts
Showing posts from March, 2011
### Emulex OneCommand Plugin for vCenter
Summary: This plug in adds an extra tab to vCenter that lets you manage your Emulex HBA/UCNA’s. From setting drivers parameters to allowing you to apply firmware updates to your HBA/UCNA card. To do so, you must install the CIM package onto your host and have a server for the Emulex OneCommand Software Plug-in. PreReqs: Windows Server VM(suggest 2008 x64 R1 or R2) Software Plug-in CIM Provider 3.2.x + vSphere CLI or vMA <— Needed to remotely install CIM provider bundle vCenter 4.1+ Details: You can install the Emulex Software Plug-in on the vCenter server, but I suggest keeping all modules separate from vCenter if possible. Run the Software Plug-in installation. Install the CIM Provider on all hosts you would like to have onecommand management capabilities. Place ESXi server in maintenance mode. vCLI command is as follows: vihostupdate.pl --server nameofyouresxserver --install --bundle \\path\to\elx-esx4.1.0-emulex-cim-provider-3.2.30.1-offline_bundle-364582.zi
### Outlook 2011 Reply, New E-mail, etc. not working...
Summary: Click to create an new e-mail or hitting reply cause nothing to happen. This seems to occur after updating to Safari 5.0.4. Workaround: Uninstall Safari and reinstall it. Download the current Safari version. Do not install it yet, it will attempt to run. http://www.apple.com/safari/download/ Drag and drop the Safari icon from the applications folder to your trash and empty it. Now go to your downloads folder and find the file named something like "Safari5.0.4".dmg Double-click it. Then double click the pkg file that appears in a new window. Follow the wizard, then reboot.
### NFS Mapping to ESX and why you should use PowerCLI, not vCenter.
Summary: Inserting a new host in an ESX Cluster and mapping an NFS share to it produced an interesting result. vCenter showed 2 datastores w/ similar names. 1 w/ 48 hosts mapped to it and another w/ my 1 new host mapped to it. The names were the same albeit one having a "(1)" appended to it. Attempting to change the new one would result in error stating that a datastore already exists w/ that name. Details: Something as minor as an extra '/', servername.local vs. servername, or IP instead of name, makes ESX treat those mapped NFS shares as completely different even if they all point to the same end point. Example: Both of these are valid paths to a NFS share in ESX and they both connect to the same resource, but because of one having a FQDN ESX treats them as different shares completely. netfs://servername.local/share netfs://servername/share Resolution: Use PowerCLI to map NFS shares (or host profiles if you're licensed for them) 1: #Use an
Summary: Was looking for the method to do this, it’s pretty simple, but everyone had very detailed posts of other things they were doing (checks and balances). Here is the key snippet to do it: 1: #This simply gets you the object to work with. You can replace 2: #'nameofcomputer' with a variable to target more systems. 3: $admin = [adsi]( "WinNT://" + nameofcomputer + "/administrator, user" ) 4: 5: #This invokes the set password method and changed the administrator password 6: #to Whatever1 7:$admin.psbase.invoke( "SetPassword" , "Whatever1" ) If you want more robust script that records data of it’s changes see here .
|
2020-11-25 15:51:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23215383291244507, "perplexity": 9603.399295907935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141183514.25/warc/CC-MAIN-20201125154647-20201125184647-00471.warc.gz"}
|
https://python.quantecon.org/cake_eating_problem.html
|
# Cake Eating I: Introduction to Optimal Saving¶
## Overview¶
In this lecture we introduce a simple “cake eating” problem.
The intertemporal problem is: how much to enjoy today and how much to leave for the future?
Although the topic sounds trivial, this kind of trade-off between current and future utility is at the heart of many savings and consumption problems.
Once we master the ideas in this simple environment, we will apply them to progressively more challenging—and useful—problems.
The main tool we will use to solve the cake eating problem is dynamic programming.
In what follows, we require the following imports:
In [1]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
## The Model¶
We consider an infinite time horizon $t=0, 1, 2, 3..$
At $t=0$ the agent is given a complete cake with size $\bar x$.
Let $x_t$ denote the size of the cake at the beginning of each period, so that, in particular, $x_0=\bar x$.
We choose how much of the cake to eat in any given period $t$.
After choosing to consume $c_t$ of the cake in period $t$ there is
$$x_{t+1} = x_t - c_t$$
left in period $t+1$.
Consuming quantity $c$ of the cake gives current utility $u(c)$.
We adopt the CRRA utility function
$$u(c) = \frac{c^{1-\gamma}}{1-\gamma} \qquad (\gamma \gt 0, \, \gamma \neq 1) \tag{1}$$
In Python this is
In [2]:
def u(c, γ):
return c**(1 - γ) / (1 - γ)
Future cake consumption utility is discounted according to $\beta\in(0, 1)$.
In particular, consumption of $c$ units $t$ periods hence has present value $\beta^t u(c)$
The agent’s problem can be written as
$$\max_{\{c_t\}} \sum_{t=0}^\infty \beta^t u(c_t) \tag{2}$$
subject to
$$x_{t+1} = x_t - c_t \quad \text{and} \quad 0\leq c_t\leq x_t \tag{3}$$
for all $t$.
A consumption path $\{c_t\}$ satisfying (3) where $x_0 = \bar x$ is called feasible.
In this problem, the following terminology is standard:
• $x_t$ is called the state variable
• $c_t$ is called the control variable or the action
• $\beta$ and $\gamma$ are parameters
The key trade-off in the cake-eating problem is this:
• Delaying consumption is costly because of the discount factor.
• But delaying some consumption is also attractive because $u$ is concave.
The concavity of $u$ implies that the consumer gains value from consumption smoothing, which means spreading consumption out over time.
This is because concavity implies diminishing marginal utility—a progressively smaller gain in utility for each additional spoonful of cake consumed within one period.
### Intuition¶
The reasoning given above suggests that the discount factor $\beta$ and the curvature parameter $\gamma$ will play a key role in determining the rate of consumption.
Here’s an educated guess as to what impact these parameters will have.
First, higher $\beta$ implies less discounting, and hence the agent is more patient, which should reduce the rate of consumption.
Second, higher $\gamma$ implies that marginal utility $u'(c) = c^{-\gamma}$ falls faster with $c$.
This suggests more smoothing, and hence a lower rate of consumption.
In summary, we expect the rate of consumption to be decreasing in both parameters.
Let’s see if this is true.
## The Value Function¶
The first step of our dynamic programming treatment is to obtain the Bellman equation.
The next step is to use it to calculate the solution.
### The Bellman Equation¶
To this end, we let $v(x)$ be maximum lifetime utility attainable from the current time when $x$ units of cake are left.
That is,
$$v(x) = \max \sum_{t=0}^{\infty} \beta^t u(c_t) \tag{4}$$
where the maximization is over all paths $\{ c_t \}$ that are feasible from $x_0 = x$.
At this point, we do not have an expression for $v$, but we can still make inferences about it.
For example, as was the case with the McCall model, the value function will satisfy a version of the Bellman equation.
In the present case, this equation states that $v$ satisfies
$$v(x) = \max_{0\leq c \leq x} \{u(c) + \beta v(x-c)\} \quad \text{for any given } x \geq 0. \tag{5}$$
The intuition here is essentially the same it was for the McCall model.
Choosing $c$ optimally means trading off current vs future rewards.
Current rewards from choice $c$ are just $u(c)$.
Future rewards given current cake size $x$, measured from next period and assuming optimal behavior, are $v(x-c)$.
These are the two terms on the right hand side of (5), after suitable discounting.
If $c$ is chosen optimally using this trade off strategy, then we obtain maximal lifetime rewards from our current state $x$.
Hence, $v(x)$ equals the right hand side of (5), as claimed.
### An Analytical Solution¶
It has been shown that, with $u$ as the CRRA utility function in (1), the function
$$v^*(x_t) = \left( 1-\beta^{1/\gamma} \right)^{-\gamma}u(x_t) \tag{6}$$
solves the Bellman equation and hence is equal to the value function.
You are asked to confirm that this is true in the exercises below.
The solution (6) depends heavily on the CRRA utility function.
In fact, if we move away from CRRA utility, usually there is no analytical solution at all.
In other words, beyond CRRA utility, we know that the value function still satisfies the Bellman equation, but we do not have a way of writing it explicitly, as a function of the state variable and the parameters.
We will deal with that situation numerically when the time comes.
Here is a Python representation of the value function:
In [3]:
def v_star(x, β, γ):
return (1 - β**(1 / γ))**(-γ) * u(x, γ)
And here’s a figure showing the function for fixed parameters:
In [4]:
β, γ = 0.95, 1.2
x_grid = np.linspace(0.1, 5, 100)
fig, ax = plt.subplots()
ax.plot(x_grid, v_star(x_grid, β, γ), label='value function')
ax.set_xlabel('$x$', fontsize=12)
ax.legend(fontsize=12)
plt.show()
## The Optimal Policy¶
Now that we have the value function, it is straightforward to calculate the optimal action at each state.
We should choose consumption to maximize the right hand side of the Bellman equation (5).
$$c^* = \arg \max_{c} \{u(c) + \beta v(x - c)\}$$
We can think of this optimal choice as a function of the state $x$, in which case we call it the optimal policy.
We denote the optimal policy by $\sigma^*$, so that
$$\sigma^*(x) := \arg \max_{c} \{u(c) + \beta v(x - c)\} \quad \text{for all } x$$
If we plug the analytical expression (6) for the value function into the right hand side and compute the optimum, we find that
$$\sigma^*(x) = \left( 1-\beta^{1/\gamma} \right) x \tag{7}$$
Now let’s recall our intuition on the impact of parameters.
We guessed that the consumption rate would be decreasing in both parameters.
This is in fact the case, as can be seen from (7).
Here’s some plots that illustrate.
In [5]:
def c_star(x, β, γ):
return (1 - β ** (1/γ)) * x
Continuing with the values for $\beta$ and $\gamma$ used above, the plot is
In [6]:
fig, ax = plt.subplots()
ax.plot(x_grid, c_star(x_grid, β, γ), label='default parameters')
ax.plot(x_grid, c_star(x_grid, β + 0.02, γ), label=r'higher $\beta$')
ax.plot(x_grid, c_star(x_grid, β, γ + 0.2), label=r'higher $\gamma$')
ax.set_ylabel(r'$\sigma(x)$')
ax.set_xlabel('$x$')
ax.legend()
plt.show()
## The Euler Equation¶
In the discussion above we have provided a complete solution to the cake eating problem in the case of CRRA utility.
There is in fact another way to solve for the optimal policy, based on the so-called Euler equation.
Although we already have a complete solution, now is a good time to study the Euler equation.
This is because, for more difficult problems, this equation provides key insights that are hard to obtain by other methods.
### Statement and Implications¶
The Euler equation for the present problem can be stated as
$$u^{\prime} (c^*_{t})=\beta u^{\prime}(c^*_{t+1}) \tag{8}$$
This is necessary condition for the optimal path.
It says that, along the optimal path, marginal rewards are equalized across time, after appropriate discounting.
This makes sense: optimality is obtained by smoothing consumption up to the point where no marginal gains remain.
We can also state the Euler equation in terms of the policy function.
A feasible consumption policy is a map $x \mapsto \sigma(x)$ satisfying $0 \leq \sigma(x) \leq x$.
The last restriction says that we cannot consume more than the remaining quantity of cake.
A feasible consumption policy $\sigma$ is said to satisfy the Euler equation if, for all $x > 0$,
$$u^{\prime}( \sigma(x) ) = \beta u^{\prime} (\sigma(x - \sigma(x))) \tag{9}$$
Evidently (9) is just the policy equivalent of (8).
It turns out that a feasible policy is optimal if and only if it satisfies the Euler equation.
In the exercises, you are asked to verify that the optimal policy (7) does indeed satisfy this functional equation.
Note
A functional equation is an equation where the unknown object is a function.
For a proof of sufficiency of the Euler equation in a very general setting, see proposition 2.2 of [MST20].
The following arguments focus on necessity, explaining why an optimal path or policy should satisfy the Euler equation.
### Derivation I: A Perturbation Approach¶
Let’s write $c$ as a shorthand for consumption path $\{c_t\}_{t=0}^\infty$.
The overall cake-eating maximization problem can be written as
$$\max_{c \in F} U(c) \quad \text{where } U(c) := \sum_{t=0}^\infty \beta^t u(c_t)$$
and $F$ is the set of feasible consumption paths.
We know that differentiable functions have a zero gradient at a maximizer.
So the optimal path $c^* := \{c^*_t\}_{t=0}^\infty$ must satisfy $U'(c^*) = 0$.
Note
If you want to know exactly how the derivative $U'(c^*)$ is defined, given that the argument $c^*$ is a vector of infinite length, you can start by learning about Gateaux derivatives. However, such knowledge is not assumed in what follows.
In other words, the rate of change in $U$ must be zero for any infinitesimally small (and feasible) perturbation away from the optimal path.
So consider a feasible perturbation that reduces consumption at time $t$ to $c^*_t - h$ and increases it in the next period to $c^*_{t+1} + h$.
Consumption does not change in any other period.
We call this perturbed path $c^h$.
$$\lim_{h \to 0} \frac{U(c^h) - U(c^*)}{h} = U'(c^*) = 0$$
Recalling that consumption only changes at $t$ and $t+1$, this becomes
$$\lim_{h \to 0} \frac{\beta^t u(c^*_t - h) + \beta^{t+1} u(c^*_{t+1} + h) - \beta^t u(c^*_t) - \beta^{t+1} u(c^*_{t+1}) }{h} = 0$$
After rearranging, the same expression can be written as
$$\lim_{h \to 0} \frac{u(c^*_t - h) - u(c^*_t) }{h} + \lim_{h \to 0} \frac{ \beta u(c^*_{t+1} + h) - u(c^*_{t+1}) }{h} = 0$$
or, taking the limit,
$$- u'(c^*_t) + \beta u'(c^*_{t+1}) = 0$$
This is just the Euler equation.
### Derivation II: Using the Bellman Equation¶
Another way to derive the Euler equation is to use the Bellman equation (5).
Taking the derivative on the right hand side of the Bellman equation with respect to $c$ and setting it to zero, we get
$$u^{\prime}(c)=\beta v^{\prime}(x - c) \tag{10}$$
To obtain $v^{\prime}(x - c)$, we set $g(c,x) = u(c) + \beta v(x - c)$, so that, at the optimal choice of consumption,
$$v(x) = g(c,x) \tag{11}$$
Differentiating both sides while acknowledging that the maximizing consumption will depend on $x$, we get
$$v' (x) = \frac{\partial }{\partial c} g(c,x) \frac{\partial c}{\partial x} + \frac{\partial }{\partial x} g(c,x)$$
When $g(c,x)$ is maximized at $c$, we have $\frac{\partial }{\partial c} g(c,x) = 0$.
Hence the derivative simplifies to
$$v' (x) = \frac{\partial g(c,x)}{\partial x} = \frac{\partial }{\partial x} \beta v(x - c) = \beta v^{\prime}(x - c) \tag{12}$$
(This argument is an example of the Envelope Theorem.)
But now an application of (10) gives
$$u^{\prime}(c) = v^{\prime}(x) \tag{13}$$
Thus, the derivative of the value function is equal to marginal utility.
Combining this fact with (12) recovers the Euler equation.
## Exercises¶
### Exercise 1¶
How does one obtain the expressions for the value function and optimal policy given in (6) and (7) respectively?
The first step is to make a guess of the functional form for the consumption policy.
So suppose that we do not know the solutions and start with a guess that the optimal policy is linear.
In other words, we conjecture that there exists a positive $\theta$ such that setting $c_t^*=\theta x_t$ for all $t$ produces an optimal path.
Starting from this conjecture, try to obtain the solutions (6) and (7).
In doing so, you will need to use the definition of the value function and the Bellman equation.
## Solutions¶
### Exercise 1¶
We start with the conjecture $c_t^*=\theta x_t$, which leads to a path for the state variable (cake size) given by
$$x_{t+1}=x_t(1-\theta)$$
Then $x_t = x_{0}(1-\theta)^t$ and hence
\begin{aligned} v(x_0) & = \sum_{t=0}^{\infty} \beta^t u(\theta x_t)\\ & = \sum_{t=0}^{\infty} \beta^t u(\theta x_0 (1-\theta)^t ) \\ & = \sum_{t=0}^{\infty} \theta^{1-\gamma} \beta^t (1-\theta)^{t(1-\gamma)} u(x_0) \\ & = \frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}u(x_{0}) \end{aligned}
From the Bellman equation, then,
\begin{aligned} v(x) & = \max_{0\leq c\leq x} \left\{ u(c) + \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot u(x-c) \right\} \\ & = \max_{0\leq c\leq x} \left\{ \frac{c^{1-\gamma}}{1-\gamma} + \beta\frac{\theta^{1-\gamma}} {1-\beta(1-\theta)^{1-\gamma}} \cdot\frac{(x-c)^{1-\gamma}}{1-\gamma} \right\} \end{aligned}
From the first order condition, we obtain
$$c^{-\gamma} + \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot(x-c)^{-\gamma}(-1) = 0$$
or
$$c^{-\gamma} = \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot(x-c)^{-\gamma}$$
With $c = \theta x$ we get
$$\left(\theta x\right)^{-\gamma} = \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot(x(1-\theta))^{- \gamma}$$
Some rearrangement produces
$$\theta = 1-\beta^{\frac{1}{\gamma}}$$
This confirms our earlier expression for the optimal policy:
$$c_t^* = \left(1-\beta^{\frac{1}{\gamma}}\right)x_t$$
Substituting $\theta$ into the value function above gives
$$v^*(x_t) = \frac{\left(1-\beta^{\frac{1}{\gamma}}\right)^{1-\gamma}} {1-\beta\left(\beta^{\frac{{1-\gamma}}{\gamma}}\right)} u(x_t) \\$$
Rearranging gives
$$v^*(x_t) = \left(1-\beta^\frac{1}{\gamma}\right)^{-\gamma}u(x_t)$$
Our claims are now verified.
• Share page
|
2020-08-04 19:19:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9402186274528503, "perplexity": 682.7971111294327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735882.86/warc/CC-MAIN-20200804191142-20200804221142-00411.warc.gz"}
|
https://www.nature.com/articles/s41467-018-07489-z?error=cookies_not_supported&code=750ce580-f341-47fc-bfac-574444f8e2b4
|
# Geometric phase magnetometry using a solid-state spin
## Abstract
A key challenge of magnetometry lies in the simultaneous optimization of magnetic field sensitivity and maximum field range. In interferometry-based magnetometry, a quantum two-level system acquires a dynamic phase in response to an applied magnetic field. However, due to the 2π periodicity of the phase, increasing the coherent interrogation time to improve sensitivity reduces field range. Here we introduce a route towards both large magnetic field range and high sensitivity via measurements of the geometric phase acquired by a quantum two-level system. We experimentally demonstrate geometric-phase magnetometry using the electronic spin associated with the nitrogen vacancy (NV) color center in diamond. Our approach enables unwrapping of the 2π phase ambiguity, enhancing field range by 400 times. We also find additional sensitivity improvement in the nonadiabatic regime, and study how geometric-phase decoherence depends on adiabaticity. Our results show that the geometric phase can be a versatile tool for quantum sensing applications.
## Introduction
The geometric phase1,2 plays a fundamental role in a broad range of physical phenomena3,4,5. Although it has been observed in many quantum platforms6,7,8,9 and is known to be robust against certain types of noise10,11, geometric phase applications are somewhat limited, including certain protocols for quantum simulation12,13 and computation14,15,17. However, when applied to quantum sensing, e.g., of magnetic fields, unique aspects of the geometric phase can be exploited to allow realization of both good magnetic field sensitivity and large field range in one measurement protocol. This capability is in contrast to conventional dynamic-phase magnetometry, where there is a trade-off between sensitivity and field range. In dynamic-phase magnetometry using a two-level system (e.g., two spin states), the amplitude of an unknown magnetic field B can be estimated by determining the relative shift between two energy levels induced by that field (Methods). A commonly used approach is to measure the dynamic phase accumulated in a Ramsey interferometry protocol. An initial resonant π/2 pulse prepares the system in a superposition of the two levels. In the presence of an external static magnetic field B along the quantization axis, the system evolves under the Hamiltonian H=ħγBσz/2, where γ denotes the gyromagnetic ratio and σz is the z-component of the Pauli spin vector. During the interaction time T (limited by the spin dephasing time T2*), the Bloch vector s(t) depicted on the Bloch sphere precesses around the fixed Larmor vector R = (0, 0, γB), and acquires a dynamic phase ϕd = γBT. The next π/2 pulse maps this phase onto a population difference P = cos ϕd, which can be measured to determine ϕd and hence the magnetic field B (Supplementary Note 1).
Such dynamic-phase magnetometry possesses two well-known shortcomings. First, the sinusoidal variation of the population difference with magnetic field leads to a 2π phase ambiguity in interpretation of the measurement signal and hence determination of B. Specifically, since the dynamic phase is linearly proportional to the magnetic field, for any measured signal Pmeas (throughout the text, this value corresponds to (ΔFL/FL) × k, where k is a constant that depends on NV readout contrast), there are infinite magnetic field ambiguities: Bm = (γT)−1 (cos−1Pmeas + 2πm), where m = 0, ±1, ±2 …±∞. Thus, the range of magnetic field amplitudes that one can determine without modulo 2π phase ambiguity is limited to one cycle of oscillation: Bmax 1/T (Supplementary Note 2, Supplementary Figure 5). Second, there is a trade-off between magnetic field sensitivity and field range, as the interaction time also restricts the shot-noise-limited magnetic field sensitivity: η 1/T1/2. Consequently, an improvement in field range via shorter T comes at the cost of a degradation in sensitivity (Supplementary Note 3). Use of a closed-loop lock-in type measurement18, quantum phase estimation algorithm19,20, or non-classical states21,22 can alleviate these disadvantages; however, such approaches require either a continuous measurement scheme with limited sensitivity, large resource overhead (additional experimental time) or realization of long-lived entangled or squeezed states.
In the present work, we use the electronic spin associated with a single nitrogen vacancy (NV) color center in diamond to demonstrate key advantages of geometric-phase magnetometry: (i) it resolves the 2π phase ambiguity limiting dynamic-phase magnetometry; and (ii) it decouples magnetic field range and sensitivity, leading to a 400-fold enhancement in field range at constant sensitivity in our experiment. We also show additional improvement of magnetic field sensitivity in the nonadiabatic regime of mixed geometric and dynamic-phase evolution. By employing a power spectral density analysis23, we find that adiabaticity plays an important role in controlling the degree of coupling to environmental noise and hence the spin coherence timescale.
## Results
### Geometric-phase magnetometry protocol
To implement geometric-phase magnetometry, we use a modified version of an experimental protocol (“Berry sequence”) previously applied to a superconducting qubit9. In our realization, the NV spin sensor is placed in a superposition state by a π/2 pulse, where the driving frequency of the π/2 pulse is chosen to be resonant with the NV ms = 0 ↔ ms= + 1 transition at constant bias field Bbias (≈9.6 mT in our experiment) aligned with the NV axis. A small signal field B (~100 µT in our experiment) is then applied parallel to Bbias, and the NV spin acquires a geometric phase due to off-resonant microwave driving with control parameters cycled along a closed path as illustrated in Fig. 1b (Methods). Under the rotating wave approximation, the effective two-level Hamiltonian is given by:
$$H = \frac{\hbar }{2}\left( {\Omega \cos (\rho )\sigma _x + \Omega \sin (\rho )\sigma _y + \gamma B\sigma _z} \right).$$
(1)
Here, Ω is the NV spin Rabi frequency for the microwave driving field, ρ is the phase of the driving field, and σ = (σx, σy, σz) is the Pauli spin vector. By sweeping the phase, the Larmor vector R(t) = R*(sinθ cosρ, sinθ cosρ, cosθ), where cosθ = γB/(Ω2 + (γB)2)1/2, R = (Ω2 +(γB)2)1/2, rotates around the z-axis. The Bloch vector s(t) then undergoes precession around this rotating Larmor vector (for detailed picture of the measurement protocol, see Supplementary Fig 2). If the rotation is adiabatic (i.e., adiabaticity parameter $$A \equiv \dot \rho \sin \theta /2R \ll 1$$), then the system acquires a geometric phase proportional to the product of (i) the solid angle Θ = 2π(1 − cosθ) subtended by the Bloch vector trajectory and (ii) the number of complete rotations N of the Bloch vector around the Larmor vector in the rotating frame defined by the frequency of the initial π/2 pulse. We apply this Bloch vector rotation twice during the interaction time T, with alternating direction separated by a π pulse, which cancels the accumulated dynamic phase and doubles the geometric phase: ϕg = 2 (Supplementary Note 1). A final π/2 pulse allows this geometric phase to be determined from standard fluorescence readout of the NV spin-state population difference:
$${{P}}_{{\mathrm{meas}}}\left( B \right) = \cos \left[ {4\pi N\left( {1 - \frac{{\gamma B}}{{\sqrt {\left( {\gamma B} \right)^2 + \Omega ^2} }}} \right)} \right].$$
(2)
This normalized geometric-phase signal (Supplementary Note 1) exhibits chirped oscillation as a function of magnetic field. There are typically only a small number of field ambiguities that give the same signal Pmeas; these can be resolved uniquely by measuring the slope dPmeas/dB (Supplementary Note 2, Supplementary Fig. 5). From the form of Eq. (2) it is evident that at large B, cosine signal approaches to zero like B−2, and the slope goes to zero. Hence, we define the field range as the largest magnetic field value (Bmax) that gives the last oscillation minimum in the signal: BmaxΩ N1/2. Importantly, the field range of geometric-phase magnetometry has no dependence on the interaction time T. If the magnetic field is below Bmax, then one can make a geometric-phase magnetometry measurement with optimal sensitivity ηΩ N−1 T1/2 (Supplementary Note 3).
### Comparison between dynamic- and geometric-phase magnetometry
We implemented both dynamic- and geometric-phase magnetometry using the optically addressable electronic spin of a single NV color center in diamond (Fig. 2a) (Supplementary Figs. 1-3). NV-diamond magnetometers provide high spatial resolution under ambient conditions24,25,26, and have therefore found wide-ranging applications, including in condensed matter physics27,28, the life sciences29,30, and geoscience31. At an applied bias magnetic field of 9.6 mT, the degeneracy of the NV ms = ± 1 levels is lifted. The two-level system used in this work consists of the ground state magnetic sublevels ms = 0 and ms = +1, which can be coherently addressed by applied microwave fields. The hyperfine interaction between the NV electronic spin and the host 14N nuclear spin further splits the levels into three states, each separated by 2.16 MHz. Upon green laser illumination, the NV center exhibits spin-state-dependent fluorescence and optical pumping into ms = 0 after a few microseconds. Thus, one can prepare the spin states and determine the population by measuring the relative fluorescence (see Methods for more details).
First, we performed dynamic-phase magnetometry using a Ramsey sequence to illustrate the 2π phase ambiguity and show how the dependence on interaction time gives rise to a trade-off between field range and magnetic field sensitivity. We recorded the NV fluorescence signal as a function of the interaction time T between the two microwave π/2 pulses (Fig. 1a). Signal contributions from the three hyperfine transitions of the NV spin result in the observed beating behavior seen in Fig. 2b. We fixed the interaction time at T = 0.2, 0.5, 1.0 μs, varied the external magnetic field for each value of T, and observed a periodic fluorescence signal with a 2π phase ambiguity (Fig. 2c). The oscillation period decreased as the interaction time was increased, indicating a reduction in the magnetic field range (i.e., smaller Bmax). In contrast, the magnetic field sensitivity, which depends on the maximum slope of the signal, improved as the interaction time increased. For each value of T, we fit the fluorescence signal to a sinusoid dependent on the applied magnetic field and extracted the oscillation period and slope, which we used to determine the experimental sensitivity and field range. From this procedure, we obtained ηT−0.49(6) and BmaxT−0.96(2), consistent with expectations for dynamic-phase magnetometry and illustrative of the trade-off inherent in optimizing both η and Bmax as a function of interaction time (Supplementary Fig. 7).
Next, we used a Berry sequence to demonstrate two key advantages of geometric-phase magnetometry: i.e., there is neither a 2π phase ambiguity nor a sensitivity/field-range trade-off with respect to interaction time. For fixed adiabatic control parameters of Ω/2π = 5 MHz and N = 3, the observed geometric-phase magnetometry signal Pmeas has no dependence on interaction time T (Fig. 2d). Varying the external magnetic field with fixed interaction times T = 4.0, 6.0, 8.0 μs, Pmeas exhibits identical chirped oscillations for all T values (Fig. 2e), as expected from Eq. (2). From the Pmeas data we extract dPmeas/dB, which allows us to determine the magnetic field uniquely for values within the oscillatory range (Supplementary Note 2), and also to quantify Bmax from the last minimum point of the chirped oscillation (Fig. 2e). Additional measurements of the dependence of Pmeas on the adiabatic control parameters Ω, N, and T (Supplementary Figs. 4, 6) yield the scaling of sensitivity and field range: ηΩ1.2(5)N−0.92(1)T0.46(1) and BmaxΩ0.9(1)N0.52(5)T0.02(1), which is consistent with expectations and shows that geometric-phase magnetometry allows η and Bmax to be independently optimized as a function of interaction time (Supplementary Fig. 7).
In Fig. 3 we compare the measured sensitivity and field range for geometric-phase and dynamic-phase magnetometry. For each point displayed, the sensitivity is measured directly at small B (0.01 ~ 0.1 mT), whereas the field range is calculated from the measured values of N and Ω (for geometric-phase magnetometry) and T (for dynamic-phase magnetometry, with T limited by the dephasing time T2*), following the scaling laws give above. Since geometric-phase magnetometry has three independent control parameters (T, N, and Ω), Bmax can be increased without changing sensitivity by increasing N and Ω while keeping the ratio N/Ω fixed. Such “smart control” allows a tenfold improvement in geometric-phase sensitivity (compared to dynamic-phase measurements) for Bmax ~ 1 mT, and a 400-fold enhancement Bmax at a sensitivity of ~2 μT Hz−1/2. Similarly, the sensitivity can be improved without changing Bmax by decreasing the interaction time, with a limit set by the adiabaticity condition ($$A \equiv \dot \rho \sin \theta /2R \approx N/\Omega T \ll 1$$).
### Geometric-phase magnetometry in nonadiabatic regime
Finally, we explored geometric-phase magnetometry outside the adiabatic limit by performing Berry sequence experiments and varying the adiabaticity parameter by more than two orders of magnitude (from A ≈ 0.01−5). We find good agreement between our measurements and simulations, with an onset of nonadiabatic behavior for A$$\gtrsim$$ 0.2 (Supplementary Figure 8). At each value of the adiabaticity parameter A, we determine the magnetic field sensitivity from the largest slope of the measured magnetometry curve. (The magnetometry curve is the plot of Pmeas obtained as a function of applied magnetic field B.) To compare with the best sensitivity provided by dynamic-phase magnetometry, we fix the interaction time at T$$\approx$$T2*/2 in the nonadiabatic geometric-phase measurements. We find that the sensitivity of geometric-phase magnetometry improves in the nonadiabatic regime, and becomes smaller than the sensitivity from dynamic-phase measurements for A$$\gtrsim$$ 1.0 (Fig. 4a).
To understand this behavior, we recast the sensitivity scaling in terms of the adiabaticity parameter and interaction time, ηA−1T−1/2 and investigated the trade-off between these parameters. (Note that in the nonadiabatic regime the Bloch vector no longer strictly follows the Larmor vector, and thus the sensitivity scaling is not exact.) We performed a spectral density analysis to assess how environmental noise leads to both dynamic- and geometric-phase decoherence, with the relative contribution set by the adiabaticity parameter A, thereby limiting the interaction time T. We take the exponential decay of the NV spin coherence W(T) ~ exp(−χ(T)), characterized by the decoherence function χ(T) given by
$$\chi \left( T \right) = A^2\mathop {\int }\nolimits_{\hskip -5pt 0}^\infty \frac{{{\mathrm d}\omega }}{\pi }S\left( \omega \right)\frac{{F_0\left( {\omega T} \right)}}{{\omega ^2}} + \mathop {\int }\nolimits_{\hskip -5pt 0}^\infty \frac{{{\mathrm d}\omega }}{\pi }S\left( \omega \right)\frac{{F_1\left( {\omega T} \right)}}{{\omega ^2}}.$$
(3)
Here, S(ω) is a spectral density function that describes magnetic noise from the environment; F0(ωT) = 2sin2(ωT/2) is the filter function for geometric-phase evolution in the Berry sequence, which is spectrally similar to a Ramsey sequence, with maximum sensitivity to static and low frequency ($$\lesssim$$1/T) magnetic fields; and F1(ωT) = 8sin4(ωT/4) is the filter function for dynamic-phase evolution in the Berry sequence, which is spectrally similar to a Hahn-echo sequence, with maximum sensitivity to higher frequency ($$\gtrsim$$1/T) magnetic fields (Supplementary Note 4).
### Geometric-phase coherence time
Figure 4b shows examples of the measured decay of the geometric-phase signal (Pmeas) as a function of interaction time T and adiabaticity parameter A. From such data we extract the geometric-phase coherence time T2g by fitting Pmeas ~ exp[−(T/T2g)2]. We observe four regimes of decoherence behavior (Fig. 4c), which can be understood from Eq. (3) and its schematic spectral representation in Fig. 4d. For A < 0.1 (adiabatic regime), dynamic-phase evolution (i.e., Hahn-echo-like behavior) dominates the decoherence function χ(T) and thus T2g ~ T2 ≈ 500 μs. For 0.1 ≤ A < 1.0 (intermediate regime), the coherence time is inversely proportional to the adiabaticity parameter (T2g ~ T2*/A) as geometric-phase evolution (with Ramsey-like dephasing) becomes increasingly significant. For A$$\approx$$ 1.0 (nonadiabatic regime), geometric-phase evolution dominates χ(T) at long times and thus T2g ~ T2* ≈ 50 μs. For $$A \gg 1.0$$ (strongly nonadiabatic limit), the driven rotation of the Larmor vector is expected to average out during a Berry sequence (Fig. 1b) and only the z-component of the Larmor vector remains. Thus, the Berry sequence converges to a Hahn-echo-like sequence and the coherence time is expected to increase to T2 for very large A.
## Discussion
In summary, we demonstrated an approach to NV-diamond magnetometry using geometric-phase measurements, which avoids the trade-off between magnetic field sensitivity and maximum field range that limits traditional dynamic-phase magnetometry. For an example experiment with a single NV, we realize a 400-fold enhancement in static (DC) magnetic field range at constant sensitivity. We also explored geometric-phase magnetometry as a function of adiabaticity, with good agreement between measurements and model simulations. We find that adiabaticity controls the coupling between the NV spin and environmental noise during geometric manipulation, thereby determining the geometric-phase coherence time. Furthermore, we showed that operation in the nonadiabatic regime, where there is mixed geometric- and dynamic-phase evolution, allows magnetic field sensitivity to be better than that of dynamic-phase magnetometry. We expect that geometric-phase AC field sensing will provide similar advantages to dynamic-phase magnetometry, although the experimental protocol (Berry sequence) will need to be adjusted to allow only accumulation of geometric phase due to the AC field. The generality of our geometric-phase technique should make it broadly applicable to precision measurements in many quantum systems, such as trapped ions, ultracold atoms, and other solid-state spins.
## Methods
### NV diamond sample
The diamond chip used in this experiment is an electronic-grade single-crystal cut along the [110] direction into a volume of 4 × 4 × 0.5 mm3 (Element 6 Corporation). A high-purity chemical vapor deposition layer with 99.99% 12C near the surface contains preferentially oriented NV centers. The estimated N and NV densities are 1×1015 and 3×1012 cm−3, respectively. The ground state of an NV center consists of an electronic spin triplet with the ms = 0 and ±1 Zeeman sublevels split by 2π × 2.87 GHz due to spin−spin interactions. Excitation with green (532 nm) laser light induces spin-preserving optical cycles between the electronic ground and excited states, entailing red fluorescence emission (637−800 nm). There is also a nonradiative decay channel from the ms = ±1 excited states to the ms = 0 ground state via metastable singlet states with a branching ratio of ~30%. Thus, the amount of red fluorescence from the NV center is a marker for the z-component of the spin-state, and continuous laser excitation prepares the spin into the ms = 0 state over a few microseconds. The spin qubit used in this work consists of the ms = +1 and 0 ground states. Near-resonant microwave irradiation allows coherent manipulation of the ground spin states. The NV spin resonance lifetimes are T1 ~ 3 ms, T2 ~ 500 µs, and T2* ~ 50 µs.
### Confocal scanning laser microscope
Geometric-phase magnetometry using single NV centers is conducted using a home-built confocal scanning laser microscope (Supplementary Fig. 1). A three-axis motorized stage (Micos GmbH) moves the diamond sample in three dimensions. An acousto-optic modulator (Isomet Corporation) operated at 80 MHz allows time-gating of a 400 mW, 532 nm diode-pumped solid-state laser (Changchun New Industries). An oil-immersion objective (×100, 1.3 NA, Nikon CFI Plan Fluor) focuses the green laser pulses onto an NV center. NV red fluorescence passes through the same objective, through a single-mode fiber cable with a mode-field-diameter of ~5 μm (Thorlabs), and then onto a silicon avalanche photodetector (Perkin Elmer SPCM-ARQH-12). The NV spin initialization and readout pulses are 3 µs and 0.5 µs, respectively. The change of fluorescence signal is calculated from ΔFL=FL+FL, where FL± are the fluorescence counts obtained after spin projection using a microwave π/2-pulse along the ±x-axis, respectively. For each measurement, the fluorescence count FL when the spin is in the ms = 0 state is also measured as a reference. The temperature of the confocal scanning laser microscope is monitored by a 10k thermistor (Thorlabs) and stabilized to within 0.05 oC using a 15 W heater controlled with a PID algorithm.
### Hamiltonian parameter control system
The Rabi frequency (Ω) and phase (ρ) of the microwave drive field, as well as the applied magnetic field to be sensed (B), are key variables of this work. It is thus crucial to calibrate the microwave driving system and magnetic field control system beforehand. Microwave pulses for NV geometric phase magnetometry are generated by mixing a high frequency (~3 GHz) local oscillator signal and a low frequency (~50 MHz) arbitrary waveform signal using an IQ mixer (Supplementary Fig. 1). The Rabi frequency and microwave phase are controlled by the output voltage of an arbitrary waveform generator (Tektronix AWG5014C) (Supplementary Fig. 2). The microwave pulses are amplified (Mini-circuits ZHL-16W-43-S+) and sent through a gold coplanar waveguide (10 µm gap, 1 µm height) fabricated on a glass coverslip by photo-lithography. An external magnetic field for magnetometry demonstration is created by sending an electric current through a copper electromagnetic coil (4 mm diameter, 0.2 mm thick, n = 40 turns, R = 0.25 Ω) placed h = 0.5 mm above the diamond surface. The electric current is provided by a high-stability DC voltage controller (Agilent E3640A). To enable fine scan of the electric current with limited voltage resolution, another resistor with 150 Ω is added in series. Thus, a DC power supply voltage of 3 V approximately corresponds to I = 0.02 A, which creates an external field of B = μ0nI/4πh ~ 16 G. One can determine the change of the external magnetic field as a function of DC power supply voltage ΔB(V) by measuring the shift of the resonance peak Δf in the NV electron spin resonance spectrum using Δf = γΔB. The result is ΔB/V= 0.50 ± 0.01 G V−1 (Supplementary Fig. 3). Joule heating produced by the coil is P = I2R ~ 10−4 W. The mass and heat capacity of the coil are about 0.15 g and 0.06 J K−1, respectively. Thus, the temperature rise is at most 2 mK s−1. Since the temperature coefficient of the fractional resistivity change for copper is 0.00386 K−132, the change of resistance due to Joule heating is negligible.
### Numerical methods for geometric phase simulation
All simulations of NV spin evolution in this work are carried out by computing the time-ordered time evolution operator at each time step.
$$U\left( {t_{\mathrm i},t_{\mathrm f}} \right) = \hat T\left\{ {\exp \left( { - i\mathop {\int }\nolimits_{\hskip -5pt t_{\mathrm i}}^{t_{\mathrm f}} H\left( t \right){\mathrm d}t} \right)} \right\} = \mathop {\prod }\limits_{j = 1}^N \exp \left[ { - i{\mathrm{\Delta }}tH\left( {t_j} \right)} \right],$$
(4)
where ti and tf are the initial and final time, respectively, $$\hat T$$ is the time-ordering operator, Δt is the time step size of the simulation, N=(tf − ti)/Δt is the number of time step, and H(t) is the time-dependent Hamiltonian (Eq. (1)). In the simulation, we used Δt=1 ns step size which is sufficiently small in the rotating frame. The algorithm is implemented with MATLAB®.
### Data and code availability
The data and numerical simulation code that support the findings of this study are available from the corresponding author upon reasonable request.
## References
1. 1.
Berry, M. V. Quantal phase factors accompanying adiabatic changes. Proc. R. Soc. Lond. A 392, 45–57 (1984).
2. 2.
Wilczek, F. & Zee, A. Appearance of gauge structure in simple dynamical systems. Phys. Rev. Lett. 52, 2111–2114 (1984).
3. 3.
Hannay, J. H. Angle variable holonomy in adiabatic excursion of an integrable Hamiltonian. J. Phys. A 18, 221–223 (1985).
4. 4.
Thouless, D. J., Kohmoto, M., Nightingale, M. P. & den Nijs, M. Quantized Hall conductance in a two-dimensional periodic potential. Phys. Rev. Lett. 49, 405–408 (1982).
5. 5.
Haldane, F. D. M. Model for a quantum Hall Effect without Landau levels: condensed-matter realization of the ‘parity’ anomaly. Phys. Rev. Lett. 61, 2015–2018 (1988).
6. 6.
Zhang, Y., Tan, Y.-W., Stormer, L. & Kim, P. Experimental observation of the quantum Hall effect and Berry’s phase in graphene. Nature 438, 201 (2005).
7. 7.
Tomita, A. & Chiao, R. Y. Observation of Berry’s topological phase by use of an optical fiber. Phys. Rev. Lett. 57, 937 (1986).
8. 8.
Suter, D., Mueller, K. T. & Pines, A. Study of the Aharonov−Anandan quantum phase by NMR interferometry. Phys. Rev. Lett. 60, 1218 (1988).
9. 9.
Leek, P. J. et al. Observation of Berry’s phase in a solid-state qubit. Science 318, 1889–1892 (2007).
10. 10.
De Chiara, G. & Palma, G. M. Berry phase for a spin 1/2 particle in a classical fluctuating field. Phys. Rev. Lett. 91, 090404 (2003).
11. 11.
Filipp, S. et al. Experimental demonstration of the stability of Berry’s phase for a spin-1/2 particle. Phys. Rev. Lett. 102, 030404 (2009).
12. 12.
Lin, Y. J. et al. Synthetic magnetic fields for ultracold neutral atoms. Nature 462, 628–632 (2009).
13. 13.
Jotzu, G. et al. Experimental realization of the topological Haldane model with ultracold fermions. Nature 515, 237–240 (2014).
14. 14.
Zanardi, P. & Rosetti, M. Holonomic quantum computation. Phys. Lett. A 264, 94 (1999).
15. 15.
Leibfried, D. et al. Experimental demonstration of a robust, high-fidelity geometric two ion-qubit phase gate. Nature 422, 412–415 (2003).
16. 16.
Jones, J. A., Vedral, V., Ekert, A. & Castagnoli, G. Geometric quantum computation using nuclear magnetic resonance. Nature 403, 869 (2000).
17. 17.
Zu, C. et al. Experimental realization of universal geometric quantum gates with solid-state spins. Nature 514, 72–75 (2014).
18. 18.
Clevenson, H. et al. Robust high-dynamic-range vector magnetometry with nitrogen-vacancy centers in diamond. Appl. Phys. Lett. 112, 252406 (2018).
19. 19.
Nusran, N. M. et al. High-dynamic range magnetometry with a single electronic spin in diamond. Nat. Nanotechnol. 7, 109–113 (2012).
20. 20.
Bonato, C. et al. Optimized quantum sensing with a single electron spin using real-time adaptive measurements. Nat. Nanotechnol. 11, 247–252 (2016).
21. 21.
Bollinger, J. J. et al. Optimal frequency measurements with maximally correlated states. Phys. Rev. A 54(R), R4649 (1996).
22. 22.
Giovannetti, V., Lloyd, S. & Maccone, L. Quantum-enhanced measurements: beating the standard quantum limit. Science 306, 1330–1336 (2004).
23. 23.
Bar-Gill, N. et al. Suppression of spin-bath dynamics for improved coherence of multi-spin-qubit systems. Nat. Commun. 3, 858 (2012).
24. 24.
Taylor, J. M. et al. High-sensitivity diamond magnetometer with nanoscale resolution. Nat. Phys. 4, 810–816 (2008).
25. 25.
Maze, J. R. et al. Nanoscale magnetic sensing with an individual electronic spin in diamond. Nature 455, 644–647 (2008).
26. 26.
Balasubramanian, G. et al. Nanoscale imaging magnetometry with diamond spins under ambient conditions. Nature 455, 648–651 (2008).
27. 27.
Tetienne, J.-P. et al. Nanoscale imaging and control of domain-wall hopping with a nitrogen vacancy center microscope. Science 344, 1366–1369 (2014).
28. 28.
Du, C. et al. Control and local measurement of the spin chemical potential in a magnetic insulator. Science 357, 195–198 (2017).
29. 29.
Le Sage, D. et al. Optical magnetic imaging of living cells. Nature 496, 486–489 (2013).
30. 30.
Glenn, D. R. et al. Single cell magnetic imaging using a quantum diamond microscope. Nat. Methods 12, 736–738 (2015).
31. 31.
Fu, R. R. et al. Solar nebula magnetic fields recorded in the Semarkona meterorite. Science 346, 6213 (2014).
32. 32.
Weast, R. C. Handbook of Chemistry and Physics (CRC Press, Boca Raton, FL, 1984).
## Acknowledgements
This material is based upon work supported by, or in part by, the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant numbers W911NF1510548 and W911NF1110400. This work was performed in part at the Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infra-structure Network (NNCI), which is supported by the National Science Foundation under NSF award no. 1541959. J.L. was supported by the ILJU Graduate Fellowship. We thank John Barry, Jeff Thompson, Nathalie de Leon, Kristiaan de Greve, and Shimon Kolkowitz for helpful discussions.
## Author information
Authors
### Contributions
K.A., C.B., and R.L.W. conceived and K.A. and J.L. designed the experiments. K.A., J.L., and H.Z. performed the experiments and processed the data. All authors analyzed the results. K.A., J.L., D.R.G. and R.L.W. wrote the manuscript.
### Corresponding author
Correspondence to R. L. Walsworth.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Arai, K., Lee, J., Belthangady, C. et al. Geometric phase magnetometry using a solid-state spin. Nat Commun 9, 4996 (2018). https://doi.org/10.1038/s41467-018-07489-z
• Accepted:
• Published:
• ### Unterschätzte Farbzentren: Defekte als nützliche Reduktionsmittel in Lanthanid‐dotierten lumineszenten Materialien
• Markus Suta
• , Flavie Lavoie‐Cardinal
• & Claudia Wickleder
Angewandte Chemie (2020)
• ### Sensitivity optimization for NV-diamond magnetometry
• John F. Barry
• , Jennifer M. Schloss
• , Erik Bauch
• , Matthew J. Turner
• , Connor A. Hart
• , Linh M. Pham
• & Ronald L. Walsworth
Reviews of Modern Physics (2020)
• ### Monopole field textures in interacting spin systems
• Andreas Eriksson
• & Erik Sjöqvist
Physical Review A (2020)
• ### Observation of a Quantum Phase from Classical Rotation of a Single Spin
• A. A. Wood
• , L. C. L. Hollenberg
• , R. E. Scholten
• & A. M. Martin
Physical Review Letters (2020)
• ### Underestimated Color Centers: Defects as Useful Reducing Agents in Lanthanide‐Activated Luminescent Materials
• Markus Suta
• , Flavie Lavoie‐Cardinal
• & Claudia Wickleder
Angewandte Chemie International Edition (2020)
|
2020-08-14 17:09:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7783802151679993, "perplexity": 2666.4238257445922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00445.warc.gz"}
|
https://math.stackexchange.com/questions/1332953/taking-it-a-step-further-with-a-sum?noredirect=1
|
# Taking it a step further with a sum
So I was watching an "old" video from numberphile about the three square problem. https://youtu.be/m5evLoL0xwg Here is also an image: http://mathforlove.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-21-at-6.59.05-PM.png
It's pretty easy to see that the sum of three angles is 90°, but now I am curious what if we keep going on. What if we had more than 3 suqars and with that more than 3 angles. What would the sum be? Basically I want to find the value of: $\lim\limits_{n \rightarrow \infty} \sum\limits_{k=1}^n \arctan(\frac{1}{\sqrt{k^2+1}})$
If there is an answer I would like to know it. Thank you. P.S. $\arctan$ stands for $\tan^{-1}$
• Please give a summary of the relevant information from the video in your post. You can't expect people to watch a 12-minute youTube clip in order to understand your question. – John Gowers Jun 20 '15 at 19:25
• As it relates to generalizing the three-square problem, it should really just be $1/k$ inside the arctan, not $1/\sqrt{k^2+1}$. (Note, there is a link to the question here from math.stackexchange.com/questions/2800600/… .) – Barry Cipra May 29 '18 at 20:31
The limit does not exist.
Indeed, for $0\le x< \pi$ we have
\begin{align} \tan(x)=\frac{\sin(x)}{\cos(x)}\le\sin(x)\le x \end{align}
Since $\arctan$ is increasing for the appropriate range of values of $x$, we get:
$$x\le\arctan (x)$$
for all $x$.
Now we get, for all $k\ge 1$: \begin{align} \arctan\left(\frac{1}{\sqrt{k^2+1}}\right) & \ge\frac1{\sqrt{k^2+1}}\\ &\ge \frac1{\sqrt{k^2+3k^2}}\\ &=\frac{1}{2k} \end{align}
Since the series $\sum_{k=0}^\infty \frac{1}{k}$ is known to diverge, your series diverges too.
• One of the initial inequalities seems to be backwards: $x\le\tan x$ for $x\in[0,\frac\pi2)$. This can be repaired, though: since arctan is concave on $[0,\infty)$, the secant over $[0,\frac1{\sqrt2}]$ lies under the graph; thus $\arctan x\ge cx$ with $c = \arctan(1/\sqrt2)/(1/\sqrt2)$. The argument can then proceed as before, just with the extra $c$. – user21467 Jun 21 '15 at 21:27
• @StevenTaschuk Feel free to edit it so it's correct. – John Gowers Jun 21 '15 at 21:50
|
2019-10-20 13:52:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.837887167930603, "perplexity": 356.36491113453695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986710773.68/warc/CC-MAIN-20191020132840-20191020160340-00549.warc.gz"}
|
https://plainmath.net/8862/simplify-the-expression-4-3-plus-4x
|
# Simplify the expression 4×3+4x
Simplify the expression $4×3+4x$
You can still ask an expert for help
• Questions are typically answered in as fast as 30 minutes
Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
Nathalie Redfern
Since $4×3=12$,
then
$4×3+4x=12+4x$
The expression cannot be simplified further since 12 and 4x are not like terms so they cannot be combined.
|
2022-08-08 22:11:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9036186337471008, "perplexity": 1838.4485556819734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00090.warc.gz"}
|
https://4gravitons.wordpress.com/tag/string-theory/
|
# Assumptions for Naturalness
Why did physicists expect to see something new at the LHC, more than just the Higgs boson? Mostly, because of something called naturalness.
Naturalness, broadly speaking, is the idea that there shouldn’t be coincidences in physics. If two numbers that appear in your theory cancel out almost perfectly, there should be a reason that they cancel. Put another way, if your theory has a dimensionless constant in it, that constant should be close to one.
(To see why these two concepts are the same, think about a theory where two large numbers miraculously almost cancel, leaving just a small difference. Take the ratio of one of those large numbers to the difference, and you get a very large dimensionless number.)
You might have heard it said that the mass of the Higgs boson is “unnatural”. There are many different physical processes that affect what we measure as the mass of the Higgs. We don’t know exactly how big these effects are, but we do know that they grow with the scale of “new physics” (aka the mass of any new particles we might have discovered), and that they have to cancel to give the Higgs mass we observe. If we don’t see any new particles, the Higgs mass starts looking more and more unnatural, driving some physicists to the idea of a “multiverse”.
If you find parts of this argument hokey, you’re not alone. Critics of naturalness point out that we don’t really have a good reason to favor “numbers close to one”, nor do we have any way to quantify how “bad” a number far from one is (we don’t know the probability distribution, in other words). They critique theories that do preserve naturalness, like supersymmetry, for being increasingly complicated and unwieldy, violating Occam’s razor. And in some cases they act baffled by the assumption that there should be any “new physics” at all.
Some of these criticisms are reasonable, but some are distracting and off the mark. The problem is that the popular argument for naturalness leaves out some important assumptions. These assumptions are usually kept in mind by the people arguing for naturalness (at least the more careful people), but aren’t often made explicit. I’d like to state some of these assumptions. I’ll be framing the naturalness argument in a bit of an unusual (if not unprecedented) way. My goal is to show that some criticisms of naturalness don’t really work, while others still make sense.
I’d like to state the naturalness argument as follows:
1. The universe should be ultimately described by a theory with no free dimensionless parameters at all. (For the experts: the theory should also be UV-finite.)
2. We are reasonably familiar with theories of the sort described in 1., we know roughly what they can look like.
3. If we look at such a theory at low energies, it will appear to have dimensionless parameters again, based on the energy where we “cut off” our description. We understand this process well enough to know what kinds of values these parameters can take, starting from 2.
4. Point 3. can only be consistent with the observed mass of the Higgs if there is some “new physics” at around the scales the LHC can measure. That is, there is no known way to start with a theory like those of 2. and get the observed Higgs mass without new particles.
Point 1. is often not explicitly stated. It’s an assumption, one that sits in the back of a lot of physicists’ minds and guides their reasoning. I’m really not sure if I can fully justify it, it seems like it should be a consequence of what a final theory is.
(For the experts: you’re probably wondering why I’m insisting on a theory with no free parameters, when usually this argument just demands UV-finiteness. I demand this here because I think this is the core reason why we worry about coincidences: free parameters of any intermediate theory must eventually be explained in a theory where those parameters are fixed, and “unnatural” coincidences are those we don’t expect to be able to fix in this way.)
Point 2. may sound like a stretch, but it’s less of one than you might think. We do know of a number of theories that have few or no dimensionless parameters (and that are UV-finite), they just don’t describe the real world. Treating these theories as toy models, we can hopefully get some idea of how theories like this should look. We also have a candidate theory of this kind that could potentially describe the real world, M theory, but it’s not fleshed out enough to answer these kinds of questions definitively at this point. At best it’s another source of toy models.
Point 3. is where most of the technical arguments show up. If someone talking about naturalness starts talking about effective field theory and the renormalization group, they’re probably hashing out the details of point 3. Parts of this point are quite solid, but once again there are some assumptions that go into it, and I don’t think we can say that this point is entirely certain.
Once you’ve accepted the arguments behind points 1.-3., point 4. follows. The Higgs is unnatural, and you end up expecting new physics.
Framed in this way, arguments about the probability distribution of parameters are missing the point, as are arguments from Occam’s razor.
The point is not that the Standard Model has unlikely parameters, or that some in-between theory has unlikely parameters. The point is that there is no known way to start with the kind of theory that could be an ultimate description of the universe and end up with something like the observed Higgs and no detectable new physics. Such a theory isn’t merely unlikely, if you take this argument seriously it’s impossible. If your theory gets around this argument, it can be as cumbersome and Occam’s razor-violating as it wants, it’s still a better shot than no possible theory at all.
In general, the smarter critics of naturalness are aware of this kind of argument, and don’t just talk probabilities. Instead, they reject some combination of point 2. and point 3.
This is more reasonable, because point 2. and point 3. are, on some level, arguments from ignorance. We don’t know of a theory with no dimensionless parameters that can give something like the Higgs with no detectable new physics, but maybe we’re just not trying hard enough. Given how murky our understanding of M theory is, maybe we just don’t know enough to make this kind of argument yet, and the whole thing is premature. This is where probability can sneak back in, not as some sort of probability distribution on the parameters of physics but just as an estimate of our own ability to come up with new theories. We have to guess what kinds of theories can make sense, and we may well just not know enough to make that guess.
One thing I’d like to know is how many critics of naturalness reject point 1. Because point 1. isn’t usually stated explicitly, it isn’t often responded to explicitly either. The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement. I’m curious whether they actually think this, or whether I’m misreading them.
There’s a general point to be made here about framing. Suppose that tomorrow someone figures out a way to start with a theory with no dimensionless parameters and plausibly end up with a theory that describes our world, matching all existing experiments. (People have certainly been trying.) Does this mean naturalness was never a problem after all? Or does that mean that this person solved the naturalness problem?
Those sound like very different statements, but it should be clear at this point that they’re not. In principle, nothing distinguishes them. In practice, people will probably frame the result one way or another based on how interesting the solution is.
If it turns out we were missing something obvious, or if we were extremely premature in our argument, then in some sense naturalness was never a real problem. But if we were missing something subtle, something deep that teaches us something important about the world, then it should be fair to describe it as a real solution to a real problem, to cite “solving naturalness” as one of the advantages of the new theory.
If you ask for my opinion? You probably shouldn’t, I’m quite far from an expert in this corner of physics, not being a phenomenologist. But if you insist on asking anyway, I suspect there probably is something wrong with the naturalness argument. That said, I expect that whatever we’re missing, it will be something subtle and interesting, that naturalness is a real problem that needs to really be solved.
# How to Get a “Minimum Scale” Without Pixels
Zoom in, and the world gets stranger. Down past atoms, past protons and neutrons, far past the smallest scales we can probe at the Large Hadron Collider, we get to the scale at which quantum gravity matters: the Planck scale.
Weird things happen at the Planck scale. Space and time stop making sense. Read certain pop science articles, and they’ll tell you the Planck scale is the smallest scale, the scale where space and time are quantized, the “pixels of the universe”.
That last sentence, by the way, is not actually how the Planck scale works. In fact, there’s pretty good evidence that the universe doesn’t have “pixels”, that space and time are not quantized in that way. Even very tiny pixels would change the speed of light, making it different for different colors. Tiny effects like that add up, and astronomers would almost certainly have noticed an effect from even Planck-scale pixels. Unless your idea of “pixels” is fairly unusual, it’s already been ruled out.
If the Planck scale isn’t the scale of the “pixels of the universe”, why do people keep saying it is?
Part of the problem is that the real story is vaguer. We don’t know what happens at the Planck scale. It’s not just that we don’t know which theory of quantum gravity is right: we don’t even know what different quantum gravity proposals predict. People are trying to figure it out, and there are some more or less viable ideas, but ultimately all we know is that at the Planck scale our description of space-time should break down.
“Our description breaks down” is unfortunately not very catchy. Certainly, it’s less catchy than “pixels of the universe”. Part of the problem is that most people don’t know what “our description breaks down” actually means.
So if that’s the part that’s puzzling you, maybe an example would help. This won’t be the full answer, though it could be part of the story. What it will be is an example of what “our description breaks down” can actually mean, how there can be a scale beyond which space-time stops making sense without there being “pixels”.
The example comes from string theory, from a concept called “T duality”. In string theory, “extra” dimensions beyond our usual three space and one time are curled up small, so that traveling along them just gets you back where you started. Instead of particles, there are strings, with length close to the Planck length.
Picture a loop of string in a small extra dimension. What can it do?
One thing it can do is move along the extra dimension. Since it has to end up back where it started, it can’t just move at any speed it wants. It turns out that the smaller the extra dimension, the more energy the string has when it spins around it.
The other thing it can do is wrap around the extra dimension. If it wraps around, the string has more energy if the dimension is larger, like a rubber band stretched around a pipe.
The string can do either or both of these multiple times. It can wrap many times around the extra dimension, or move in a quicker circle around it, or both at once. And if you calculate the energy of these combinations, you notice something: a string wound around a big circle has the same energy as a string moving around a small circle. In particular, you get the same energy on a circle of radius $R$, and a circle of radius $l^2/R$, where $l$ is the length of the string.
It turns out it’s not just the energy that’s the same: for everything that happens on a circle of radius $R$, there’s a matching description with a circle of radius $l^2/R$, with wrapping and moving swapped. We say that the two descriptions are dual: two seemingly different pictures that turn out to be completely physically indistinguishable.
Since the two pictures are indistinguishable, it doesn’t actually make sense to talk about dimensions smaller than the length of the string. It’s not that they can’t exist, or that they’re smaller than the “pixels of the universe”: it’s just that any description you write down of such a small dimension could just as easily have been of a larger, dual dimension. It’s that your picture, of one obvious size of the curled up dimension, broke down and stopped making sense.
As I mentioned, this isn’t the whole picture of what happens at the Planck scale, even in string theory. It is an example of a broader idea that string theorists are investigating, that in order to understand space-time at the smallest scales you need to understand many different dual descriptions. And hopefully, it’s something you can hold in your mind, a specific example of what “our description breaks down” can actually mean in practice, without pixels.
# A Micrographia of Beastly Feynman Diagrams
Earlier this year, I had a paper about the weird multi-dimensional curves you get when you try to compute trickier and trickier Feynman diagrams. These curves were “Calabi-Yau”, a type of curve string theorists have studied as a way to curl up extra dimensions to preserve something called supersymmetry. At the time, string theorists asked me why Calabi-Yau curves showed up in these Feynman diagrams. Do they also have something to do with supersymmetry?
I still don’t know the general answer. I don’t know if all Feynman diagrams have Calabi-Yau curves hidden in them, or if only some do. But for a specific class of diagrams, I now know the reason. In this week’s paper, with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, we prove it.
We just needed to look at some more exotic beasts to figure it out.
Like this guy!
Meet the tardigrade. In biology, they’re incredibly tenacious microscopic animals, able to withstand the most extreme of temperatures and the radiation of outer space. In physics, we’re using their name for a class of Feynman diagrams.
A clear resemblance!
There is a long history of physicists using whimsical animal names for Feynman diagrams, from the penguin to the seagull (no relation). We chose to stick with microscopic organisms: in addition to the tardigrades, we have paramecia and amoebas, even a rogue coccolithophore.
The diagrams we look at have one thing in common, which is key to our proof: the number of lines on the inside of the diagram (“propagators”, which represent “virtual particles”) is related to the number of “loops” in the diagram, as well as the dimension. When these three numbers are related in the right way, it becomes relatively simple to show that any curves we find when computing the Feynman diagram have to be Calabi-Yau.
This includes the most well-known case of Calabi-Yaus showing up in Feynman diagrams, in so-called “banana” or “sunrise” graphs. It’s closely related to some of the cases examined by mathematicians, and our argument ended up pretty close to one made back in 2009 by the mathematician Francis Brown for a different class of diagrams. Oddly enough, neither argument works for the “traintrack” diagrams from our last paper. The tardigrades, paramecia, and amoebas are “more beastly” than those traintracks: their Calabi-Yau curves have more dimensions. In fact, we can show they have the most dimensions possible at each loop, provided all of our particles are massless. In some sense, tardigrades are “as beastly as you can get”.
We still don’t know whether all Feynman diagrams have Calabi-Yau curves, or just these. We’re not even sure how much it matters: it could be that the Calabi-Yau property is a red herring here, noticed because it’s interesting to string theorists but not so informative for us. We don’t understand Calabi-Yaus all that well yet ourselves, so we’ve been looking around at textbooks to try to figure out what people know. One of those textbooks was our inspiration for the “bestiary” in our title, an author whose whimsy we heartily approve of.
Like the classical bestiary, we hope that ours conveys a wholesome moral. There are much stranger beasts in the world of Feynman diagrams than anyone suspected.
# IGST 2018
Conference season in Copenhagen continues this week, with Integrability in Gauge and String Theory 2018. Integrability here refers to integrable theories, theories where physicists can calculate things exactly, without the perturbative approximations we typically use. Integrable theories come up in a wide variety of situations, but this conference was focused on the “high-energy” side of the field, on gauge theories (roughly, theories of fundamental forces like Yang-Mills) and string theory.
Integrability is one of the bigger sub-fields in my corner of physics, about the same size as amplitudes. It’s big enough that we can’t host the conference in the old Niels Bohr Institute auditorium.
Instead, they herded us into the old agriculture school
I don’t normally go to integrability conferences, but when the only cost is bus fare there’s not much to lose. Integrability is arguably amplitudes’s nearest neighbor. The two fields have a history of sharing ideas, and they have similar reputations in the wider community, seen as alternately deep and overly technical. Many of the talks still went over my head, but it was worth getting a chance to see how the neighbors are doing.
Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, where every question at a talk becomes a screaming match until you just stop going to the same conferences at all.
Now, imagine writing a paper with those people.
Adversarial collaborations, subject of a recent a contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.
This won’t work for the most vicious debates, when one (or both) sides isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.
One advantage of adversarial collaborations is in preventing accusations of bias. The debate between dark matter and MOND-like proposals is filled with these kinds of accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. Adversarial collaboration prevents these kinds of accusations: whatever comes out of an adversarial collaboration, both sides would make sure the other side didn’t bias it.
Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of moving the goalposts. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, I get the impression that many debates about interpretations of quantum mechanics are bogged down by one side claiming they’ve closed off a loophole with a new experiment, only for the other to claim it wasn’t the loophole they were actually using, something that could be avoided if both sides were involved in the experiment from the beginning.
It’s possible, even likely, that no-one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!
# Strings 2018
I’m at Strings this week, in tropical Okinawa. Opening the conference, organizer Hirosi Ooguri joked that they had carefully scheduled things for a sunny time of year, and since the rainy season had just ended “who says that string theorists don’t make predictions?”
There was then a rainstorm during lunch, falsifying string theory
This is the first time I’ve been to Strings. There are almost 500 people here, which might seem small for folks in other fields, but for me this is the biggest conference I’ve attended. The size is noticeable in the little things: this is the first conference I’ve been to with a diaper changing room, the first managed by a tour company, the first with a dedicated “Cultural Evening” featuring classical music from the region. With this in mind, the conference were impressively well-organized, but there were some substantial gaps (tightly packed tours before the Cultural Evening that didn’t leave time for dinner, and a talk by Morrison cut short by missing slides that offset the schedule of the whole last day).
On the well-organized side, Strings has a particular structure for its talks, with Review Talks and Plenary Talks. The Review Talks each summarize a subject: mostly main focuses of the conference, but with a few (Ashoke Sen on String Field Theory, David Simmons-Duffin on the Conformal Bootstrap) that only covered the content of a few talks.
I’m not going to make another pie chart this year, if you want that kind of breakdown Daniel Harlow gave one during the “Golden Jubilee” at the end. If I did something like that this time, I’d divide it up not by sub-fields, but by goals. Talks here focused on a few big questions: “Can we classify all quantum field theories?” “What are the general principles behind quantum gravity?” “Can we make some of the murky aspects of string theory clearer?” “How can string theory give rise to sensible physics in four dimensions?”
Of those questions, classifying quantum field theories made up the bulk of the conference. I’ve heard people dismiss this work on the ground that much of it only works in supersymmetric theories. With that in mind, it was remarkable just how much of the conference was non-supersymmetric. Supersymmetry still played a role, but the assumption seemed to be that it was more of a sub-topic than something universal (to the extent that one of the Review Talks, Clay Cordova’s “What’s new with Q?”, was “the supersymmetry review talk”). Both supersymmetric and non-supersymmetric theories are increasingly understood as being part of a “landscape”, linked by duality and thinking at different scales. These links are sometimes understood in terms of string theory, but often not. So far it’s not clear if there is a real organizing principle here, especially for the non-supersymmetric cases, and people seem to be kept busy enough just proving the links they observe.
Finding general principles behind quantum gravity motivated a decent range of the talks, from Andrew Strominger to Jorge Santos. The topics that got the most focus, and two of the Review Talks, were by what I’ve referred to as “entanglers”, people investigating the structure of space and time via quantum entanglement and entropy. My main takeaway from these talks was perhaps a bit frivolous: between Maldacena’s talk (about an extremely small wormhole made from Standard Model-compatible building blocks) and Hartman’s discussion of the Average Null Energy Condition, it looks like a “useful sci-fi wormhole” (specifically, one that gets you there faster than going the normal way) has been conclusively ruled out in quantum field theory.
Only a minority of talks discussed using string theory to describe the real world, though I get the impression this was still more focus than in past years. In particular, there were several talks trying to discover properties of Calabi-Yaus, the geometries used to curl up string theory’s extra dimensions. Watching these talks I had a similar worry to Strominger’s question after Irene Valenzuela’s talk: it’s not clear that these investigations aren’t just examining a small range of possibilities, one that might become irrelevant if new dualities or types of compactification are found. Ironically, this objection seems to apply least to Valenzuela’s talk itself: characterizing the “swampland” of theories that don’t make sense as part of a theory of quantum gravity may start with examples from string compactifications, but its practitioners are looking for more general principles about quantum gravity and seem to manage at least reasonable arguments that don’t depend on string theory being true.
There wasn’t much from the amplitudes field at this conference, with just Yu-tin Huang’s talk carrying that particular flag. Despite that, amplitudes methods came up in several talks, with Silviu Pufu praising an amplitudes textbook and David Simmons-Duffin bringing up amplitudes several times (more than he did in his talk last week at Amplitudes).
The end of the conference featured a panel discussion in honor of String Theory’s 50th Anniversary, its “Golden Jubilee”. The panel was evenly split between founders of string theory, heroes of the string duality revolution, and the current crop of young theorists. The panelists started by each giving a short presentation. Michael Green joked that it felt like a “geriatric gong show”, and indeed a few of the presentations were gong show-esque. Still, some of the speeches were inspiring. I was particularly impressed by Juan Maldacena, Eva Silverstein, and Daniel Harlow, who each laid out a compelling direction for string theory’s future. The questions afterwards were collated by David Gross from audience submissions, and were largely what you would expect, with quite a lot of questions about whether string theory can ever connect with experiment. I was more than a little disappointed by the discussion of whether string theory can give rise to de Sitter space, which was rather botched: Maldacena was appointed as the defender of de Sitter, but (contra Gross’s summary) the quantum complexity-based derivation he proposed didn’t sound much like the flux compactifications that have inspired so much controversy, so everyone involved ended up talking past each other.
Edit: See Shamit’s comment below, I apparently misunderstood what Maldacena was referring to.
# Calabi-Yaus for Higgs Phenomenology
less joking title:
# You Didn’t Think We’d Stop at Elliptics, Did You?
When calculating scattering amplitudes, I like to work with polylogarithms. They’re a very well-understood type of mathematical function, and thus pretty easy to work with.
Even for our favorite theory of N=4 super Yang-Mills, though, they’re not the whole story. You need other types of functions to represent amplitudes, elliptic polylogarithms that are only just beginning to be properly understood. We had our own modest contribution to that topic last year.
You can think of the difference between these functions in terms of more and more complicated curves. Polylogarithms just need circles or spheres, elliptic polylogarithms can be described with a torus.
A torus is far from the most complicated curve you can think of, though.
String theorists have done a lot of research into complicated curves, in particular ones with a property called Calabi-Yau. They were looking for ways to curl up six or seven extra dimensions, to get down to the four we experience. They wanted to find ways of curling that preserved some supersymmetry, in the hope that they could use it to predict new particles, and it turned out that Calabi-Yau was the condition they needed.
That hope, for the most part, didn’t pan out. There were too many Calabi-Yaus to check, and the LHC hasn’t seen any supersymmetric particles. Today, “string phenomenologists”, who try to use string theory to predict new particles, are a relatively small branch of the field.
This research did, however, have lasting impact: due to string theorists’ interest, there are huge databases of Calabi-Yau curves, and fruitful dialogues with mathematicians about classifying them.
This has proven quite convenient for us, as we happen to have some Calabi-Yaus to classify.
Our midnight train going anywhere…in the space of Calabi-Yaus
We call Feynman diagrams like the one above “traintrack integrals”. With two loops, it’s the elliptic integral we calculated last year. With three, though, you need a type of Calabi-Yau curve called a K3. With four loops, it looks like you start needing Calabi-Yau three-folds, the type of space used to compactify string theory to four dimensions.
“We” in this case is myself, Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, and Yang-Hui He, a Calabi-Yau expert we brought on to help us classify these things. Our new paper investigates these integrals, and the more and more complicated curves needed to compute them.
Calabi-Yaus had been seen in amplitudes before, in diagrams called “sunrise” or “banana” integrals. Our example shows that they should occur much more broadly. “Traintrack” integrals appear in our favorite N=4 super Yang-Mills theory, but they also appear in theories involving just scalar fields, like the Higgs boson. For enough loops and particles, we’re going to need more and more complicated functions, not just the polylogarithms and elliptic polylogarithms that people understand.
(And to be clear, no, nobody needs to do this calculation for Higgs bosons in practice. This diagram would calculate the result of two Higgs bosons colliding and producing ten or more Higgs bosons, all at energies so high you can ignore their mass, which is…not exactly relevant for current collider phenomenology. Still, the title proved too tempting to resist.)
Is there a way to understand traintrack integrals like we understand polylogarithms? What kinds of Calabi-Yaus do they pick out, in the vast space of these curves? We’d love to find out. For the moment, we just wanted to remind all the people excited about elliptic polylogarithms that there’s quite a bit more strangeness to find, even if we don’t leave the tracks.
|
2019-02-24 05:08:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5440008640289307, "perplexity": 761.3044669479242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249595829.93/warc/CC-MAIN-20190224044113-20190224070113-00207.warc.gz"}
|
https://plainmath.net/force-motion-and-energy/102586-what-does-98-n-kg-mean
|
skeletordtgp
2023-02-18
What does $9.8N/\mathrm{kg}$ mean?
Makenna Martinez
Force is equal to the rate of change of momentum in accordance with Newton's second law of motion. Force is defined as mass times acceleration for a constant mass.
$\mathrm{Force}=\mathrm{mass}×\mathrm{acceletration}\mathrm{acceleration}=\frac{\mathrm{Force}}{\mathrm{mass}}$
Force per mass means Newton per kg. $9.8N/\mathrm{kg}$ means $9.8N$ earth's gravitational field strength is required to pull $1\mathrm{kg}$ mass towards the earth's center.
Do you have a similar question?
|
2023-03-24 16:30:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001318216323853, "perplexity": 369.65710012847927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00103.warc.gz"}
|
https://www.calcufox.com/eng/5014.html
|
Initial Data
$$ax^{2}+bx+c=0$$
a=
b=
c=
$$x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$$
x1=
x2=
A quadratic equation is a polynomial equation of the second degree (known as early as 2000 BC), where x represents an unknown, and a, b, and c known numbers.
|
2022-07-05 01:19:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7304871678352356, "perplexity": 395.3347830133319}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104506762.79/warc/CC-MAIN-20220704232527-20220705022527-00004.warc.gz"}
|
https://yuqinghu.blog/2017/03/09/more-detailed-proof-of-blocking-lemma-gale-and-sotomayor-1985/
|
Lemma: Let $\mu$ be any individually rational matching with respect to strict preferences $P$ and let $M'$ be all men who prefer $\mu$ to $\mu_M$. If $M'$ is nonempty, there is a pair $(m, w)$ that blocks $\mu$ such that $m$ is in $M-M'$ and $w$ is in $\mu(M')$.
Proof.
Case 1: $\mu(M')\not=\mu_M(M')$, i.e. the set of men who prefer $\mu$ to $\mu_M$ are matched to different sets of women under the matching rules $\mu$ and $\mu_M$.
Pick any $w\in\mu(M')-\mu_M(M')$. Note that $\mu(M')\not\subset\mu_M(M')$ in this case because it’s a one-to-one matching.
Denote $m'=\mu(w)$, and $m=\mu_M(w)$, so $m'\in M'$, and $m\not\in M'$. This implies that $w\succ \mu_M(m)$ and $w\succ_m\mu(m)$.
Note that $m\succ_w m'$, otherwise $(m', m)$ will block $\mu_M$, contradicting to that $\mu_M is stable$.
Therefore, $(m, w)$ blocks $\mu$ such that $w\in\mu(M')$ and $m\in M-M’$.
Case 2:$\mu(M')=\mu_M(M')$, i.e. the set of men who prefer $\mu$ to $\mu_M$ are matched to the same set of women under the matching rules $\mu$ and $\mu_M$.
Let $w$ be the last woman in $W'$ to receive a proposal from an acceptable member of $M'$ in the deferred acceptance. Denote this man by $m'$. So there are two possibilities: 1) $w$ has not been proposed before, and $\mu_M(m')=w$; 2) $w$ is already engaged with some other man $m$, and she rejects $m$ and accepts $m'$.
1) is not possible: If it were true, then since $w\in W'$, there must be some $m'\in M$ such that $\mu(m')=w\succ_{m'}\mu_M(m)$. That means when we run the deferred-acceptance algorithm to implement $\mu_M$, $m'$ has already proposed to $w$ and got rejected, contradicting that $w$ has not been proposed.
2) i. $m\not\in M'$, otherwise $m$ will propose to another woman, contradicting that $w$ is the last woman in $W'$ to receive a proposal. Therefore $w\succ_m\mu_M(m)$. And since $m\not\in M'$, that means $\mu_M(m)\succ_m\mu(m)$. This implies $w\succ_m\mu(m)$.
ii. Since $w$ is the last woman to receive the proposal, it must be that before $w$ rejects $m$, she must have rejected $\mu(w)$, i.e. her mate under $\mu$. That means $m\succ_w\mu(m)$.
Combine i. and ii., we conclude that $(m, w)$ blocks $\mu$.
Q.E.D.
Alternative proof:
Case 1: The same as above.
Case 2$\mu(M')=\mu_M(M')$.
Define the new market $(M', W', P')$. $P'(m)$ is the same as $P(m)$ restricted to $W'\cup\{m\}$, and $P'(w)$ is the same as $P(w)$ restricted to $M'\cup\{w\}$, $\forall m\in M'$, $w\in W'$.
Note that $\forall m\in M'$, $w\in W'$, we must have $\mu(m)\succ_m\mu_M(m)$ and $\mu_M(m)\succ_w\mu(w)$, otherwise $\mu_M$ would be blocked. We can write this as:
$\mu_M\succ_{W'}\succ\mu$
and
$\mu\succ_{M'}\succ\mu_M$.
So that means under $P'$, $w'$ is now ranked just below $\mu(w)$, and $m$ is now ranked just below $\mu_M(m)$.
In other words, the only men in $M'$ who are unacceptable to $w$ are those $m$ such that $m\succeq_w \mu(w)$, so $\mu_M(w)$ is acceptable to $w$ under $P'$ for all $w\in W'$.
Note that $\mu_M$ restricted to $M'\cup W'$ is still stable for $(M', W, P')$, because any pair that blocks $\mu_M$ under $P'$ would also block it under $P$.
Let $\mu_M'$ be the $M'$-optimal matching for $(M', W', P')$, then by Pareto-optimality Theorem, it must be that
(*) $\mu_M'\succ_{M'}$.
Otherwise, if $\mu_M=\mu_M'$, then it would be contradicting $\mu\succ_{M'}\succ\mu_{M'}$.
Furthermore, $\mu_{M'}\succeq_{W'}\mu$ by the construction of $P'$.
Define $\mu'$ on $M\cup W$ by $\mu'=\mu_{M'}$ on $M'\cup W'$, and $latex\mu’=\mu_M$ on $(M'-M)\cup (W'-W)$.
Combine it with (*), we have $\mu'\succ_M\mu_M$. $\mu'$ is not stable for $(M,W, P)$, so let $\{m, w\}$ be a blocking pair.
i). If $\{m, w\}$ in $M'\cup W'$, $m$ and $w$ would be mutually acceptable under $P'$, by construction of $P'$, and so $\{m, w\}$ would block $\mu_M'$. Also note that
then
$w=\mu'(m)=\mu_{M'}(m)\succ_m\succ\mu_M(m)$
and
$\mu_{M'}(w)=\mu'(w)=m\succ_w\mu(w)$.
So $\{m, w\}$ does not block $\mu'$.
ii). If $m\in M'$, and $m\in W-W'$, then
$w=\mu'(m)=\mu'_M(m)\succ_m\succ\mu_M(m)$
and
$\mu(w)\succ_w\mu_M(w)=\mu'(w)=m$.
$\{m, w\}$ blocks $\mu_M$, but it does not block $\mu'$.
iii). If $m\in M-M'$, $w\in W-W'$, then
$w=\mu'(m)=\mu_M(m)\succ_m\mu(m)$
and
$\mu(w)\succ_w\mu_M(w)=\mu'(w)=m$.
So $\{m, w\}$ does not block $\mu'$.
iv). If $m\in M-M'$, $w\in W'$, then
$w=\mu'(m)=\mu_M(m)\succ_m\mu(m)$
and
$\mu_{M'}(w)=\mu'(w)=m\succ_w\mu(w)$.
So $\{m, w\}$ does blocks $\mu'$. It’s the desired blocking pair.
Q.E.D.
References:
Gale, D., & Sotomayor, M. (1985). Some Remarks on the Stable Matching. Discrete Applied Mathematics, 11, 223–232.
Roth, A. E., & Sotomayor, M. (1990). Two-Sided Matching: A Study in Game-Theoretic Modeling and Analysis. Cambridge University Press.
|
2021-01-19 04:49:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 154, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242510795593262, "perplexity": 267.7164518951245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00783.warc.gz"}
|
http://alberding.phys.sfu.ca/wordpress/?p=341
|
# Comments on 1st year lab equipment options.
If one wants to emulate scope and function generator (f.g.) with Labview then sensorDAQ is not adequate. One needs NI myDAQ.
In order to use Vernier’s plug-in sensors then the Vernier myDAQ attachment is also needed to plug in to it.
Limitations:
* 20 kHz max freq
* 0.5 W max total power output for all ports (e.g., 10V, 50 mA)
* only 1 digital and 2 analog inputs for Vernier sensor plugs.
* using the Vernier power amplifier ($247) will allow the computer to serve as a f.g. up to 15 kHz with ±10V, 1A using the computer’s sound card. http://www.vernier.com/products/sensors/pamp/ Don’t know how well the virtual system will work with both f.g. and oscilloscope emulation vi’s being used at the same time. Compared to stand-alone * oscilloscope: 50 MHz or greater bandwidth (http://www.tek.com/oscilloscope/tbs1000b-edu-digital-storage-oscilloscope) * f.g. max freq several MHz and about 5 to 10 W power output depending on model. It is possible to design most of our experiments to work in the audio range in most cases, but we do use the max power output of our current function generators. For example in the slinky induction lab we go up to 1 kHz but the pick-up signal is small and we put the f.g. on max output. (https://wiki.sfu.ca/departments/phys-studio/index.php/U26s3) If we use a stand-alone f.g. then we might consider the Labquest mini. The labquest mini has 3 analog and 2 digital ports. It can be accessed with Labview and voltage signals can be input through mini grappler plugs into any or all of the analog ports — no need for tiny screw drivers. ## Tracks: The Pastrack is a plastic multi-segment track It is composed of 50 cm segments that have to be put together. (http://www.pasco.com/prodCatalog/ME/ME-6960_pastrack/index.cfm) This design causes glitches in data when the carts pass over the junction. The FIC instructors don’t like them and have ordered one-piece Al replacements. The tables work with 1.2 m track lengths, 2m (as shown in Dave’s presentation) would be too long. The Vernier 1.2 m track is about the same price as Pasco’s ($150) but includes better options for brackets and mounting and include the feet. It allows for a bracket to mount the go-motion so that glitch-free data are usually collected compared to using the pasco tracks without the bracket. (http://www.vernier.com/products/accessories/track/) End stops are \$10 extra. (http://www.vernier.com/products/accessories/as-vds/)
The optics kit designed to fit on on Pasco’s dynamics track is not comparable to the Pasco introductory optics kits we now hove. (http://www.pasco.com/prodCatalog/OS/OS-8500_introductory-optics-system/index.cfm)
There are only light source, screen and two lenses.
The lenses are demountable from the holders with fussy 3-screw mounts that will give problems in a first-year lab environment.
http://www.pasco.com/prodCatalog/OS/OS-8471_dynamics-track-optics-kit/index.cfm#resourcesTab.
The main bulk of the box of the basic Pasco optics kit we now have is the foam rubber cutouts that allows one to quickly verify the many items that are included in the kit. The extra encumbrance of the optics bench only adds about 2 cm to the box width.
|
2017-07-21 18:34:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21822477877140045, "perplexity": 6133.752344359862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423808.34/warc/CC-MAIN-20170721182450-20170721202450-00656.warc.gz"}
|
http://acooke.org/cute/CInterface0.html
|
# C[omp]ute
Welcome to my blog, which was once a mailing list of the same name and is still generated by mail. Please reply via the "comment" links.
Always interested in offers/projects/new ideas. Eclectic experience in fields like: numerical computing; Python web; Java enterprise; functional languages; GPGPU; SQL databases; etc. Based in Santiago, Chile; telecommute worldwide. CV; email.
© 2006-2015 Andrew Cooke (site) / post authors (content).
## C Interfaces and Implementations (A Review)
From: andrew cooke <andrew@...>
Date: Sat, 19 May 2012 03:15:10 -0400
I've seen this book recommended for people who know C and want to improve.
And it is, I think, something of a classic, originally published in 1996.
Recently I've been working in C. While I enjoy some aspects of the language I
wasn't too happy with my project (now drawing to a close). Looking round for
a possible cure for the ills I felt in my code, I decided to read this book.
In retrospect, I think the problems with my code come from the usual
compromises that responsible, professional development involves (there is
certainly room for improvement, but the skills needed are likely "softer" than
those discussed here).
Before coming to that conclusion I bought and read this book. That it didn't
help me is my own fault (see above), but I am not convinced how much it will
help others, either.
It does teach an important technique: the use of abstract data types (ADTs) in
C. This is possible because you can name a struct separately from its
definition (as you can a function - this is what header files are for). With
care this can be used to define an interface that depends on a type whose
details are opaque to the user.
For example, consider a (partial) API for linked lists:
list.h:
typedef struct list_struct *LIST;
LIST append(LIST list, void *data);
int length(LIST list);
...
list.c:
#include "list.h"
struct {
LIST tail;
} list_struct;
LIST append(LIST list, void *data) {
LIST next = calloc(1, sizeof(*next));
next->tail = list;
return next;
}
...
You can see that someone using the API, who would include only list.h in their
code, would know nothing of the internal structure of lists. This means that
the implementation can be changed without harming the client code (assuming
that the contracts between client and library are clear).
(Forgive me if I have an error above - I haven't compiled it - but I hope it's
clear enough to give the general idea).
If you look at the code above, you might wonder what could change. What makes
a good API? One interesting technical question is: why is LIST defined as a
pointer? Why the above, rather than:
list.h:
typedef struct list_struct LIST;
LIST *append(LIST *list, void *data);
int length(LIST *list);
...
Maybe that seems like a trivial detail, but it raises an interesting issue:
the initial approach makes the use of "const" invalid.
In other words:
typedef struct list_struct *LIST;
LIST append(const LIST list, void *data);
is silly (it's just guaranteeing the constness of the pointer, not the struct
itself). In contrast
typedef struct list_struct LIST;
LIST append(const LIST *list, void *data);
is saying something about the list itself. So isn't that better? I must
admit that I generally prefer the latter approach for a much more practical
reason - I get less confused about levels of indirection. But Hanson (the
author of CI&I) argues that it places constraints on the implementation. For
example, a hashtable might reasonably want to resize during an insert, or some
optimisation might imply lazy initialisation.
It's a valid point, and something I am glad I read. Unfortunately that was
the best part, just 29 pages in.
Now at this point I should say that I may be to blame. I may just be too lazy
- the book might repay a more detailed reading than I managed. Because, while
I read with the best of intentions, I found that there were times when I had
no real recollection of the previous few pages.
The problem, I feel, is that this book is written as "literate code". That
means that every line of the library appears in the book. Which has to mean
that many sections are, frankly, mundane detail.
Worse, the literate approach gives the entire book a very flat structure.
Each chapter is arranged with a description of the interface before the
implementation. Which might lead you to hope that the first section of each
chapter will give a high-level overview of the design. But it doesn't really
work like that - the initial sections tend to be vague and incomplete.
Largely, I suspect, so that the later sections aren't quite so boring.
And the library developed - while useful - doesn't dogfood itself. In other
words, most parts of the library are implemented in isolation. There are a
few exceptions where there's an obvious layered approach (for example, text
processing built on low level strings, or an arbitrary precision calculator
built on a library for extended precision positive integers), but that's
pretty much all: the hash table (chapter 8), for example, isn't used by the
"atoms" library (chapter 3), which instead implements a hash table all by
itself.
Perhaps there a reason for this duplication, but that kind of explanation was
what I missed most - there's very little (apart from the reasoning on ADTs
above) to justify the choices made.
An example of this lack of explanation is the choice of indexing. Various
APIs allow indexing from either end of a sequence (eg to access characters
counting left from the end of a string). The common convention is to use
indexes to the right starting from 1, and indexes to the left starting from 0,
counting down (if you just skimmed that sentence, go back and think about it.
Yeah. Weird).
As far as I can tell, the motivation for this is (1) that is how Icon does
(did?) it and (2) this schema means indices in the two directions are unique,
letting you specify a pair of indices (for a range) in either order.
Obviously it's hard to judge historical decisions when clouded by current
conventions (Python stays consistent with C's zero indexing, supports
backwards indexing from -1, and expects pairs to be ordered left-right), but
as far as I remember, even back in 96, Icon was not *that* big a deal. And
breaking C's conventions in a C library seems, well, at the very least
something you should justify in detail.
While I remember - one other symptom of age is that the Threads library, which
uses assembler, doesn't support x86.
Another example of where I would have appreciated more analysis was in the
choice of representation for signed, arbitrary precision integers. The
implementation here uses a separate flag for sign. I am unsure why that was
chosen rather than a two's complement approach, or why the flag doesn't also
indicate zero (I'm not saying that these would be better; I just expected this
book to be the kind of book that explains such things).
So there could be more "high-level" explanations. There could also be less
"low-level" detail. There is a chapter for lists and another for
double-linked lists (called rings). Both might be useful in a library, but in
a book they are 90% duplication. And the difference between vectors (called
sequences) and dynamic arrays is even smaller (in passing, note that although
one advantage of ADTs is that you can swap implementations, these pairs cannot
be swapped, even though I suspect that in both cases one is a subset of the
other; again, I am not saying that would be good, just that it would be nice
to have heard why that choice was made).
Oh, and there are exercises. Some of which address the design issues. None
of which have answers. So I guess if Hanson was your lecturer you got a
better deal.
In conclusion, this book has a good idea: ADTs. It hammers that idea home in
the detail you would expect when every line of code is described. But it
otherwise lacks the discussion of general principles you would expect for an
"advanced" text, it's dated in parts, and I am not sure that all of the
contents made sense even back when it was written.
Andrew
http://www.amazon.com/dp/0201498413
|
2015-05-04 11:01:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3353135585784912, "perplexity": 2987.769338659389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430454040787.28/warc/CC-MAIN-20150501042040-00031-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://www.doitpoms.ac.uk/tlplib/deformation/printall.php
|
Dissemination of IT for the Promotion of Materials Science (DoITPoMS)
DoITPoMS Teaching & Learning Packages Deformation of Honeycombs and Foams Deformation of Honeycombs and Foams (all content)
# Deformation of Honeycombs and Foams (all content)
Note: DoITPoMS Teaching and Learning Packages are intended to be used interactively at a computer! This print-friendly version of the TLP is provided for convenience, but does not display all the content of the TLP. For example, any video clips and answers to questions are missing. The formatting (page breaks, etc) of the printed version is unpredictable and highly dependent on your browser.
## Aims
On completion of this TLP you should:
• Understand what processes determine the elastic behaviour of a honeycomb structure;
• Know what processes cause the onset of yielding;
• Understand how these ideas for simple structures might be extended to more complex ones, such as foams.
## Before you start
There are no special prerequisites for this TLP.
## Introduction
There is an important class of materials, many biological, that are highly porous and made by bonding rods, ribbons or fibres in both regular and irregular structures. These include paper, bone, wood, packaging foam and insulating fibre mats and they can be made of polymers, metals, ceramics and natural materials. Despite the different structures and materials, there are many similarities in how they behave. An important class of these materials is where the rods or ribbons form cells, co-called cellular structures. Here we explain how such structures deform in compression. To understand the deformation processes more easily the behaviour of a regular honeycomb structure is described before extending the ideas to structures such as foams.
## Compression of a honeycomb: Experimental
The honeycomb studied here is an array of regular hexagonal cells, with the cell walls made of thin strips of aluminium. The structure is not quite as simple as it first appears because of the way in which it is made (see details). This causes one in every three cell walls to consist of two layers of aluminium bonded with adhesive, making some walls stiffer than others. In the honeycomb used here the thickness of an individual sheet, t, was 0.09 mm, and the length of each of the cell faces, l, was 6.30 mm. The relative density, the measured density as a fraction of the density of the solid material, is 0.008.
There are two different directions in which a hexagonal honeycomb can be compressed in the plane of the honeycomb.
Cell walls lie parallel to the loading axis Cell walls lie diagonal to the loading axis
Although quantitatively different, the basic deformation processes are similar in both cases so only the situation where some cell walls lie parallel to the loading axis is described here.
Squares of honeycomb with 6 cells along each side were cut from a sheet of material. The samples were then compressed between flat, parallel platens at a constant displacement rate of 1 mm min-1, giving the stress-strain curve below.
The stress-strain curve has 3 distinct regions:
• an initial elastic region;
• followed by the onset of irreversible leading to region where the stress does not change with increasing strain, known as the plateau region;
• and lastly a region where the stress again begins to rise rapidly with increasing strain, known as densification.
#### Elastic region
The elastic region is characterised by the effective Young modulus of the material, that is the Young modulus of a uniform material that for the same imposed stresses gives rise to the same strains. This is found by taking the slope of the unloading curve, which helps to reduce the effects of any local plastic flow.
The measured Young modulus was 435 kPa.
#### Plateau region
With continued loading the stresses in the faces increase. Eventually these reach the flow stress of the material and irreversible yielding begins. The stress at which this occurred was approximately 15 kPa. The structure then began to collapse at an approximately constant stress, σP.
#### Densification
This continued up to a strain of ~ 0.7, at which point the stress started to rise much more rapidly as the faces from opposite sides of the cells were pressed up against one another.
This three-stage deformation behaviour is typical of virtually all highly porous materials, even ones made of very brittle materials. The next step is to try and quantitatively describe the observed behaviour.
## Elastic behaviour (I)
To start let us assume that the predominant contribution to the elastic strain comes from the axial compression of the vertical struts, as shown below.
We can estimate the magnitude of this strain as the cross-sectional area of solid material, AV, in a cut across just the vertical faces is less than that if the material were completely solid by the ratio of the cell wall thickness, t, to the horizontal distance across each cell, 2 l cos θ.
As t << l cos θ, this is given by $${A_{\rm{V}}} = t/2l\;\cos \theta$$
Using the measured values of t ( = 0.09 mm) and l ( = 6.30 mm), and taking θ = 30° as the cells are hexagonal gives AV as 0.008. Taking the Young modulus of aluminium as 70 GPa, this predicts the Young modulus of the honeycomb to be 560 MPa. This is greater than the observed value of 435 kPa by more than 3 orders of magnitude and shows that axial compression of the vertical faces makes a negligible contribution to the elastic strain.
## Elastic behaviour (II)
If the axially loaded faces do not deform, then clearly it must be those at an angle to the loading axis that do, the diagonal faces. And because they are at an angle to the loading axis they will bend.
The bending of each face beams must be symmetrical about the mid-point of the face and can be estimated using beam bending theory. To do this each face is described as two beams, cantilevered at the vertices of the hexagonal cell and loaded at the centre point. Note that one beam (i.e. a half cell wall) is pushed upward, the other downward.
This [derivation] gives the Young modulus of the honeycomb as
$E = \frac{4}{{\sqrt 3 }}{E_{\rm{S}}}{\rm{ }}{\left( {\frac{t}{l}} \right)^3}$
Using the measured values of t( = 0.09 mm) and l ( = 6.30 mm) and taking ES as 70 GPa, the elastic modulus is predicted to be 471 kPa. This is within 10% of the measured value for the sample.
The predominant contribution to the Young modulus is therefore from the bending of the diagonal faces.
## Yielding and plateau behaviour
The aluminium honeycomb will start to plastically deform if the stress in the faces anywhere exceeds the flow stress, σY, of the aluminium cell wall. We have already shown that the predominant contribution to the elastic strain is the bending of the diagonal faces. (see here). Furthermore we could estimate this if each face were considered to be made up of two beams, each of length l/2, cantilevered at the end connected to the vertical cell wall and acted upon by a force of magnitude F cos θ, where F is the force applied at the ends of the sample and θ is the angle between the diagonal face and the horizontal. It is clear then that the stress will be a maximum where the moment is greatest, that is at the vertices of the hexagonal cells.
It can be shown that the applied stress, σ, when the maximum stress in each face reaches the flow stress, σY, of the material making up the cell walls is given by (derivation)
$\sigma = \frac{4}{9} \cdot {\left( {\frac{t}{l}} \right)^2}{\sigma _{\rm{Y}}}$
Using the measured values of t ( = 0.09 mm), l ( = 6.30 mm) and σY ( = 100 MPa), predicts the yield strength of the honeycomb, σ, to be 9 kPa, somewhat lower than the measured value of 15 kPa. The stress we have estimated is the stress at which plastic flow will start in the outer surfaces at the cantilevered point. To enable plastic flow to spread through the thickness of the cell face requires that the stress is increased further by a factor of 1.5, giving a macroscopic flow stress of 13.5 kPa, much closer to the measured value.
Once the material has started to yield the cell walls begin to collapse. This occurs at an approximately constant stress until the cell walls impinge on one another when the stress begins to rise more rapidly with increasing strain.
## Densification
As the honeycomb yields in the plateau region, the regular hexagonal cell with a height (l + 2l sin θ) changes shape with the protruding apices being pressed toward one another to give cells with the shape of a bow-tie and a height l.
If the cells deform uniformly then the strain at which this occurs, εD, is given by
${\varepsilon _{\rm{D}}} = \ln \left( {\frac{l}{{l + 2l\sin \theta }}} \right) = \ln \left( {\frac{1}{{1 + 2\sin \theta }}} \right)$
Note true strain is used because the strains are large and compressive. As θ = 30°, εD, is predicted to have a magnitude of 0.7. Further increases in strain cause opposing cell walls to be pressed against one another and the stress required for further deformation increases rapidly. As can be seen in the stress strain curve below this prediction gives good agreement with the observed stress-strain curve.
It can be seen that this collapse does not occur uniformly throughout the whole structure, but layer by layer of cells. This behaviour is rather dependent on the size of the cell compared to that of the sample. Increasing the number of cells in a cross-section causes the behaviour to become more uniform as might be expected.
It is now possible to quantitatively understand the entire stress-strain behaviour of a simple honeycomb. The next step is to extend these ideas to less regular structures, such as foams and fibrous structures.
## Other porous structures
Many other porous structures show the same type of stress-strain behaviour. The basic reasons are similar. The initial behaviour is elastic, until the stresses in the struts reach their flow or fracture stress. There is then a plateau region as the cells collapse, until the struts from opposite sides of the cells impinge on one another and the applied stress increases more rapidly. However the details can be very different. For instance the struts in ceramic foams tend to break, but a plateau region is still seen.
Many combinations of material and cell structure are possible. Seeing how the cells deform in a foam is more difficult than in the simple honeycomb. However this has been done using X-ray tomography as shown in the short video.
Deformation of cells in foam
Deformation of cells in foam
(For further details see J.A. Elliott et al, “In-situ deformation of an open-cell flexible polyurethane foam characterised by 3D computed microtomography”, J. Mater. Sci. 37 (2002) 1547-1555.)
Looking at the large cell on the right-hand side, it is clear that the deformation of the foam is similar to the honeycomb and the strain comes predominantly from the bending of the struts transverse to the loading axis.
For simplicity, consider the open-cell foam as having a cubic unit cell as shown below.
Note that each transverse strut has a vertical strut half-way along it, so that axial loading causes the struts transverse to the loading axis to bend, as shown in the diagram above. The Young modulus can now be estimated in a similar way to that for the honeycomb, except that the struts are assumed to have a square cross-section, rather than being rectangular as before and θ, the angle between the transverse strut and the horizontal is 0.
This gives an expression for the relative Young modulus, E/ES, as
$\frac{E}{{{E_{\rm{S}}}}} = k{\rm{ }}{\left( {\frac{\rho }{{{\rho _{\rm{S}}}}}} \right)^2}$
where E is the Young modulus of the porous material and ES that of the solid material and k is a numerical constant, experimentally found to approximately equal to 1 (derivation).
As can be seen in the graph above, experiments show this is correct for isotropic, open-cell foams and even appears to be obeyed where the struts are not slender beams and also, at least approximately, where the cells are closed rather than open. This is thought to arise because in most closed-cell foams most of the material is still along the edges of the cells, rather than being uniformly distributed across the faces. (The data is taken from various sources cited in L.J. Gibson and M.F. Ashby, "On the mechanics of three-dimensional cellular materials", Proc. Roy. Soc. A, 382[1782] (1982) 43-59.)
## Porous structures in bending
Porous structures are often used as a lightweight core separating two strong, stiff outer layers to form a sandwich panel. Like the I-beam, such structures have a greater resistance to bending per unit weight of material than a solid beam and so are useful where weight-saving and stiffness are important. Typical applications include flooring panels in aircraft or rotors in helicopter blades. Sandwich structures are also common in biological structures, such as leaves or spongy bone.
From J. Banhart, Manufacture, Characterisation and Application of Cellular Metals and Metal Foams, Progress Mater. Sci., 2001, 46, pp.559-632. From Cell Biology by Thomas D. Pollard and William C. Earnshaw, Saunders 2004, pp.540 (Figure 34-4), courtesy of D.W. Fawcett, Harvard Medical School.
However some porous solids, such as wood are used without the stiff, strong outer layers. We might ask whether it is generally true, or under what conditions a porous rod will be stiffer in bending (i.e. give a smaller deflection for a given applied force) than a solid rod of the same length and overall mass.
Consider two rods one is porous and the other is solid. As each rod has the same length and mass and a circular cross-section, the porous one must have a larger radius.
The deflection, δ, of a cantilevered beam of length L under an imposed force W is given by
$\delta = \frac{1}{3}\frac{{W{L^3}}}{{EI}}$
For given values of W and L an increase in beam stiffness requires a higher value of the product EI. For a beam of circular cross-section <I = πr4/4. The porous beam has a larger radius and therefore a larger moment of area than the solid beam. However the porous beam also has a lower Young modulus. For the porous beam to be stiffer in bending, the rate at which the moment of area increases with radius must therefore be greater than the rate at which the Young modulus decreases.
If the density of the porous beam is ρ and solid beam is ρS and both have the same length and mass, then the ratio of the radius of the porous beam, r, to that of the solid beam, rS, is
$\left( {\frac{{{\rho _{\rm{S}}}}}{\rho }} \right) = {\left( {\frac{r}{{{r_{\rm{S}}}}}} \right)^2}$
As I ∝ r4 the ratio of the second moments of area of the porous and solid beams, I and IS respectively, is
$\frac{I}{{{I_{\rm{S}}}}} = {\left( {\frac{{{\rho _{\rm{S}}}}}{\rho }} \right)^2}$
In other words I/IS increases as the inverse square of the relative density, ρ/ρS.
Now the expression derived above for the elastic modulus of an open-cell porous body was
$\frac{E}{{{E_{\rm{S}}}}} = k{\rm{ }}{\left( {\frac{\rho }{{{\rho _{\rm{S}}}}}} \right)^2}$
That is E/ES decreases as the square of the relative density. In other words although I is increasing with decreasing density, E is decreasing at the same rate. In this case there would be no advantage in using such a material in bending compared with the solid material.
As nothing can be done about the change in radius, and hence I, with relative density, a higher bending stiffness can only be obtained by ensuring that E/ES varies with ρ/ρS by a power less than 2. This is the case for the axial Young modulus of wood where the exponent lies closer to 1 rather than 2, as shown below.
(The data is taken from K.E. Easterling et al, “On the mechanics of balsa and other woods”, Proc. Roy. Soc. A, 383[1784] (1982) 31-41.) Such changes can be brought about by varying the cell structure, for instance by elongating the cells, as occurs in wood. However in the transverse (radial and tangential) directions E/ES for wood decreases much more rapidly with decreasing ρ/ρS. Here the exponent lies between 2 and 3.
## Summary
In this TLP, the elastic, yielding and densification behaviour of a simple honeycomb structure has been studied experimentally. It is shown that the deformation of a honeycomb structure is made up of 3 main regions: an elastic region, which ends when the maximum stress in the cell faces becomes equal to the flow stress of the material, allowing the cells compact at a constant stress, followed by a region in which the load rises rapidly with increasing strain, as the honeycomb is compacted.
Quantitative descriptions of the behaviour have been derived and compared with the experimental measurements. These show that the deformation behaviour of a honeycomb is determined not by the axial compression of those faces parallel to the loading axis, but by the bending of faces lying at some angle to the loading axis.
It has been shown that these ideas can be extended to describe the deformation behaviour of more irregular structures, such as foams. For foams that are isotropic and have open cells, it is predicted that the relative elastic modulus varies with the square of the relative density, consistent with observations in the literature.
The uses of such structures are described and it is shown that in isotropic open-cell foams, sandwich structures are required to obtain improved specific stiffness in bending. The enhanced stiffness of cellular structures such as wood arises from modifications to the structure that gives a different dependence of the relative stiffness on the elastic modulus.
## Questions
### Quick questions
You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!
1. How does elastic deformation of a honeycomb with hexagonal cells with some faces aligned parallel to the loading axis take place?
a By elastic compression of the vertical faces b By elastic buckling of the vertical faces c By elastic bending of the diagonal faces
### Deeper questions
The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.
1. How might elongating the cells of a hexagonal honeycomb in the direction of loading change E/ES for a given relative density?
a Decrease it b Have no effect c Increase it
### Quick questions
You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!
1. What is the criterion for the onset of yielding in a honeycomb?
a That somewhere the stress should exceed the flow stress through the thickness of the film b That the maximum stress in the face exceeds the material flow stress c That the stress at the centre of the face should exceed the material flow stress
### Deeper questions
The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.
1. For a highly porous structure E/ES is proportional to (ρ/ρS)n. In which case will the structure show an increased bending stiffness? Hence explain why many porous materials are often used as the core in sandwich structures.
a n > 2 b n = 2 c n < 2
2. Consider a honeycomb loaded with some faces parallel to the compressive loading direction, in which the shape of the hexagonal cell is such that θ < 0, how would the material deform elastically in the transverse direction?
a contract inwards b no tranverse movement c expands outwards
## Going further
### Books
• L.J. Gibson and M.F. Ashby, Cellular solids: structure and properties, Cambridge University Press, 2nd edition (1997).
Covers honeycombs and foams, both open and close celled, as well as the effects of gases and liquids in the cells. It also discusses the properties of bone, wood and the iris leaf as highly porous solids.
• K.K. Chawla, Fibrous materials, Cambridge University Press, 2nd edition (1998).
Covers fibrous and some woven structures.
• D. Boal, Mechanics of the Cell, Cambridge University Press, 2002.
See chapter 3 on two-dimensional networks.
## How the honeycomb is made
The honeycomb is made by printing a pattern of parallel, thin stripes of adhesive onto thin sheets of aluminium. These sheets are then stacked in a heated press to cure the adhesive and slices cut through the thickness of the sheet. The slices, or block form, are then gently stretched and expanded to form a sheet of continuous hexagonal cell shapes.
## Derivation of Young modulus
Here we estimate the Young modulus when the load is applied to the honeycomb made of hexagonal cells with some of the faces parallel to the loading axis where the displacement results from bending of diagonal faces The thickness of the cell wall is t and its through-thickness width is w and the length of each of the hexagonal faces is l.
The displacements in the upper and lower halves of the diagonal faces must be symmetrical about the centre, with the lower half being bent downward and the upper half in the opposite direction. Each face can therefore be treated as if it were made up of two cantilevered beams. Each is length l/2, cantilevered at the end fixed to the vertical face and loaded at the other end.
From beam bending theory the displacement of the loaded end, δ, of a cantilevered beam is given by
$\delta = \frac{1}{3}\frac{{W{L^3}}}{{EI}}$
where W is the applied load, L is the length of the beam, E is the Young modulus and I is the second moment of area (definition).
Here the bending beam lies at an angle (90-θ)° to the axis of loading. The component of the applied force in the direction normal to the beam is therefore F cos θ. As the length of each beam is l/2, then the displacement from its original position (in the direction normal to the diagonal beam) is
$\delta = \frac{1}{{24}}\frac{{F{l^3}}}{{{E_{\rm{S}}}I}} \cdot \cos \theta$
ES is the modulus of the cell wall material from which the honeycomb is made and the second moment of area of the cell wall, I, is wt3/12.
Just as with the force, the direction of the displacement is also not in the direction of loading. The vertical component of the displacement (that is in the loading direction) due to the bending of the complete cell face, Δx, and taking the displacements produced by the two half-beams is
Δx = 2 δ cos θ
Substituting for δ and I gives Δx as
$\Delta x = \frac{F}{{{E_{\rm{S}}}w}} \cdot {\left( {\frac{l}{t}} \right)^3}{\cos ^2}\theta$
This can be expressed as a strain by dividing this downward displacement by the original vertical height of the honeycomb structure, that is two vertical half-faces and the vertical component of the diagonal face, giving the strain, ε, under an imposed load F as
$\varepsilon = \frac{F}{{{E_{\rm{S}}}w}} \cdot {\left( {\frac{l}{t}} \right)^3}\frac{{{{\cos }^2}\theta }}{{l{\rm{ }}(1 + \sin \theta )}}$
Now the applied force F acts over an area w l cosθ, so that the stress, σ, can be expressed as $\sigma = \frac{F}{{w{\rm{ }}l\cos \theta }}$
The Young modulus of the honeycomb in this direction, E (= σ/ε), is therefore $E = {E_{\rm{S}}}{\rm{ }}{\left( {\frac{t}{l}} \right)^3}\frac{{(1 + \sin \theta )}}{{{{\cos }^3}\theta }}$
As the cells are regular hexagons θ = 30° and this becomes $E = \frac{4}{{\sqrt 3 }}{E_{\rm{S}}}{\rm{ }}{\left( {\frac{t}{l}} \right)^3}$
## Derivation of yield stress in the honeycomb
Here we estimate the stress at which the honeycomb starts to deform irreversibly where some of the faces are aligned parallel to the loading axis and the predominant contribution to the strain comes from the bending of the diagonal faces. (see here) The thickness of the cell wall is t, its through-thickness width is w and the length of each of the hexagonal faces is l.
From beam bending theory, the maximum stress, σmax, in a cantilevered beam of length L due to a force, W, applied normal to the beam is
${\sigma _{{\rm{max}}}} = \frac{{6{\rm{ }}W{\rm{ }}L}}{{w{\rm{ }}{t^2}}}$
Now in our situation W = F cos θ and L = l/2, which gives smax as
${\sigma _{{\rm{max}}}} = \frac{{3{\rm{ }}F{\rm{ }}l}}{{w{\rm{ }}{t^2}}} \cdot \cos \theta$
We assume that yielding will start when σmax = σY, giving the applied force, F, at which yielding starts as
$F = \frac{{w{\rm{ }}{t^2}}}{{3{\rm{ }}l{\rm{ }}\cos \theta }} \cdot {\sigma _{\rm{Y}}}$
Now the applied force F acts over an area wl cos θ, so that the stress, σ, can be expressed as
$\sigma = \frac{F}{{w{\rm{ }}l\cos \theta }}$
Substituting for F gives the applied stress at which the honeycomb starts to yield in terms of the yield stress of the cell wall material, σY
$\sigma = \frac{1}{3} \cdot {\left( {\frac{t}{l}} \right)^2} \cdot \frac{1}{{{{\cos }^2}\theta }} \cdot {\sigma _{\rm{Y}}}$
For a regular hexagon, where θ = 30°, this becomes
$\sigma = \frac{4}{9} \cdot {\left( {\frac{t}{l}} \right)^2} \cdot {\sigma _{\rm{Y}}}$
At this value of σ, yielding begins at the surfaces of the cell wall, at the cantilevered end of the beam where the stresses will be greatest. For plastic yielding to spread through the thickness of the cell wall requires that the stress is increased by a factor of 1.5, giving a macroscopic flow stress of
$\sigma = \frac{2}{3} \cdot {\left( {\frac{t}{l}} \right)^2} \cdot {\sigma _{\rm{Y}}}$
## Derivation of Young modulus in highly porous foam
Here we estimate the Young modulus of an open-cell foam, represented schematically as shown below, where each cubic cell has a side length l and is made of struts with a square cross-section of thickness t, whose Young modulus is ES. The derivation is similar to that used for the honeycomb. However because the cell is cubic rather than hexagonal, θ = 0, giving the deflection of a single half-beam, δ, as
$\delta = \frac{1}{{24}}\frac{{F{l^3}}}{{{E_{\rm{S}}}I}}$
where I is the second moment of area, where I = t4/12. This gives the deflection in the loading direction, Δx, as
$\Delta x = 2\delta = \frac{F}{{{E_{\rm{S}}}t}}{\rm{ }}{\left( {\frac{l}{t}} \right)^3}$
This gives the strain, ε, as
$\varepsilon = \frac{F}{{{E_{\rm{S}}}}}{\rm{ }} \cdot \frac{{{l^2}}}{{{t^4}}}$
The stress, σ, arising from the applied force F is
$\sigma = \frac{F}{{{l^2}}}$
This gives an expression for the Young modulus of the open-cell foam, E
$E = {E_{\rm{S}}}{\rm{ }}{\left( {\frac{t}{l}} \right)^4}$
Now
$\frac{\rho }{{{\rho _{\rm{S}}}}} \propto {\left( {\frac{t}{l}} \right)^2}$
The relative elastic modulus of the open-cell foam, E/ES, is therefore related to the relative density, ρ/ρS, according to
$\frac{E}{{{E_{\rm{S}}}}} = k{\rm{ }}{\left( {\frac{\rho }{{{\rho _{\rm{S}}}}}} \right)^2}$
where k is a constant approximately equal to 1.
Academic consultant: Bill Clegg and Athina Markaki (University of Cambridge)
Content development: Duncan McNicholl and David Brook
Photography and video: Brian Barber and Carol Best
Web development: David Brook and Lianne Sallows
This DoITPoMS TLP was funded by the UK Centre for Materials Education, the Worshipful Company of Armourers and Brasiers', and the Department of Materials Science and Metallurgy, University of Cambridge.
|
2022-06-29 19:23:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6663365960121155, "perplexity": 823.3729484206638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00230.warc.gz"}
|
https://gmatclub.com/forum/m05-183695.html
|
It is currently 22 Nov 2017, 02:57
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
M05-35
new topic post reply Question banks Downloads My Bookmarks Reviews Important topics
Author Message
TAGS:
Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 42302
Kudos [?]: 133014 [0], given: 12402
Show Tags
16 Sep 2014, 00:26
Expert's post
7
This post was
BOOKMARKED
00:00
Difficulty:
95% (hard)
Question Stats:
46% (02:36) correct 54% (02:31) wrong based on 141 sessions
HideShow timer Statistics
Three copy machines are making copies of the same document. Copier A makes 12 copies per minute, Copier B makes 7, and Copier C makes 19. It costs 8 cents/copy for Copier A, 5 cents/copy for Copier B, and 11 cents/copy for Copier C. If a separate attendant has to be hired for each copier and be paid $30 per hour (you have to pay$60 if the attendant works for one hour and one minute), which copier alone will be the most efficient choice to make 1,200 copies?
A. Copier A
B. Copier B
C. Copier C
D. A and B
E. A and C
[Reveal] Spoiler: OA
_________________
Kudos [?]: 133014 [0], given: 12402
Math Expert
Joined: 02 Sep 2009
Posts: 42302
Kudos [?]: 133014 [1], given: 12402
Show Tags
16 Sep 2014, 00:26
1
KUDOS
Expert's post
Official Solution:
Three copy machines are making copies of the same document. Copier A makes 12 copies per minute, Copier B makes 7, and Copier C makes 19. It costs 8 cents/copy for Copier A, 5 cents/copy for Copier B, and 11 cents/copy for Copier C. If a separate attendant has to be hired for each copier and be paid $30 per hour (you have to pay$60 if the attendant works for one hour and one minute), which copier alone will be the most efficient choice to make 1,200 copies?
A. Copier A
B. Copier B
C. Copier C
D. A and B
E. A and C
To find the total cost of copying 1,200 pages, we need to know: 1. How long it takes for an attendant to copy 1,200 pages on each copier. 2. How much it costs per job on each of the machines.
Cost of Time:
A $$\frac{1200}{12} = 100$$ mins (exact) costs: $60 B $$\frac{1200}{7} = 170$$ mins (approx) costs:$90
C $$\frac{1200}{19} = 63$$ mins (approx) costs: $60 Price per Job: A $$1200*0.08=96$$ B $$1200*0.05=60$$ C $$1200*0.11=132$$ Total cost including attendant's pay: A $$96+60 = 156$$ B $$60+90 = 150$$ C $$132+60 = 192$$ The best choice is Copier B. Answer: B _________________ Kudos [?]: 133014 [1], given: 12402 Intern Joined: 11 Jan 2013 Posts: 5 Kudos [?]: 1 [0], given: 6 Re: M05-35 [#permalink] Show Tags 16 Jan 2015, 12:13 Hi Bunuel, In this case...we figured out the costs associated with the individual printers, and that itself took some time. What about options D and E? are there some generalizations regarding the rate of 2 workers together such as: unless the rate of 2 workers together working together is more than double the rate of any of the individuals....they are better off working alone? would this be a fair generalization? Thanks, Kudos [?]: 1 [0], given: 6 Manager Joined: 12 Sep 2010 Posts: 241 Kudos [?]: 30 [1], given: 27 Concentration: Healthcare, General Management Re: M05-35 [#permalink] Show Tags 21 Aug 2015, 13:01 1 This post received KUDOS amishra1 wrote: Hi Bunuel, In this case...we figured out the costs associated with the individual printers, and that itself took some time. What about options D and E? are there some generalizations regarding the rate of 2 workers together such as: unless the rate of 2 workers together working together is more than double the rate of any of the individuals....they are better off working alone? would this be a fair generalization? Thanks, You can eliminate D and E immediately because the question asks "which copier alone will be the most efficient choice to make 1,200 copies?" This problem is not difficult to understand. However, you have to recognize that it will be very time consuming. Depending on where you are at on the test(in terms of the time you have left and your mental state), you might need to make a random guess among A, B, and C to move on. Kudos [?]: 30 [1], given: 27 Intern Joined: 01 Aug 2015 Posts: 2 Kudos [?]: [0], given: 3 Re: M05-35 [#permalink] Show Tags 20 Oct 2015, 03:09 Hi Bunnel, I didnt get how yoy got$90 for 170min? can you please explain me to clear my doubt?
Kudos [?]: [0], given: 3
Current Student
Joined: 12 Aug 2015
Posts: 300
Kudos [?]: 583 [0], given: 1474
Concentration: General Management, Operations
GMAT 1: 640 Q40 V37
GMAT 2: 650 Q43 V36
GMAT 3: 600 Q47 V27
GPA: 3.3
WE: Management Consulting (Consulting)
Show Tags
12 Nov 2015, 22:41
I don't get why this problem is 95% hard rated. "Alone" eliminates right away D and E. Machine C is way too slow and expensive so choice C out. You can spend a minute and calculate to rule out whether B or A however at short reasoning B looks preferable.
Why? Because it is faster when the time matters - attendat gets paid for the time spent and every additional minute could count. Although B is more costly in terms of $/1 page - in time constraints I would bet on B without calculation and continue. _________________ KUDO me plenty Kudos [?]: 583 [0], given: 1474 Manager Joined: 08 Jul 2015 Posts: 58 Kudos [?]: 20 [0], given: 51 GPA: 3.8 WE: Project Management (Energy and Utilities) Re: M05-35 [#permalink] Show Tags 26 May 2016, 02:48 chaitanyakankanaka wrote: Hi Bunnel, I didnt get how yoy got$90 for 170min? can you please explain me to clear my doubt?
chaitanyakankanaka:
Every 60mins the attendant get $30, next min he'll get extra$30 (cost@60mins = $30, cost@61mins = 30 + 30 =$60)
So, 170mins = 60m + 60m + 50m = 30 + 30 + 30 = $90. _________________ [4.33] In the end, what would you gain from everlasting remembrance? Absolutely nothing. So what is left worth living for? This alone: justice in thought, goodness in action, speech that cannot deceive, and a disposition glad of whatever comes, welcoming it as necessary, as familiar, as flowing from the same source and fountain as yourself. (Marcus Aurelius) Kudos [?]: 20 [0], given: 51 Intern Joined: 26 Dec 2016 Posts: 20 Kudos [?]: 3 [0], given: 13 Re: M05-35 [#permalink] Show Tags 10 Feb 2017, 04:23 shasadou wrote: I don't get why this problem is 95% hard rated. "Alone" eliminates right away D and E. Machine C is way too slow and expensive so choice C out. You can spend a minute and calculate to rule out whether B or A however at short reasoning B looks preferable. Why? Because it is faster when the time matters - attendat gets paid for the time spent and every additional minute could count. Although B is more costly in terms of$/1 page - in time constraints I would bet on B without calculation and continue.
Why is Machine C way too slow ? I think it's the fastest one with 19 copies per minute ...
Kudos [?]: 3 [0], given: 13
Intern
Joined: 22 Jan 2017
Posts: 36
Kudos [?]: 9 [0], given: 5
Show Tags
30 Sep 2017, 13:44
1200/19 is approximately 63 not approximately 61. Also, to be more GMAT like, it should be explicitly stated that rate units stay the same the whole time. Other than that, good question and one that really tests how quickly you can do arithmetic.
Kudos [?]: 9 [0], given: 5
Math Expert
Joined: 02 Sep 2009
Posts: 42302
Kudos [?]: 133014 [0], given: 12402
Show Tags
01 Oct 2017, 04:53
grassmonkey wrote:
1200/19 is approximately 63 not approximately 61. Also, to be more GMAT like, it should be explicitly stated that rate units stay the same the whole time. Other than that, good question and one that really tests how quickly you can do arithmetic.
_______________
Edited. Thank you.
_________________
Kudos [?]: 133014 [0], given: 12402
Re: M05-35 [#permalink] 01 Oct 2017, 04:53
Display posts from previous: Sort by
M05-35
new topic post reply Question banks Downloads My Bookmarks Reviews Important topics
Moderators: Bunuel, chetan2u
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2017-11-22 09:57:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4530705511569977, "perplexity": 11278.3798098033}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806543.24/warc/CC-MAIN-20171122084446-20171122104446-00397.warc.gz"}
|
https://answers.ros.org/answers/189996/revisions/
|
# Revision history [back]
Ubuntu 14.04 - ROS Indigo
This is because the depth_registration option is not selected.
Start rqt_reconfigure
rosrun rqt_reconfigure rqt_reconfigure
Select the driver from the left side and activate the depth_registration option.
You should now be able to see the registered rgbd pointcloud in rviz.
|
2021-06-21 11:14:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2525615394115448, "perplexity": 14438.035814319937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00412.warc.gz"}
|
https://www.physicsforums.com/threads/conducting-hollow-tube.155686/
|
# Conducting Hollow Tube
stylez03
## Homework Statement
A very long conducting tube (hollow cylinder) has inner radius a and outer radius b. It carries charge per unit length +alpha , where alpha is a positive constant with units of C/m. A line of charge lies along the axis of the tube. The line of charge has charge per unit length +alpha .
What is the charge per unit length on the inner surface of the tube?
What is the charge per unit length on the outer surface of the tube?
I've found the electric field where r < a, a < r < b, r > b already but I'm not sure how to apply that to the follow questions.
|
2022-12-07 23:07:00
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8146572113037109, "perplexity": 331.177175802203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00705.warc.gz"}
|
https://www.qb365.in/materials/stateboard/Frequently-asked-five-mark-questions-basic-concepts-of-chemistry-and-chemical-calculations-7616.html
|
Frequently asked five mark questions Basic Concepts of Chemistry and Chemical Calculations
11th Standard
Reg.No. :
•
•
•
•
•
•
Chemistry
Answer any 20 of the following questions
Time : 01:30:00 Hrs
Total Marks : 100
Part - A
23 x 5 = 115
1. Balance the following equation by ion-electron method In acidic medium.
$Mn{O}_4^-+I^-\rightarrow MnO_2+I_2$
2. Balance the following equation by ion-electron method In acidic medium.
$Mn{O}_4^-+Fe^{2+}\rightarrow Mn^{2+}+Fe^{3+}$
3. Balance the following equation by ion-electron method In acidic medium.
$Cr{(OH)}_4^-+H_2O_2\rightarrow Cr{O}_4^{2-}$
4. (a) Define equivalent mass of an oxidising agent.
(b) How would you calculate the equivalent mass of potassium permanganate?
5. (a) Define equivalent mass of an reducing agent.
(b) How would you determine the equivalent mass of Ferrous sulphate?
6. A compound on analysis gave the following percentage composition: C=24.47%, H = 4.07 %, CI = 71.65%. Find out its empirical formula.
7. A laboratory analysis of an organic compound gives the following mass percentage composition: C = 60%, H = 4.48% and remaining oxygen.
8. An insecticide has the following percentage composition by mass: 47.5% C, 2.54% H, and 50.0% Cl. Determine its empirical formula and molecular formulae. Molar mass of the substance is 354.5g mol-1
9. Calculate the percentage composition of the elements present in magnesium carbonate. How many Kg of CO2 can be obtained from 100 Kg of is 90% pure magnesium carbonate.
10. Urea is prepared by the reaction between ammonia and carbon dioxide.
2NH3(g) + CO2(g) $\rightarrow$ (NH4)2CO(ag) + H2O(I)
In one process, 637.2 g of NH3 are allowed to react with 1142 g of CO2
(a) Which of the two reactants is the limiting reagent?
(b) Calculate the mass of (NH4)2CO formed.
(c) How much of the excess reagent in grams is left at the end of the reaction?
11. (a) Define oxidation number.
(b) What are the rules used to assign oxidation number?
12. Balance the following equation by oxidation number method.
C6H6 + O2$\rightarrow$CO2 + H2O
13. Balance the following equation by oxidation number method.
KMnO4 + HCI $\rightarrow$ KCl + MnCl2 + H2O + Cl2
14. Explain the steps involved in ion-electron method for balancing redox reaction.
15. Define the following (a) equivalent mass of an acid (b) equivalent mass of a base (c) equivalent mass of an oxidising agent (d) equivalent mass of a reducing agent.
16. Balance the following equation by oxidation number method.
KMnO4 + FeSO4 + H2SO4 $\rightarrow$K2SO4 + MnSO4 + Fe2(SO4)3 + H2O
17. Balancing of the molecular equation in alkaline medium.
MnO2 + O2 + KOH$\rightarrow$K2MnO4 + H2O
18. Write balanced equation for the oxidation of Ferrous ions to Ferric ions by permanganate ions in acid solution. The permanganate ion forms Mn2+ ions under these conditions.
19. A flask A contains 0.5 mole of oxygen gas. Another flask B contains 0.4 mole of ozone gas. Which of the two flasks contains greater number of oxygen atoms.
20. (a) Formulate possible compounds of 'CI' in its oxidation state is: 0, -1, +1, +3, +5, +7
(b) H2O2 act as an oxidising agent as well as reducing agent where as O3 act as only oxidizing agent. Prove it.
21. The Mn3+ ion is unstable in solution and undergoes disproportionation to give Mn2+, MnO2 and H+ ion. Write a balanced ionic equation for the reaction.
22. Chlorine is used to purify drinking water. Excess of chlorine is harmful. The excess chlorine is removed by treating with sulphur dioxide. Present a balanced equation for the reaction for this redox change taking place in water.
23. ${ 2NH }_{ 3 }\left( g \right) +{ CO }_{ 2 }\left( g \right) \rightarrow \underset{Urea}{H_2N}-\overset { \underset { || }{ O } }{ C } -{ NH }_{ 2 }\left( aq \right) +{ H }_{ 2 }O(I)$
(i) If the entire quantity of all the reactants is not consumed in the reaction which is the limiting reagent?
(ii) Calculate the quantity of urea formed and unreacted quantity of the excess reagent. The balanced equation is
$\overset { { 2NH }_{ 3 }+{ CO }_{ 2 } }{ \underset { { H }_{ 2 }NCON{ H }_{ 2 }+{ H }_{ 2 }O }{ \downarrow } }$
|
2019-10-15 19:28:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6566452980041504, "perplexity": 5805.50158959534}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00064.warc.gz"}
|
https://socratic.org/questions/how-do-you-combine-12a-7-35a-a-1-5a
|
# How do you combine 12a - 7/(35a) - a - 1/(5a)?
Mar 4, 2018
$11 a - \frac{2}{5 a}$
#### Explanation:
$12 a - \frac{7}{35 a} - a - \frac{1}{5 a}$ actually contains two different variables:
$a$ and $\frac{1}{a}$.
$12 a - \frac{7}{35 a} - a - \frac{1}{5 a}$
$= 12 a - a - \frac{7}{35 a} - \frac{1 \left(7\right)}{\left(5 a\right) \left(7\right)}$
$= 11 a - \frac{7}{35 a} - \frac{7}{35 a}$
$= 11 a - \frac{14}{35 a}$
$= 11 a - \frac{2}{5 a}$
Mar 4, 2018
convert to whole numbers and then combine.
#### Explanation:
$\frac{7}{35 a}$ can be simplified to $\frac{1}{5 a}$ then we have another $\frac{1}{5 a}$ so we add those 2 and deal with the whole number separately. Thus we get $12 a - a - \frac{1}{5 a} - \frac{1}{5 a} = 11 a - \frac{2}{5 a}$.
Mar 4, 2018
$= \frac{55 {a}^{2} - a - 2}{5 a}$
#### Explanation:
You have to add the fractions by finding a common denominator first and using the equivalent fractions.
$12 a - \frac{7}{35 a} - a - \frac{1}{5 a} \text{ } \leftarrow L C D = 35 a$
The $L C D = 35 a$
$= \frac{12 a}{1} \times \frac{35 a}{35 a} - \frac{7}{35 a} - \frac{a}{1} \times \frac{35 a}{35 a} - \frac{1}{5 a} \times \frac{7}{7}$
$= \frac{420 {a}^{2}}{35 a} - \frac{7}{35 a} - \frac{35 {a}^{2}}{35 a} - \frac{7}{35 a}$
$= \frac{420 {a}^{2} - 7 - 35 {a}^{2} - 7}{35 a}$
$= \frac{385 {a}^{2} - 7 a - 14}{35 a}$
There is common factor of $7$
$= \frac{\cancel{7} \left(55 {a}^{2} - a - 2\right)}{{\cancel{35}}^{5} a}$
$= \frac{55 {a}^{2} - a - 2}{5 a}$
Note that it would have been better to simplify right at the beginning!
$12 a - \frac{\cancel{7}}{{\cancel{35}}^{5} a} - a - \frac{1}{5 a} \text{ } \leftarrow L C D = 5 a$
$= \frac{12 a}{1} - \frac{1}{5 a} - \frac{a}{1} - \frac{1}{5 a}$
$= \frac{60 {a}^{2} - 1 - 5 {a}^{2} - 1}{5 a}$
$= \frac{55 {a}^{2} - a - 2}{5 a}$
|
2021-12-07 00:00:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 27, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8838134407997131, "perplexity": 1086.8155985434025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00147.warc.gz"}
|
https://space.stackexchange.com/tags/atmosphere/hot
|
# Tag Info
Accepted
### Could we breathe an atmosphere that is not nitrogen based?
We can breathe pure oxygen for unlimited time if the pressure is not too high; about 0.4 bar is okay. Breathing pure oxygen at 1 bar is possible for some hours, but a longer time may damage the lungs. ...
• 46.6k
### Why did it take so long to notice that the ozone layer had holes in it? Which satellite provided the data?
I believe the discovery was made by orbiting satellite, but I'm not sure which one. That is not the case. Look at the author affiliation for the article to which you linked. The three authors of that ...
• 64.2k
Accepted
### What will be the effect if we stand on Jupiter?
(*) Jupiter, for all intents and purposes, doesn't have a solid surface to stand on. Not any more than you could say that Earth's atmosphere has it, before you hit Terra Firma. It's an enormous ball ...
• 75.5k
### Why is the breathing atmosphere of the ISS a standard atmosphere (at 1 atm containing nitrogen)?
Am I not considering something? Yes. You are not considering Mir, Soyuz, and the Space Shuttle. The International Space Station is a multinational program, jointly led by the US and Russia. While ...
• 64.2k
Accepted
### Why not increase contact surface when reentering the atmosphere?
I've done a lot of work on this subject with researchers and engineers at JPL, NASA Langley, and NASA Ames. There are some interesting things that come out of high-fidelity CFM (Computational Fluid ...
• 17.9k
### Why didn’t the Spacecraft used for the Apollo 11 mission melt in the Earth’s Atmosphere?
Although the temperature at altitude can be several thousands of degrees, the atmosphere is so thin it does not transfer heat efficiently. Wikipedia explains it very well - The highly diluted gas ...
• 933
Accepted
### Is it harder to enter an atmosphere perpendicular or at an angle
“Bouncing off the atmosphere” is a misleading turn of phrase. When returning to the Earth from the Moon, a spacecraft is on an elliptical orbit with the high end somewhere around the moon’s altitude ...
• 160k
Accepted
### What impact will the deorbiting of thousands of satellites have on the atmosphere?
Not much research has been done on this question in recent years, but some researchers are worried enough to research into wooden satellites. The question on the environmental impact of deorbiting ...
• 11.2k
Accepted
### How do we know what the atmospheric pressure on Mars is?
TFB's answer is correct that both Vikings made barometric measurements (and it is what the question asked for!), but it's worth noting that the atmosphere had been measured before surface instruments ...
• 7,406
Accepted
### Could the Moon keep an atmosphere?
It can keep an atmosphere, and in fact does. The atmosphere is something akin to a high grade Earth-based vacuum. But that's probably not what you are looking for. Okay, so what would happen with, ...
• 118k
### What impact will the deorbiting of thousands of satellites have on the atmosphere?
The mass of Earth's atmosphere is 5E+18 kg and the Troposphere alone has 3/4's of that. With an average height of 13 km that makes its volume $4 \pi r^2 h$ or about 6.6E+18 m^3. If we break up one ...
• 148k
### Why didn’t the Spacecraft used for the Apollo 11 mission melt in the Earth’s Atmosphere?
It's not the temperature that matters, it's the heat transfer. The density of the atmosphere up in the thermosphere is very very thin. There simply isn't nearly enough mass to transfer any ...
• 11.2k
### Are rockets faster than airplanes?
Rockets are much faster than airplanes for most of their flight. Here's a graph of a Space Shuttle launch: The red line is speed. It's in ft/s, 1000 ft/s is 1097 km/h. So At about 45 seconds, the ...
• 121k
Accepted
### Are there any known atmospheres that would support traditional combustion engines?
The short answer is no -- an internal combustion engine needs to pull oxygen from the air to operate, and no solid bodies in the solar system have that kind of atmosphere. Venus' atmosphere is ...
• 160k
### Why is the breathing atmosphere of the ISS a standard atmosphere (at 1 atm containing nitrogen)?
Rory mentions oxygenation rate which is an excellent point but there's additional reasons why not keep ISS atmosphere at a lower pressure - thermal convection and air cycling. Pressure at roughly one ...
• 75.5k
### Would the national flag planted by astronauts on Mars need an upper horizontal pole like the ones on the Moon?
Short answer: Yes. Mars is not windy enough to properly wave most flags. Long answer: In storm conditions, a flag constructed out of a very light material would be able to properly wave. If we take a ...
• 15.1k
### Is it more challenging to put an airship in the Uranian than in the Venusian atmosphere?
Buoyancy is a big problem. To stay aloft, the average density of the balloon envelope, lifting gas and gondola must be <= the density of the surrounding atmosphere. The pressure inside a balloon ...
• 9,929
Accepted
### Why do some meteors explode in air?
The Chelyabinsk meteor was travelling at over 65,000 km/h when it hit the brunt of the atmosphere 23 km high in the air. This is 60 times the speed on sound! NASA estimates that the meteor's mass at ...
• 6,370
### If someone built a vacuum tunnel through the atmosphere, could you have an orbit with a sea level perigee?
No, unless your structure is located directly on the equator and your satellite follows a perfectly circular orbit, atmospheric "orbits" aren't possible, even in a vacuum tunnel. Because the ...
• 15.1k
Accepted
### How could a hot lander enter Titan's atmosphere without setting its hydrocarbons ablaze?
In order for a combustion process to happen, you do not only need fuel, you also need an oxidizer. On Earth, that is usually the oxygen in the air. In Titan's atmosphere, there is no oxygen. This ...
Accepted
### Maximum survivable atmospheric pressure
Based on saturation diving operations, it looks like the limits are as follows: Compressed air: Nitrogen narcosis limits you to around four times Earth's atmospheric pressure. Any gas mix: Hydreliox ...
• 11.5k
Accepted
### Bounce off the atmosphere at reentry?
Yes, a capsule cannot literally bounce off the atmosphere and its kinetic energy must be reduced by an encounter with the atmosphere, rather it would just pass through the atmosphere and back into ...
• 4,153
### In simple terms, how does the way space suits manage breathable gas differ from how scuba gear does it?
Intravehicular spacesuits are worn inside the cabin in case of emergencies, particularly during ascent and descent. The Mercury suits were manufactured by Goodrich. Nearly all other IV suits have ...
• 46.5k
Accepted
### Air in International Space Station
Several approaches are taken. Cargo vehicles bring up Oxygen and other atmospheric components (Nitrogen, etc). The Russian segments life support system works different and independant of the US side,...
• 76.5k
Accepted
### What concentration of oxygen in a planetary atmosphere would be indicative of life?
I would argue that no specific level of molecular or atomic oxygen in atmosphere is indicative of carbon-based life (i.e. life as we know it on Earth). A planet could have oxygen rich atmosphere which ...
• 75.5k
Accepted
### Is the "airship to orbit" mission profile feasible?
Nearly all balloons that have been constructed have been for flights from the surface to altitude. That requires a structure that can survive tethered at the surface in a range of wind speeds in high-...
• 344
### Why does Titan have an atmosphere?
The mass of Titan is 1,345 · 1023 kg, but the mass of the Moon is 7,349 · 1022 kg. The gravity at the surface is 1,35 m/s² for Titan and 1,62 m/s² for the the Moon. But the surface temperatures are ...
• 46.6k
Accepted
### Self-sustainable Hermetically Sealed System: Humans & Plants
This is being done with biosphere 2 and several similar projects and found to be very complicated. Specifically in a small sealed system there is little buffering or inertia available if one element ...
• 17.1k
Launch vehicle operators (or at least the major ones) all seem to drop their fairings such that the heat produced by the remaining atmosphere remains below 1135 W/m$^2$. Not all the operators provide ...
|
2022-07-03 23:15:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45752188563346863, "perplexity": 1824.4186814559596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104277498.71/warc/CC-MAIN-20220703225409-20220704015409-00429.warc.gz"}
|
http://mymathforum.com/math/261312-language-math-logic-subsets-3.html
|
My Math Forum > Math Language, Math, Logic and Subsets
Math General Math Forum - For general math related discussion and news
November 15th, 2015, 10:38 PM #21 Global Moderator Joined: Dec 2006 Posts: 20,966 Thanks: 2216 It's evidently infinite. If it were finite, that would mean there were only, say, n positive integers, which would make n+1 a bit hard to explain!
November 16th, 2015, 12:54 AM #22 Math Team Joined: Nov 2014 From: Australia Posts: 689 Thanks: 244 Most mathematicians prefer indeterminate to undefined. The reason we say that $\dfrac{0}{0}$ is indeterminate is because it can be "equal" to any real number. You'll need some understanding of limits if you want a thorough explanation. There are plenty of indeterminate values in maths: $\dfrac{x}{0}$ (for any x), $\infty - \infty$ and $\dfrac{\infty}{\infty}$ are some of them. Strangely, $1^\infty$ is also an indeterminate form.
November 16th, 2015, 02:07 PM #23 Member Joined: Nov 2015 From: New York State Posts: 41 Thanks: 0 OK, x/0 = indeterminate 1^infinity = indeterminate infinity - infinity = indeterminate infinity / infinity = indeterminate So x/0 = 1^infinity = infinity - infinity = infinity / infinity
November 16th, 2015, 09:14 PM #24 Math Team Joined: Nov 2014 From: Australia Posts: 689 Thanks: 244 No. "Indeterminate" is not a number, so all those equals signs you used are wrong.
November 17th, 2015, 02:52 AM #25 Member Joined: Nov 2015 From: New York State Posts: 41 Thanks: 0 x/x=1 0/0=1 0/0=0 Given x/x=1 you can not have both 0/0=1 and 0/0=0.
November 17th, 2015, 03:46 AM #26 Math Team Joined: Dec 2013 From: Colombia Posts: 7,681 Thanks: 2659 Math Focus: Mainly analysis and algebra $x/0$ is only indeterminate for $x=0$. For all other values of $x$ it is undefined.
November 17th, 2015, 03:49 AM #27 Math Team Joined: Dec 2013 From: Colombia Posts: 7,681 Thanks: 2659 Math Focus: Mainly analysis and algebra You should note that language is neither fixed nor precise - unlike mathematics which both (barring new discoveries). There is thus no 1-to-1 mapping between the two.
Tags language, logic, math, subsets
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post shunya Applied Math 1 October 2nd, 2013 01:14 AM dekker11 Applied Math 4 February 21st, 2012 11:01 AM police1.police2 Algebra 1 December 16th, 2011 12:57 PM RocLobStar New Users 2 April 25th, 2010 12:20 PM random_thinker Applied Math 11 June 27th, 2009 01:11 PM
Contact - Home - Forums - Cryptocurrency Forum - Top
|
2019-09-17 04:16:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.774500846862793, "perplexity": 6926.323196709037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573052.26/warc/CC-MAIN-20190917040727-20190917062727-00144.warc.gz"}
|
http://www.ams.org/mathscinet-getitem?mr=1750311
|
MathSciNet bibliographic data MR1750311 11T55 (11P05) Gallardo, Luis On the restricted Waring problem over \$\bold F_{2^n}[t]\$$\bold F_{2^n}[t]$. Acta Arith. 92 (2000), no. 2, 109–113. Journal
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
|
2017-01-24 12:51:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9982691407203674, "perplexity": 8583.119280973062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00243-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://www.math.ucr.edu/home/baez/rolling/rolling_4.html
|
## Rolling Circles and Balls (Part 4)
#### John Baez
So far in this series we've been looking at what happens when we roll circles on circles:
• In Part 1 we rolled a circle on a circle that's the same size.
• In Part 2 we rolled a circle on a circle that's twice as big.
• In Part 3 we rolled a circle inside a circle that was 2, 3, or 4 times as big.
In every case, we got lots of exciting math and pretty pictures. But all this pales in comparison to the marvels that occur when we roll a ball on another ball!
You'd never guess it, but the really amazing stuff happens when you roll a ball on another ball that's exactly 3 times as big. In that case, the geometry of what's going on turns out to be related to special relativity in a weird universe with 3 time dimensions and 4 space dimensions! Even more amazingly, it's related to a strange number system called the split octonions.
The ordinary octonions are already strange enough. They're an 8-dimensional number system where you can add, subtract, multiply and divide. They were invented in 1843 after the famous mathematician Hamilton invented a rather similar 4-dimensional number system called the quaternions. He told his college pal John Graves about it, since Graves was the one who got Hamilton interested in this stuff in the first place... though Graves had gone on to become a lawyer, not a mathematician. The day after Christmas that year, Graves sent Hamilton a letter saying he'd found an 8-dimensional number system with almost all the same properties! The one big missing property was the associative law for multiplication, namely:
$$(ab)c = a(bc)$$
The quaternions obey this, but the octonions don't. For this and other reasons, they languished in obscurity for many years. But they eventually turned out to be the key to understanding some otherwise inexplicable symmetry groups called 'exceptional groups'. Later still, they turned out to be important in string theory!
I've been fascinated by this stuff for a long time, in part because it starts out seeming crazy and impossible to understand... but eventually it makes sense. So, it's a great example of how you can dramatically change your perspective by thinking for a long time. Also, it suggests that there could be patterns built into the structure of math, highly nonobvious patterns, which turn out to explain a lot about the universe.
About a decade ago I wrote a paper summarizing everything I'd learned so far:
But I knew there was much more to understand. I wanted to work on this subject with a student. But I never dared until I met John Huerta, who, rather oddly, wanted to get a Ph.D. in math but work on physics. That's generally not a good idea. But it's exactly what I had wanted to do as a grad student, so I felt a certain sympathy for him.
And he seemed good at thinking about how algebra and particle physics fit together. So, I decided we should start by writing a paper on 'grand unified theories' — theories of all the forces except gravity:
The arbitrary-looking collection of elementary particles we observe in nature turns out to contain secret patterns — patterns that jump into sharp focus using some modern algebra! Why do quarks have weird fractional charges like 2/3 and -1/3? Why does each generation of particles contain two quarks and two leptons? I can't say we really know the answer to such questions, but the math of grand unified theories make these strange facts seem natural and inevitable.
The math turns out to involve rotations in 10 dimensions, and 'spinors': things that only come around back to the way they started after you turn them around twice. This turned out to be a great preparation for our later work.
As we wrote this article, I realized that John Huerta had a gift for mathematical prose. In fact, we recently won a prize for this paper! In two weeks we'll meet at the big annual American Mathematical Society conference and pick it up.
John Huerta wound up becoming an expert on the octonions, and writing his thesis about how they make superstring theory possible in 10-dimensional spacetime:
The wonderful fact is that string theory works well in 10 dimensions because the octonions are 8-dimensional! Suppose that at each moment in time, a string is like a closed loop. Then as time passes, it traces out a 2-dimensional sheet in spacetime, called a worldsheet:
In this picture, 'up' means 'forwards in time'. Unfortunately this picture is just 3-dimensional: the real story happens in 10 dimensions! Don't bother trying to visualize 10 dimensions, just count: in 10-dimensional spacetime there are 10 - 2 = 8 extra dimensions besides those of the string's worldsheet. These are the directions in which the string can vibrate. Since the octonions are 8-dimensional, we can describe the string's vibrations using octonions! The algebraic magic of this number system then lets us cook up a beautiful equation describing these vibrations: an equation that has 'supersymmetry'.
For a full explanation, read John Huerta's thesis. But for an easy overview, read this paper we published in Scientific American:
This got included in a collection called The Best Writing on Mathematics 2012, further confirming my opinion that collaborating with John Huerta was a good idea.
Anyway: string theory sounds fancy, but for many years I'd been tantalized by the relationship between the octonions and a much more prosaic physics problem: a ball rolling on another ball. I had a lot of clues saying these should be a nice relationship... though only if we work with a mutant version of the octonions called the 'split' octonions.
You probably know how we get the complex numbers by taking the ordinary real numbers and throwing in a square root of -1. But there's also another number system, far less popular but still interesting, called the split complex numbers. Here we throw in a square root of 1 instead. Of course 1 already has two square roots, namely 1 and -1. But that doesn't stop us from throwing in another!
This 'split' game, which is a lot more profound than it sounds at first, also works for the quaternions and octonions. We get the octonions by starting with the real numbers and throwing in seven square roots of -1, for a total of 8 dimensions. For the split octonions, we start with the real numbers and throw in three square roots of -1 and four square roots of 1. The split octonions are surprisingly similar to the octonions. There are tricks to go back and forth between the two, so you should think of them as two forms of the same underlying thing.
Anyway: I really liked the idea of finding the split octonions lurking in a concrete physics problem like a ball rolling on another ball. I hoped maybe this could shed some new light on what the octonions are really all about.
James Dolan and I tried hard to get it to work. We made a lot of progress, but then we got stuck, because we didn't realize it only works when one ball is 3 times as big as the other! That was just too crazy for us to guess.
In fact, some mathematicians had known about this for a long time. Things would have gone a lot faster if I'd read more papers early on. By the time we caught up with the experts, I'd left for Singapore, and John Huerta, still back in Riverside, was the one talking with James Dolan about this stuff. They figured out a lot more.
Then Huerta got his Ph.D. and took a job in Australia, which is as close to Singapore as it is to almost anything. I got a grant from the Foundational Questions Institute to bring John to Singapore and figure out more stuff about the octonions and physics... and we wound up writing a paper about the rolling ball problem:
Whoops! I haven't introduced G2 yet. It's one of those 'exceptional groups' I mentioned: the smallest one, in fact. Like the octonions themselves, this group comes in a few different but closely related 'forms'. The most famous form is the symmetry group of the octonions. But in our paper, we're more interested in the 'split' form, which is the symmetry group of the split octonions. The reason is that this group is also the symmetry group of a ball rolling without slipping or twisting on another ball that's exactly 3 times as big!
The fact that the same group shows up as the symmetries of these two different things is a huge clue that they're deeply related. The challenge is to understand the relationship.
There are two parts to this challenge. One is to describe the rolling ball problem in terms of split octonions. The other is to reverse the story, and somehow get the split octonions to emerge naturally from the study of a rolling ball!
In our paper we tackled both parts. Describing the rolling ball problem using split octonions had already been done by other mathematicians, for example here:
• Andrei Agrachev, Rolling balls and octonions.
• Aroldo Kaplan, Quaternions and octonions in mechanics.
• Robert Bryant and Lucas Hsu, Rigidity of integral curves of rank 2 distributions.
• Gil Bor and Richard Montgomery, G2 and the "rolling distribution".
We do however give a simpler explanation of why this description only works when one ball is 3 times as big as the other.
The other part, getting the split octonions to show up starting from the rolling ball problem, seems to be new to us. We show that in a certain sense, quantizing the rolling ball gives the split octonions! Very roughly, split octonions can been as quantum states of the rolling ball.
At this point I've gone almost as far as I can without laying on some heavy math. In theory I could show you pretty animations of a little ball rolling on a big one, and use these to illustrate the special thing that happens when the big one is 3 times as big. In theory I might be able to explain the whole story without many equations or much math jargon. That would be lots of fun...
... for you. But it would be a huge amount of work for me. So at this point, to make my job easier, I want to turn up the math level a notch or two. And this is a good point for both of us take a little break.
In the next and final post in this series, I'll sketch how the problem of a little ball rolling on a big stationary ball can be described using split octonions... and why the symmetries of this problem give a group that's the split form of G2... if the big ball has a radius that's 3 times the radius of the little one!
I will not quantize the rolling ball problem — for that, you'll need to read our paper.
|
2017-09-24 22:56:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5494617223739624, "perplexity": 548.122223072063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690228.57/warc/CC-MAIN-20170924224054-20170925004054-00149.warc.gz"}
|
https://electronics.stackexchange.com/questions/140965/best-resistance-for-red-blue-3-lead-led
|
# Best resistance for RED/BLUE 3 lead LED
I have a project where a PCB has been manufactured with an error, where a 3 lead BI color LED (red/blue) has a single resistor to limit the current for both colors. I can fix this by hacking apart the PCB but I would like to know what resistor value I could potentially use to satisfy both the RED and BLUE colors of the LED.
This is the datasheet for the LED: http://www.unique-leds.com/images/datasheets/CG/D3009R1B2SBDC.pdf
As per the datasheet the RED has a min/max forward voltage for 1.9/2.5 while the blue has a min/max of 2.9/3.5 If I were to take an average of the mins and maxes and then the average of that:
((1.9+2.9)/2) = 2.4v min
((2.5+3.5)/2) = 3.0v max
((2.4+3.0)/2) = 2.7v averages
and use this to calculate the resistance needed to satisfy both colors:
R = (5v-2.7v)/0.02a
R = 115ohms
Would this be the correct approach to solve my problem or is there a different way I should go about this. Is this even possible without negatively effecting one LED color or the other?
I can manage two separate resistors if I tear up the solder mask and hack it until it works but if I can use the already provided through hole it would make life easier.
Overall I am asking what is the best way to calculate the resistance that would satisfy two colors, RED and BLUE,in a single LED package with 3 leads using a single resistor.
According to your datasheet, the claimed brightness is the same at 20mA for either LED. So if you calculate a resistor to allow 20mA to flow through the BLUE LED then the current through the RED LED will be excessive and its life will be shortened.
Suggest you try it with a resistor calculated to allow 20mA for the RED LED (say 150 ohms) and see if the brightness is visually okay on the blue. It may be just fine. The current through the BLUE LED will then be about 15mA, which should be plenty bright. I suspect this is all you have to do.
If it isn't you could do something else that wouldn't involve hacking the board- you could parallel the RED LED with a resistor that would steal some of the current. Perhaps a small surface-mount resistor between the LED leads. So suppose the resistor you use is 100 ohms, then the current through the BLUE LED will be about 18mA, but the current through the RED LED would be more like 28mA. You could put a 220 ohm resistor in parallel with the RED LED to shunt away 10mA. But I doubt this is necessary- a 50% increase in brightness is really not that visually noticeable.
• Thank you for the response. I will try running the blue LED with the resistor calculated for the red side and see if I like the brightness. If not I have a few options to fix the PCB or the LEDS in such a way to make it work, but it is a bit more work then there should be. Thanks again! – randy newfield Dec 1 '14 at 1:38
A bi-color LED with three leads is just two LEDs sharing either a cathode or an anode. If you want only one resistor, then you can have only one color on at a time. You might do something like this:
simulate this circuit – Schematic created using CircuitLab
If you try to turn both on at the same time, then only the one with the lower forward voltage (red, in your case) will turn on.
If you exceed the maximum current for an LED, it overheats and is damaged. Less current just makes the LED less bright, but does not damage it. So, the thing to is calculate the resistor you might use for each one individually, then use the biggest resistor of all the options.
If the difference in brightness is unacceptable, or if you need the ability to have both colors on at once, you must have two resistors.
• Thank you for the response. I only need one LED lit at a given time as determined by a switch. As per the other answer I determined that I should use the maximum RED resistance and see if the blue brightness is acceptable. Thank you again kindly for your response. – randy newfield Dec 1 '14 at 1:41
|
2021-06-18 03:36:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5774112939834595, "perplexity": 659.2962287451259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634616.65/warc/CC-MAIN-20210618013013-20210618043013-00242.warc.gz"}
|
https://tex.stackexchange.com/questions/338362/including-an-image-on-the-section-title-page-of-beamer
|
# Including an image on the section title page of beamer
I'm making a presentation on beamer and I was able to create a frame with the section title with the code: (I'm sorry for the code, didn't quite understand how to insert it correctly)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Section title frame
\AtBeginSection[]{
\begin{frame}
\vfill
\centering
\end{beamercolorbox}
\vfill
\end{frame}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Section title frame
but now I would like to include an image on the page as well, not the same one for every section, but a particular one for each section.
Thank you
An easy solution would be to store the image name in a macro and redefine it where needed.
\documentclass{beamer}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Section title frame
\AtBeginSection[]{
\begin{frame}
\vfill
\centering
\includegraphics[width=4cm]{\secimage}
\end{beamercolorbox}
\vfill
\end{frame}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Section title frame
\newcommand{\secimage}{example-image-a}
\begin{document}
\section{A}
{
\renewcommand{\secimage}{example-image-b}
\section{B}
}
\section{C}
\end{document}
|
2019-05-20 06:39:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850041627883911, "perplexity": 1139.74910073643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00079.warc.gz"}
|
http://openstudy.com/updates/4ecac34de4b09553999ebd3b
|
## meyruhstfu94 4 years ago Find in the missing step: sec^4 x -2 sec^2 x+1=(sec^2x-1)^2 =????????? =tan^4 x
(tan^2x)^2
[ sec^2x-1= tan^2x ]
3. meyruhstfu94
how is that?
4. meyruhstfu94
is it because tan^2x=sec^2x-1?
Yes
its an identity =)
7. meyruhstfu94
okay thanks
8. darthsid
So, the best way to do this is to go back to basics of a right triangle!! |dw:1321912053720:dw| H = hypotenuse, B = base, P = perpendicular $\sec(x) = \frac{H}{B}$ $\sec^2(x) = \frac{H^2}{B^2}$ $Sec^2(x) - 1 = \frac{H^2}{B^2} - 1 = \frac{H^2 - B^2}{B^2}$ Now, according to the Pythagoras theorem, $H^2 = P^2 + B^2$ Which means $H^2 - B^2 = P^2$ So, $Sec^2(x) - 1 = \frac{H^2 - B^2}{B^2} = \frac{P^2}{B^2}$ $\frac{P}{B} = tan(x)$ Thus, you get your answer :)
|
2016-05-07 00:34:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5050602555274963, "perplexity": 3722.2538823948976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864953696.93/warc/CC-MAIN-20160428173553-00040-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-1-vocabulary-check-page-41/8
|
# Chapter 1 - Vocabulary Check: 8
reciprocals
#### Work Step by Step
No work is needed. $a$ $\ne$ 0 because it will create an undefined fraction.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2017-02-21 05:55:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4944351017475128, "perplexity": 2343.643750898092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00111-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://tutorme.com/tutors/47790/interview/
|
TutorMe homepage
Subjects
PRICING
COURSES
Start Free Trial
Anirban S.
Physics tutor for 5 years
Tutor Satisfaction Guarantee
Physics (Electricity and Magnetism)
TutorMe
Question:
During a thunderstorm, a lightning has struck your car. Is it advisable to leave the car and run or stay inside it?
Anirban S.
If lightning had struck your car, there will be charges accumulated. Now, we know that the body of the car will behave as a conductor. Charges always reside on the surface of the conductor and hence, there will be charges on the body of the car. In case, we want to open the door and run, the potential difference developed will cause charges to flow through our body, which can kill us. So, it is always advisable to sit inside the car in such a situation.
Algebra
TutorMe
Question:
A rectangular field has a perimeter of 104 meters. The length of the field is 12 meters more than its width. Find the length and the width of this field.
Anirban S.
Let the length of the field be l and its breadth be b. Given, perimeter of the field = 2(l + b) = 104 m Also, we have l = 12 + b So, we have two linear equations and two variables. We solve them to get the value of l and b. Substituting l from the second equation into the first equation, we will have $$$$12+2b = 52$$$$ $$$$\implies b = 20$$$$ Pluuging this in the second equation, we get l = 32 Hence, the length and breadth of the rectangular field is 32m and 20m respectively.
Physics
TutorMe
Question:
Suppose you drop a feather and a stone from the top of a tower, which will reach the ground first?
Anirban S.
If we neglect forces due to resistance, in both the cases, the force on the body is due to gravity. According to Newton's law, the force on the body is F=mg. Hence, acceleration g is same for both feather and a stone. So, using equation of motion $$t=\sqrt{\frac{2h}{g}}$$. Since h is same for both, the feather and the stone will reach at the same time. This is possible only when frictional or velocity-dependent forces are not present
Send a message explaining your
needs and Anirban will reply soon.
Contact Anirban
|
2018-12-16 18:31:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.7135540843009949, "perplexity": 503.7485198184136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00134.warc.gz"}
|
https://codereview.stackexchange.com/questions/36647/sass-code-structure-readability
|
# SASS Code Structure / Readability
I'm trying out a new approach to SASS stylesheets, which I'm hoping will make them more organizined, maintainable, and readable.
I often feel like there is a thin line between code that is well structured and code that is entirely convoluted. I would appreciate any thoughts as to which side of the line this code falls.
I don't want to tell you too much more about what these styles are intended to produce -- my hope is that the code will explain this for itself. Also note that this is part of a larger project, so don't worry about missing dependencies, etc.
### Questions for review
1. How would you make this code easier to read/maintain?
2. Can you understand what these styles are trying to produce?
3. Is the purpose of the mixins/placeholders clear?
### File structure:
theme/sass/partials/
widget/
collapsable/
_appendicon.scss
_closeall.scss
_toggleswitch.scss
collapsable.scss
collapsablered.scss
_button.scss
### collapsable.scss
/**
* Collapsable widget.
*
* The widget has "open" and "closed" states.
* The widget has a Toggle Switch, which is visible in
* both open and closed states.
* All other content is hidden in the closed state.
*/
@mixin setOpenState {
&,
&.state-open {
@content;
}
}
@mixin setClosedState {
&.state-closed {
@content;
}
}
@mixin setToggleSwitchStyles {
&>h1:first-child, .collapseableToggle {
@content;
}
}
@import "collapsable/closeall";
@import "collapsable/appendicon";
@import "collapsable/toggleswitch";
%collapsable {
@include setOpenState {
@include setToggleSwitchStyles {
@extend %toggleSwitch;
}
}
@include setClosedState {
@extend %closeAllExceptToggle;
@include setToggleSwitchStyles {
@extend %toggleSwitchClosed;
}
}
}
### collapsablered.scss
@import "collapsable";
@import "../button";
%collapsableRed {
@extend %collapsable;
@include setOpenState {
@include setToggleSwitchStyles {
@extend %buttonWithRedBg;
}
}
@include setClosedState {
@include setToggleSwitchStyles {
@extend %buttonWithDarkBg;
}
}
}
### collapsable/_closeall.scss
%closeAllChildren {
* {
display: none;
}
}
%closeAllExceptToggle {
@extend %closeAllChildren;
@include setToggleSwitchStyles {
display: block;
.icon-sprite {
display: inline-block;
}
}
}
### collapsable/_appendicon.scss
@import "compass/utilities/general/clearfix";
@import "../../icon";
@mixin appendIcon {
@include pie-clearfix;
.icon-sprite {
margin-right: 5px;
vertical-align: -3px;
}
&:after {
content: '';
position: relative;
top: 2px;
float: right;
@content;
}
}
%withCloseIcon {
@include appendIcon {
@extend .icon-close; // defined in _icon.scss
}
}
%withOpenIcon {
@include appendIcon {
@extend .icon-rChevronDk; // defined in _icon.scss
top: 1px;
}
}
### collapsable/_toggleswitch.scss
%toggleSwitch {
cursor: pointer;
@extend %withCloseIcon;
}
%toggleSwitchClosed {
@extend %toggleSwitch;
@extend %withOpenIcon;
}
### partials/_button.scss
@import "typography";
%buttonWithRedBg {
@extend %textOnRedBg; // defined in _typography.scss
cursor: pointer;
&:hover {
background-color: $redDk; } &:active { background-color:$black;
}
}
%buttonWithDarkBg {
@extend %textOnDarkBg; // defined in _typography.scss
cursor: pointer;
&:hover {
background-color: #000;
}
&:active {
background-color: redDk; } } • There appears to be a CSS error in your icon-sprite class. You have vertical-align: -3px, but the vertical-align property only takes specific values like top/bottom/middle/text-top/etc. Also, is this code intended to be portable (Compass extension)? – cimmanon Dec 4 '13 at 16:12 • As I said, this is a component of a larger project, not a portable extension -- though I am concerned with modularity and reusability. Also, I'm looking for comments on readability/structure/etc., not syntax errors -- though vertical-align does take px arguments, for the record. – edan Dec 5 '13 at 3:26 ## 1 Answer Overall, your naming conventions are pretty good. I don't feel like I need to go look at mixins themselves to figure out what their purpose is. The extensive use of extends does concern me, since it can lead to larger CSS rather than smaller like you might expect (see: Mixin, @extend or (silent) class?). Your %textOnDarkBg and %textOnRedBg extend classes might be redundant. If you're not already using Compass, you might want to take a look at it. It offers a function as well as a mixin for setting a good contrasting color against your desired background color (see: http://compass-style.org/reference/compass/utilities/color/contrast/). Highly useful if your project is intended to be themed. Generally speaking, using colors for class names isn't very clear unless the content is about color (eg. a color wheel or a rainbow). What is red for? Is it for errors? Or maybe a call to action? The same thing goes for dark. Using inverted or closed might be better choices. If the site's design is already dark, a dark button probably doesn't make much sense. Your code only allows the user to have exactly 2 colors (default and red), which seems more limited than it needs to be. You could easily make it very flexible by making use of lists (or maps, which will be in the next version of Sass). Here's an example from my own project: // name dark lightdialog-help: #2E3192 #B9C2E1 !default; // purple
$dialog-info: #005FB4 #BDE5F8 !default; // blue$dialog-success: #6F7D03 #DFE5B0 !default; // green
$dialog-warning: #A0410D #EFBBA0 !default; // orange$dialog-error: #C41616 #F8AAAA !default; // red
$dialog-attributes: ( help nth($dialog-help, 1) nth($dialog-help, 2) , info nth($dialog-info, 1) nth($dialog-info, 2) , success nth($dialog-success, 1) nth($dialog-success, 2) , warning nth($dialog-warning, 1) nth($dialog-warning, 2) , error nth($dialog-error, 1) nth($dialog-error, 2) ) !default; @each$a in $dialog-attributes {$name: nth($a, 1);$color: nth($a, 2);$bg: nth($a, 3); %dialog-colors.#{$name} {
color: $color; background-color:$bg;
}
%dialog-colors-inverted.#{$name} { color:$bg;
background-color: $color; } %badge-colors.#{$name} {
background-color: $color; color:$background-color;
}
%button-colors.#{$name} { @include button($base: $bg) { @include button-text($color, inset);
@include button-states;
}
}
%button-colors-inverted.#{$name} { @include button($base: $color) { @include button-text($bg, inset);
@include button-states;
}
}
%button-colors-faded.#{$name} { @include button($base: fade(\$bg, 10%)) {
color: #CCC;
@include button-states;
}
}
}
In case you're wondering why I'm using multiple classes, I've setup a short demo: http://sassmeister.com/gist/7792677
• I'm glad the structure and use of the mixins is easy to grok. The use of @extend vs @mixin is still a bit of a mystery to me, though I've been trying to use @extend to denote a type of style (noun), and @include to do something to a style (verb). I think you're right about the color naming. I split out collapsableRed because I wanted to make it easy to extend the base collapsable widget for different themes. It would probably be better to use a named theme instead of a color name (eg myAwesomeThemeCollapsable). I wouldn't use .42pxWideHeader instead of .sidebarHeader – edan Dec 5 '13 at 19:18
• And yes, I am using Compass. The %textOnWhateverColorBg runs into the same problem as %collapsableWhateverColor -- I shouldn't be naming a class with a specific style value. I'll have to think of a better name. Maybe %textColorsActive and %textColorsInactive would be more descriptive. – edan Dec 5 '13 at 19:23
|
2021-01-23 00:27:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25853341817855835, "perplexity": 6405.824740539646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00180.warc.gz"}
|
https://www.intmath.com/blog/videos/friday-math-movie-math-test-anxiety-1253
|
# Friday Math Movie - Math Test Anxiety
By Murray Bourne, 27 Jun 2008
This week's movie, Math Test Anxiety is a weird Japanese anime.
The video cleverly captures the nightmare that many students experience in the days before a mathematics test. Note how the stress manifests itself in the student's rejection of his mother's kindness.
Some of this movie is disturbing. It's not pretty or fun.
You have been warned.
### 5 Comments on “Friday Math Movie - Math Test Anxiety”
1. JackieB says:
Very strange. You did warn me/us though.
2. Peter says:
Yeh, Zac - that's one weird movie.
I like the funny movies better, but this one expressed the test angst well, as you said.
3. Chi says:
This is from the anime series Paranoia Agent by Satoshi Kon. If you see the rest of his work, this isn't as disturbing as it could be.
4. Murray says:
Thanks for the background, Chi.
5. Ralph Spencer says:
Math ain't that bad, rather it ain't bad at all. It applies in daily life. Practice well and you score full. The movie's just some kinda - CRAP.
### Comment Preview
HTML: You can use simple tags like <b>, <a href="...">, etc.
To enter math, you can can either:
1. Use simple calculator-like input in the following format (surround your math in backticks, or qq on tablet or phone):
a^2 = sqrt(b^2 + c^2)
(See more on ASCIIMath syntax); or
2. Use simple LaTeX in the following format. Surround your math with $$ and $$.
$$\int g dx = \sqrt{\frac{a}{b}}$$
(This is standard simple LaTeX.)
NOTE: You can mix both types of math entry in your comment.
## Subscribe
* indicates required
From Math Blogs
|
2020-11-30 05:01:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7990214824676514, "perplexity": 14625.520406263699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141205147.57/warc/CC-MAIN-20201130035203-20201130065203-00544.warc.gz"}
|
http://www.koreascience.or.kr/article/JAKO201022442401816.page
|
# 초포화계획을 평가하기 위한 그래픽방법
Kim, Youn-Gil;Jang, Dae-Heung
김영일;장대흥
• Accepted : 20090900
• Published : 2010.02.28
• 26 5
#### Abstract
The orthogonality is an important property in the experimental designs. We usually use supersaturated designs in case of large factors and small runs. These supersaturated designs do not satisfy the orthogonality. Hence, we need the means for the evaluation of the degree of the orthogonality of given supersaturated designs. We usually use the numerical measures as the means for evaluating the degree of the orthogonality of given supersaturated designs. We can use the graphical methods for evaluating the degree of the orthogonality of given supersaturated designs.
#### Keywords
Supersaturated designs;orthogonality;orthogonality evaluation scatterplot matrix;r-plot
#### References
1. 장대흥 (2004). 근사직교배열의 직교성의 정도를 평가하기 위한 그래픽방법, <품질경영학회지>, 32, 220-228.
2. Balkin, S. D. and Lin, D. K. J. (1998). A graphical comparison of supersaturated designs, Communications in Statistics-Theory and Methods, 27, 1289-1303. https://doi.org/10.1080/03610929808832159
3. Booth, K. H. V. and Cox, D. R. (1962). Some systematic supersaturated designs, Technometrics, 4, 489-495. https://doi.org/10.2307/1266285
4. Bruno, M. C., Dobrijevic, M., Luu, P. T. and Sergent, M. (2009). A new class of supersaturated designs: Application to a sensitivity study of a photochemical model, Chemometrics and Intelligent Laboratory Systems, 95, 86-93. https://doi.org/10.1016/j.chemolab.2008.09.001
5. Butler, N. A. (2009). Two-level supersaturated designs for $2^k$ runs and other cases, Journal of Statistical Planning and Inference, 139, 23-29. https://doi.org/10.1016/j.jspi.2008.05.013
6. Cela, R., Martinez, E. and Carro, A. M. (2000). Supersaturated experimental designs: New approaches to building and using it, Part I. Building optimal supersaturated designs by means of evolutionary algorithms, Chemometrics and Intelligent Laboratory Systems, 52, 167-182. https://doi.org/10.1016/S0169-7439(00)00091-5
7. Cela, R., Martinez, E. and Carro, A. M. (2001). Supersaturated experimental designs: New approaches to building and using it, Part II. Solving supersaturated designs by genetic algorithms, Chemometrics and Intelligent Laboratory Systems, 57, 75-92. https://doi.org/10.1016/S0169-7439(01)00127-7
8. Jang, D. H. (2002). Measures for evaluating non-orthogonality of experimental designs, Communications in Statistics-Theory and Methods, 31, 249-260. https://doi.org/10.1081/STA-120002649
9. Jones, B. A., Nachtsheim, C. J. and Ye, K. Q. (2009). Model-robust supersaturated and partially supersaturated designs, Journal of Statistical Planning and Inference, 139, 45-53. https://doi.org/10.1016/j.jspi.2008.05.015
10. Koukouvinos, C. and Mylona, K. (2009). Group screening method for the statistical analysis of $E(f_{NOD})-optimal$ mixed-level supersaturated designs, Statistical Methodology, 6, 380-388. https://doi.org/10.1016/j.stamet.2008.12.002
11. Koukouvinos, C., Mylona, K. and Simos, D. E. (2009). A hybrid SAGA algorithm for the construction of $E(s^2)-optimal$ cyclic supersaturated designs, Statistical Methodology, 6, 380-388. https://doi.org/10.1016/j.stamet.2008.12.002
12. Li, W. W. and Wu, C. F. J. (1997). Columnwise-pairwise algorithms with applications to the construction of supersaturated designs, Technometrics, 39, 171-179. https://doi.org/10.2307/1270905
13. Lin, D. K. J. (1995). Generating systematic supersaturated designs, Technometrics, 37, 213-225. https://doi.org/10.2307/1269622
14. Liu, M. Q. and Zhang L. (2009). An algorithm for constructing mixed-level k-circulant supersaturated designs, Computational Statistics and Data Analysis, 53, 2465-2470. https://doi.org/10.1016/j.csda.2008.12.009
15. Phoa, F. K. H., Pan, Y. H. and Xu, H. (2009). Analysis of supersaturated designs via Danzig selector, Journal of Statistical Planning and Inference, 139, 2362-2372. https://doi.org/10.1016/j.jspi.2008.10.023
16. Rais, F., Kamoun, A., Chaabouni, M., Bruno, C., Luu, P. T. and Sergent, M. (2009). Supersaturated design for screening factors influencing the preparation of sulfated amides of olive pomace oil fatty acids, Chemometrics and Intelligent Laboratory Systems, 99, 71-78. https://doi.org/10.1016/j.chemolab.2009.07.015
17. Sarkar, A., Lin, D. K. J. and Chatterjee, K. (2009). Probability of correct model identification in supersaturated designs, Statistics and Probability Letters, 79, 1224-1230. https://doi.org/10.1016/j.spl.2009.01.017
18. Wu, C. F. J. (1993). Construction of supersaturated designs through partially aliased interactions, Biometrika, 80, 661-669. https://doi.org/10.1093/biomet/80.3.661
#### Cited by
1. Visualization for Experimental Designs vol.24, pp.5, 2011, https://doi.org/10.5351/KJAS.2011.24.5.893
|
2019-04-22 06:42:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7558980584144592, "perplexity": 14273.860616881753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578544449.50/warc/CC-MAIN-20190422055611-20190422081611-00253.warc.gz"}
|
https://www.tutorialspoint.com/find-the-coordinates-of-a-point-a-where-ab-is-the-diameter-of-a-circle-whose-centre-is-2-3-and-b-is-1-4
|
# Find the coordinates of a point $A$, where $AB$ is the diameter of a circle whose centre is $(2, -3)$ and $B$ is $(1, 4)$.
#### Complete Python Prime Pack
9 Courses 2 eBooks
#### Artificial Intelligence & Machine Learning Prime Pack
6 Courses 1 eBooks
#### Java Prime Pack
9 Courses 2 eBooks
Given:
AB is diameter of a circle whose centre is $( 2,\ -3)$ and B is $( 1,\ 4)$.
To do:
We have to find the coordinates of point A.
Solution:
Let the centre be $O(2,\ -3)$ and coordinates of point A be $( x,\ y)$.
$AB$ is the diameter of the circle with centre $O$.
This implies,
$O$ is the mid-point of AB.
We know that,
Mid-point of two points $( x_{1},\ y_{1})$ and $( x_{2},\ y_{2})$ is,
$(x,y)=( \frac{x_{1}+x_{2}}{2},\ \frac{y_{1}+y_{2}}{2})$
Using the mid-point formula,
$( 2,\ -3)=( \frac{x+1}{2},\ \frac{y+4}{2})$
Equating the coordinates on both sides, we get,
$\frac{x+1}{2}=2$ and $\frac{y+4}{2}=-3$
$\Rightarrow x+1=2(2)$ and $y+4=-3(2)$
$\Rightarrow x+1=4$ and $y+4=-6$
$\Rightarrow x=4-1$ and $y=-6-4$
$\Rightarrow x=3$ and $y=-10$
The coordinates of point A are $(3,-10)$.
Updated on 10-Oct-2022 13:22:05
|
2022-12-01 06:16:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5122828483581543, "perplexity": 1318.8131554543013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00728.warc.gz"}
|
https://brilliant.org/problems/thats-nifty/
|
# That's nifty
$11*121=1331$. Is it possible to exchange a pair of digits such that the product's value raises by one ?
×
|
2021-06-18 18:33:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4518768787384033, "perplexity": 1825.416555336987}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487640324.35/warc/CC-MAIN-20210618165643-20210618195643-00617.warc.gz"}
|
https://cstheory.stackexchange.com/questions/47835/extending-cographs-with-product-operation
|
# Extending cographs with product operation
Let $$\mathcal{C}$$ be the class of undirected graphs defined inductively as follows:
• A single vertex is in $$\mathcal{C}$$;
• If $$G\in\mathcal{C}$$ then its complement $$\overline{G}$$ is in $$\mathcal{C}$$;
• If $$G,H\in\mathcal{C}$$ then their disjoint union $$G\oplus H$$ is in $$\mathcal{C}$$;
• If $$G,H\in\mathcal{C}$$ then their tensor product $$G\otimes H$$ is in $$\mathcal{C}$$.
This class of graphs appears in linear logic: this is how coherent spaces are defined.
I would like to now if this class have been studied from the point of view of graph theory. Does it have a name? Does it admit other characterizations?
If we restrict the definition to the first three lines we get cographs. But when we add the product operation we can create the 4-nodes path $$P_4$$, clique-width is then $$>2$$. But is the clique-width bounded for the whole class?
It was not clear for me how to generate paths of arbitrary lengths. This can be done by induction as follows, where the induction hypothesis is:
"$$\forall n$$, there is a graph $$G_n$$ of $$\mathcal{C}$$ containing $$P_n$$ as an induced subgraph"
• The paths $$P_2$$ and $$P_3$$ are clearly in $$\mathcal{C}$$. Take $$G_2$$ and $$G_3$$ to be respectively $$P_2$$ and $$P_3$$.
• For $$n\geq 4$$, we define $$G_n$$ as the tensor product of $$G_{n-1}$$ (obtained by IH) with the graph $$H_n$$ whose set of vertices is $$[1,n]$$ and which contains an edge between every pair of vertices except between $$n$$ and $$n-3$$. The graph $$H_n$$ is clearly a cograph thus belongs to $$\mathcal{C}$$. It is not very hard to see that $$G_{n-1}\otimes H_n$$ contains $$P_n$$ as an induced subgraph.
• Following your construction; i think for every graph G your class contains a graph H that contains G as an induced subgraph Nov 11, 2020 at 23:16
• That is right, interesting ! Nov 12, 2020 at 11:32
A path has cliquewidth $$3$$, but the tensor product of two paths of length $$n$$ will contain a $$\Omega(n) \times \Omega(n)$$ size grid as an induced subgraph. And $$n \times n$$ grids are known to have cliquewidth $$\Omega(n)$$ (see “The rank-width of the square grid“ by Vít Jelínek)
• It was not clear for me why $\mathcal{C}$ contains paths of arbitrary lengths. I updated my post to make this clear. Nov 10, 2020 at 9:48
|
2022-05-24 19:22:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8900396823883057, "perplexity": 193.4415418700549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00741.warc.gz"}
|
https://www.gamedev.net/forums/topic/449258-problems-concerning-rotation-around-a-specific-point-and-axis-help/
|
• 12
• 9
• 9
• 13
• 10
# Problems concerning rotation around a specific point and axis... Help!!
This topic is 3953 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hello all! I made a camera class, it has a camera eyepoint in the scene graph, and calling movement methods of this class will move the eyepoint accordingly.. for example calling pan(dx,dy) will call eye.TranslateLocal(dx,dy,0).. etc. The problem is with the orbit method.. I want the camera to rotate around its target while looking at it. horizontal rotation is no problem, it's just rotation around the parent Z axis, like this:
Public Sub Orbit(ByVal angle As Single)
mEye.LocalTransform *= Transform.Tranlation(-mLocalTarget) * Transform.RotationZ(-angle) * Transform.Tranlation(mLocalTarget)
End Sub
where the Transform class is similar to the matrix class of directx. now when I try to rotate vertically, I have to rotate around the target's location and the axis is the eyepoint's local x. first I tried to do this:
Public Sub Tilt(ByVal angle As Single)
Dim relation As Vector3 = mLocalTarget - mEye.LocalPosition
mEye.LocalTransform = Transform.Tranlation(-mLocalTarget) * Transform.Tranlation(relation) * Transform.RotationX(angle) * Transform.Tranlation(-relation) * Transform.Tranlation(mLocalTarget) * mEye.LocalTransform
End Sub
but this made the eyepoint rotate around a point somewhere between it and the target (I have no idea why.. I'm not that good in this kind of math)... Then I tried this:
Public Sub Tilt(ByVal angle As Single)
Dim relation As Vector3 = mLocalTarget - mEye.LocalPosition
Dim z As New Vector3(0, 0, 1)
Dim rNorm As Vector3 = relation
rNorm.Normalize()
'Calculate the current angle between the sight vector and the z vector
Dim current As Double = Math.Acos(Vector3.Dot(rNorm, z))
'if the angle is in [0,180] after rotation, apply the rotation
Dim res As Double = current + angle
If (res > 0) AndAlso (res < Math.PI) Then
Dim x As Vector3 = Vector3.Cross(z, relation)
x.Normalize()
Dim m As Matrix = Matrix.RotationAxis(x, angle)
relation = Vector3.TransformNormal(relation, m)
mEye.LocalRotationMatrix *= m
mEye.LocalPosition = mLocalTarget - relation
End If
End Sub
and this worked quite fine, untill it randomally starts to sway around with not obvious reason.. I spent days trying different ways and nothing worked. it's either a precision problem or something similar.. Please help!! I'm using managed directx9
##### Share on other sites
Cross posted with Math & Physics.
|
2018-03-20 23:53:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3977872133255005, "perplexity": 6373.178301905343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.84/warc/CC-MAIN-20180320224824-20180321004824-00233.warc.gz"}
|
https://nrich.maths.org/7411
|
# Weekly Challenge 48: Quorum-sensing
##### Age 14 to 16 Short Challenge Level:
Rudolph's nose glows because it is home to a species of bacteria, Vibrio rudolphi, that luminesces when it reaches a certain population density. It detects the size of its population by quorum sensing: each bacterial cell releases a signal molecule, X, at a rate of $1$ per minute and if the concentration of X is greater than or equal to $10^{11}$ cells/ml, the bacteria will glow. X decays with a half life of ten minutes but the bacteria divide every 30 minutes.
Sadly, Rudolph catches a nasty cold, which, by the time he is better, has killed all of the bacterial cells in his nose except for one. Santa is worried: there are only 24 hours left until Christmas. Will Rudolph's nose be glowing again in time?
If you need any data that is not included, try to estimate it: Santa wants an answer now, so that he can make alternative plans if need be.
Did you know ... ?
The mathematics of rates and half-lives are of great importance in mathematical biology where growth factors are often in competition with decay factors.
You can find more short problems, arranged by curriculum topic, in our short problems collection.
|
2018-10-21 12:14:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38232049345970154, "perplexity": 2083.289002654577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514005.65/warc/CC-MAIN-20181021115035-20181021140535-00479.warc.gz"}
|
https://stats.stackexchange.com/questions/264533/how-should-feature-selection-and-hyperparameter-optimization-be-ordered-in-the-m/323899
|
# How should Feature Selection and Hyperparameter optimization be ordered in the machine learning pipeline?
My objective is to classify sensor signals. The concept of my solution so far is : i) Engineering features from raw signal ii) Selecting relevant features with ReliefF and a clustering approach iii) Apply N.N, Random Forest and SVM
However I am trapped in a dilemma. In ii) and iii), there are hyperparameters like k-Nearest Neigbours for ReliefF or the window length, for which the sensor signal is evaluated, or the number of hidden units in each layer of N.N.
There are 3 Problems I see here : 1) Tuning feature selection parameters will influence the classifier performance 2) Optimizing hyperparameters of classifier will influence the choice of features. 3) Evaluating each possible combination of configuration is intractable.
So my questions are : a) Can I make a simplifying assumption, s.t. tuning feature selection parameters can be decoupled from tuning classifier parameters ? b) Are there any other possible solutions ?
• I think decoupling feature selection tuning and classifier tuning is valid, since the heuritsic for reliefF aims to maximize inter-class variance and minimize intra-class variance which also indicates a good classifier. Therefor tuning optimal parameters for reliefF also makes a good classifer more 'likely'. However having a mathematical formulation to back this idea up, would be very nice. – Grunwalski Mar 1 '17 at 9:08
• A specific variant of this question: Should feature selection be part of the crossvalidation routine (as in: #for each classifer hyperparam set: #for each k-fold CV run: 1) feature selection, 2) feature scaling, 3) classifier fit 4) predict on test set ? – Nikolas Rieble Jan 19 '18 at 8:59
• @NikolasRieble I just wrote an answer to the original question, and also included your question in the answer – Dennis Soemers Jan 19 '18 at 9:48
Like you already observed yourself, your choice of features (feature selection) may have an impact on which hyperparameters for your algorithm are optimal, and which hyperparameters you select for your algorithm may have an impact on which choice of features would be optimal.
So, yes, if you really really care about squeezing every single percent of performance out of your model, and you can afford the required amount of computation, the best solution is probably to do feature selection and hyperparamter tuning "at the same time". That's probably not easy (depending on how you do feature selection) though. The way I imagine it working would be like having different sets of features as candidates, and treating the selection of one set of features out of all those candidate sets as an additional hyperparameter.
In practice that may not really be feasible though. In general, if you cannot afford to evaluate all the possible combinations, I'd recommend:
1. Very loosely optimize hyperparameters, just to make sure you don't assign extremely bad values to some hyperparameters. This can often just be done by hand if you have a good intuitive understanding of your hyperparameters, or done with a very brief hyperparameter optimization procedure using just a bunch of features that you know to be decently good otherwise.
2. Feature selection, with hyperparameters that are maybe not 100% optimized but at least not extremely terrible either. If you have at least a somewhat decently configured machine learning algorithm already, having good features will be significantly more important for your performance than micro-optimizing hyperparameters. Extreme examples: If you have no features, you can't predict anything. If you have a cheating feature that contains the class label, you can perfectly classify everything.
3. Optimize hyperparameters with the features selected in the step above. This should be a good feature set now, where it actually may be worth optimizing hyperparams a bit.
To address the additional question that Nikolas posted in the comments, concering how all these things (feature selection, hyperparameter optimization) interact with k-fold cross validation: I'd say it depends.
Whenever you use data in one of the folds for anything at all, and then evaluate performance on that same fold, you get a biased estimate of your performance (you'll overestimate performance). So, if you use data in all the folds for the feature selection step, and then evaluate performance on each of those folds, you'll get biased estimates of performance for each of them (which is not good). Similarly, if you have data-driven hyperparameter optimization and use data from certain folds (or all folds), and then evaluate on those same folds, you'll again get biased estimates of performance. Possible solutions are:
1. Repeat the complete pipeline within every fold separately (e.g. within each fold, do feature selection + hyperparameter optimization and training model). Doing this means that k-fold cross validation gives you unbiased estimates of the performance of this complete pipeline.
2. Split your initial dataset into a ''preprocessing dataset'' and a ''train/test dataset''. You can do your feature selection + hyperparameter optimization on the ''preprocessing dataset''. Then, you fix your selected features and hyperparameters, and do k-fold cross validation on the ''train/test dataset''. Doing this means that k-fold cross validation gives you unbiased estimates of the performance of your ML algorithm given the fixed feature-set and hyperparameter values.
Note how the two solutions result in slightly different estimates of performance. Which one is more interesting depends on your use-case, depends on how you plan to deploy your machine learning solutions in practice. If you're, for example, a company that intends to have the complete pipeline of feature selection + hyperparameter optimization + training running automatically every day/week/month/year/whatever, you'll also be interested in the performance of that complete pipeline, and you'll want the first solution.
If, on the other hand, you can only afford to do the feature selection + hyperparameter optimization a single time in your life, and afterwards only somewhat regularly re-train your algorithm (with feature-set and hyperparam values fixed), then the performance of only that step will be what you're interested in, and you should go for the second solution
• Can you provide references as well? – Nikolas Rieble Jan 23 '18 at 9:34
• There are some pictures of a well-known book in this post: nodalpoint.com/not-perform-feature-selection . Those seem to agree with my ''possible solution 1''. I don't have a reference necessarily for the other case, other than... myself? I did provide my reasoning/motivation there, which in my opinion checks out, so that's the reference :D – Dennis Soemers Jan 23 '18 at 9:50
• That chapter of ESL should be 100% required reading for any predictive modeler. – Matthew Drury Jan 25 '18 at 20:18
• So regarding soln 1, how do you get your final feature set and model hyperparameters after running feature selection (fs) and hyperparam optimization (ho) in several iters of cv? As well, when we perform these in an iter of cv, do we run fs first, and then ho using those features? – sma Jun 14 '18 at 5:45
• @skim CV is generally used just to get a good estimate of performance. You typically wouldn't directly start using any of the models trained in one of the sets of $K - 1$ folds. If you find the performance as estimated through CV to be satisfactory, you'd run the complete pipeline once more on the full training dataset (including, again, feature selection and hyperparam tuning). The feature set + hyperparams + model you get from that is what you'd put "in production" – Dennis Soemers Jun 14 '18 at 8:06
No one mentioned approaches that make hyper-parameter tuning and feature selection the same so I will talk about it. For this case you should engineer all the features you want at the beginning and include them all.
Research now in the statistics community have tried to make feature selection a tuning criterion. Basically you penalize a model in such a way that it is incentivized to choose only a few features that help it make the best prediction. But you add a tuning parameter to determine how big of a penalty you should incur.
In other words you allow the model to pick the features for you and you more or less have control of the number of features. This actually reduces computation because you no longer have to decide which features but just how many features and the model does the rest.
So then when you do cross-validation on the parameter then you are effectively doing cross-validation on feature selection as well.
Already there are many ML models that incorporate this feature selection in some way or another.
• Doubly-regularized support vector machines which is like normal SVM but with feature selection
• Elastic net which deals with linear regression
• Drop-out regularization in neural networks (don't have reference for this one)
• Random forest normally does random subsets of the features so kind of handles feature selection for you
In short, people have tried to incorporate parameter tuning and feature selection at the same time in order reduce complexity and be able to do cross-validation
@DennisSoemers has a great solution. I'll add a two similar solutions that are a bit more explicit and based on Feature Engineering and Selection: A Practical Approach for Predictive Models by Max Kuhn and Kjell Johnson.
Kuhn uses the term resample to describe a fold of a dataset, but the dominant term on StackExchange seems to be fold, so I will use the term fold below.
Option 1 - nested search
If compute power is not a limiting factor, a nested validation approach is recommended, in which there are 3 levels of nesting:
1) the external folds, each fold with a different feature subset
2) the internal folds, each fold with a hyperparameter search
3) the internal folds of each hyperparameter search, each fold with a different hyperparameter set.
Here's the algorithm:
-> Split data into train and test sets.
-> For each external fold of train set:
-> Select feature subset.
-> Split into external train and test sets.
-> For each internal fold of external train set:
-> Split into internal train and test sets.
-> Perform hyperparameter tuning on the internal train set. Note that this
step is another level of nesting in which the internal train set is split
into multiple folds and different hyperparameter sets are trained and tested on
different folds.
-> Examine the performance of the best hyperparameter tuned model
from each of the inner test folds. If performance is consistent, redo
the internal hyperparameter tuning step on the entire external train set.
-> Test the model with the best hyperparameter set on the external test set.
-> Choose the feature set with the best external test score.
-> Retrain the model on all of the training data using the best feature set
and best hyperparameters for that feature set.
The -> Select feature subset step is implied to be random, but there are other techniques, which are outlined in the book in Chapter 11.
To clarify the -> Perform hyperparameter tuning step, you can read about the recommended approach of nested cross validation. The idea is to test the robustness of a training process by repeatedly performing the training and testing process on different folds of the data, and looking at the average of test results.
Option 2 - separate hyperparameter and feature selection search
-> Split data into hyperameter_train, feature_selection_train, and test sets.
-> Select a reasonable subset of features using expert knowledge.
-> Perform nested cross validation with the initial features and the
hyperparameter_train set to find the best hyperparameters as outlined in option 1.
-> Use the best hyperparameters and the feature_selection_train set to find
the best set of features. Again, this process could be nested cross
validation or not, depending on the computational cost that it would take
and the cost that is tolerable.
Here's how Kuhn and Johsnon phrase the process:
When combining a global search method with a model that has tuning parameters, we recommend that, when possible, the feature set first be winnowed down using expert knowledge about the problem. Next, it is important to identify a reasonable range of tuning parameter values. If a sufficient number of samples are available, a proportion of them can be split off and used to find a range of potentially good parameter values using all of the features. The tuning parameter values may not be the perfect choice for feature subsets, but they should be reasonably effective for finding an optimal subset.
Chapter 12.5: Global Search Methods
I think you are overthinking quite a bit there. Feature selection, which is part of feature engineering, is usually helpful but some redundant features are not much harmful in early stage of a machine learning system. So best practice is that you generate all meaningful features first, then use them to select algorithms and tune models, after tuning the model you can trim the feature set or decide to use new features.
The machine learning procedure is actually an iterating process, in which you do feature engineering, then try with some algorithms, then tune the models and go back until you are satisfied with the result.
• You mean it is trying untill it works :D – Grunwalski Apr 20 '17 at 12:57
• Trying in an ML procedure, not randomly. Actually ML is actually a bit of hacking per se. – THN Apr 20 '17 at 15:03
• My answer is from a practical machine learning perspective. Best practices in the field rely on techniques including bagging, boosting, weight decay, dropout, batch normalization, stochastic optimizer, and others. People who don't know these techniques will not necessarily comprehend and appreciate my answer. – THN May 17 at 8:20
|
2020-08-06 19:24:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39198359847068787, "perplexity": 1176.4281380657906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737019.4/warc/CC-MAIN-20200806180859-20200806210859-00507.warc.gz"}
|
https://www.nature.com/articles/s41467-021-24531-9?error=cookies_not_supported
|
## Introduction
Bound states appear in superconductors at localized perturbations in the superconducting order parameter. Both Caroli de Gennes Matricon (CdGM) states at vortex cores1,2 and Yu-Shiba-Rusinov (YSR) states at magnetic impurities3,4,5 are examples of this phenomenon. YSR states provide mixed electron-hole excitations that serve to create the conditions needed for Majorana states. These are expected for instance at the ends of magnetic chains of YSR atoms6,7,8,9. On the other hand, CdGM states have been proposed to isolate and manipulate Majoranas in a topological superconductor10,11,12,13,14,15.
The nature of YSR and CdGM states is, however, quite different. YSR states are spin polarized and appear at a single or a few subgap energies and exhibit oscillations at the Fermi wavelength λF that can be resolved with atomic scale local density of states (LDOS) measurements16,17,18,19. By contrast, CdGM states are spin degenerate and form a quasi-continuum with a level separation Δ2/EF (where Δ is the superconducting gap and EF is the Fermi energy), which is usually small compared to Δ. Thus, their discreteness and their mixed electron-hole character only appears at very low temperatures or for Δ ≈ EF and in absence of scattering, in the so called quantum limit20. Otherwise, thermal excitations or defects produce dephasing resulting in an electron-hole symmetric LDOS pattern at vortex cores.
Thus, in most cases, CdGM states are electron-hole symmetric and their features in the LDOS extend to much larger distances than those of YSR states. Here, we ask the question if we can build a hybrid quantum system consisting in vortices close to magnetic impurities and transfer the quantum property of YSR states, i.e. their electron-hole asymmetry, into the more extended CdGM states far from the quantum limit.
As we show below, we indeed theoretically predict and experimentally observe electron-hole asymmetric features in the LDOS of vortices in presence of magnetic impurities. As we schematically represent in Fig. 1, a magnetic impurity close to a vortex core induces a coupling between CdGM states with n and n ± 1 angular momenta. This coupling produces a slight shift of the charge density of the positive (negative) energy excitations towards (outwards) the impurity with respect to their mean position, which remains even away of the quantum limit. The sign of the coupling changes when the bands at the Fermi energy have a hole (electron) character. The discrete nature of CdGM states is thus revealed in the vortex core LDOS: the difference between the electron and hole LDOS would exhibit an axial asymmetry as illustrated in Fig. 1d, e, with a larger LDOS close to the position of the magnetic impurity. We emphasize that, in contrast to the above mentioned oscillations at the tiny λF scale, the electron-hole asymmetric feature in the vortex LDOS occurs at a much larger length scale.
The transition metal dichalcogenide two-dimensional (2D)-layered superconductor 2H-NbSe2 is the first material where the vortex LDOS has been measured and one of the few where the nature of CdGM states has been extensively studied, both in experiment and theory21,22,23. YSR impurities have been also imaged in detail in this material18,24,25. Vortex cores in 2H-NbSe2 (Tc = 7.2 K) are highly anisotropic, with a characteristic sixfold star shape. Previous work imaged YSR impurities and vortex cores at the same time, but did not identify any particular connection18. On the other hand, when doped with S as in 2H-NbSe1.8S0.2 (Tc = 6.6 K) the vortex core CdGM states are in-plane isotropic, leading to round shaped, symmetric, vortex cores26. Using these two systems, we can study the interaction between YSR and CdGM states for in-plane isotropic (2H-NbSe1.8S0.2) and anisotropic (2H-NbSe2) vortices.
## Results
### Length scales for CdGM and YSR states
It is of interest to first analyze the different length scales associated with isolated CdGM and YSR states, particularly in the case of 2H-NbSe1.8S0.2 (Fig. 2). As we show in Fig. 2a, CdGM states provide a zero bias peak at the center of the vortex that decays with distance at a scale, which is generally larger than the coherence length (of approximately 10 nm in 2H-NbSe2 and 7 nm in 2H-NbSe1.8S0.2 as obtained from Hc2(T)) and is magnetic field dependent26,27. The vortex core size at the magnetic fields considered here is of ξV ≈ 30 nm26. The zero bias peak splits when leaving the vortex core, as shown in previous work20,21,22,23,26,28,29. The sixfold anisotropy characteristic of vortex cores in 2H-NbSe2 is washed out by the S substitutional disorder in 2H-NbSe1.8S0.2. On the other hand, a YSR state in 2H-NbSe1.8S0.2 is shown in Fig. 2b. There is a conductance peak within the superconducting gap, which changes from positive to negative bias voltage values at a scale of order of λF (about 0.7 nm, see lower right inset in Fig. 2b and ref. 18). At the same time, the height of the peak decreases exponentially with distance (lower right inset of Fig. 2b). Thus, we see that the effect of YSR states is transposed two orders of magnitude in distance, from λF to ξV, and leads to the electron-hole asymmetric CdGM states we discuss below.
### Magnetism of substitutional Fe impurities
To better understand the magnetism at the Fe impurities we calculated the electronic structure and the spin-density isosurface, i.e., the difference between spin-up and spin-down charge densities (Fig. 3), by means of Hubbard-corrected density functional theory (DFT+U). We constructed a slab model with a 4 × 4 × 2 supercell that contains one isolated magnetic impurity at a Nb site. To model 2H-NbSe1.8S0.2, we introduced approximately 10% S atoms (115 Se and 13 S atoms) randomly distributed. Computational methods are detailed in the Supplementary Note 3. In 2H-NbSe2 we observe clearly a strong magnetic moment on the Fe atom (Fig. 3a). The same behavior is observed at the Fe site in 2H-NbSe1.8S0.2 with, however, a large antiferromagnetic coupling with immediately neighboring Se atoms that become spin-polarized (Fig. 3). Importantly, this coupling breaks the sixfold in-plane symmetry of the Se lattice. The introduction of S atoms lowers the symmetry from six- to threefold. We show in Fig. 3c, d the measured tunneling conductance map G(r) = G(x, y) on a YSR state on 2H-NbSe2 (Fig. 3c) and on 2H-NbSe1.8S0.2 (Fig. 3d). We see that the LDOS at YSR impurities is modified from a sixfold star shape in 2H-NbSe2 to a predominantly threefold star shape in 2H-NbSe1.8S0.2 and ascribe this effect to the symmetry breaking in the lattice induced by the S distribution in 2H-NbSe1.8S0.2, as suggested by our calculations (Fig. 3a, b). The threefold symmetry is smeared at the scale of ξV, leading to the observed round vortex cores shown in Fig. 2a.
### Interplay between CdGM and YSR states: electron-hole axial asymmetry in vortices
We show vortices in close proximity to YSR impurities in Fig. 4a, d. When we make the difference between images taken at positive and negative bias voltages, $$\frac{\delta G({{{\bf{r}}}},V)}{{G}_{0}}=\frac{G({{{\bf{r}}}},V)-G({{{\bf{r}}}},-V)}{{G}_{0}}$$ (with G0 the averaged tunneling conductance for bias voltages above the gap), we observe that vortex cores are not axially symmetric (Fig. 4b, e). In contrast (as we show in detail in the Supplementary Fig. 3c), $$\frac{\delta G({{{\bf{r}}}},V)}{{G}_{0}}$$ is axially symmetric in absence of YSR impurities.
As stated above, we trace the broken axial symmetry to the interplay between vortex and YSR states. We have calculated the perturbation to a rotationally symmetric vortex induced by a magnetic impurity located close to the vortex core. As shown in the Supplementary Note 1, we start with a 2D superconductor described by a Bogoliubov-de Gennes Hamiltonian. We find discrete energy levels En and the shape of electron and hole wave functions $${\psi }_{n}^{+}$$, $${\psi }_{n}^{-}$$ of CdGM vortex bound states (with n the angular momentum number). Magnetic YSR impurities are characterized as usual by the exchange coupling J at the impurity sites30. This coupling leads to an effective Hamiltonian in the subspace spanned by the states ψn−1, ψn, ψn+1, with solution $${\tilde{\psi }}_{n}^{+}$$, $${\tilde{\psi }}_{n}^{-}$$. Without YSR impurities, the vortex core LDOS obtained is always axially symmetric, as found previously. There are slight, axially symmetric electron-hole variations at the Fermi wavelength λF scale, which are smeared out due to dephasing except in the quantum limit. The vortex core LDOS with YSR impurities, obtained from $${\tilde{\psi }}_{n}^{+}$$, $${\tilde{\psi }}_{n}^{-}$$, is, however, axially asymmetric. The asymmetry is due to the spatial shift in the perturbed CdGM states $${\tilde{\psi }}_{n}^{+}$$, $${\tilde{\psi }}_{n}^{-}$$ and is induced by the mixing between adjacent CdGM levels (n + 1 and n − 1). This asymmetry is roughly given by $$\frac{\delta G({{{\bf{r}}}},V)}{{G}_{0}}\propto | {\tilde{\psi }}_{n}^{+}{| }^{2}-| {\tilde{\psi }}_{n}^{-}{| }^{2}\propto \pm {J}^{2}{e}^{-4{r}_{p}/{\xi }_{V}}\cos (\theta -{\theta }_{p})$$, where θ is the polar angle with respect to the vortex center, rp and θp provide the length and the angle of the line joining the vortex center and the impurity position and the ± sign depends on whether the effective mass is negative or positive, i.e., whether the bands have a hole or an electron like character. The magnitude of the perturbation decays exponentially with the distance from the impurity to the vortex center.
As we show in Fig. 4, the observed LDOS asymmetry can be qualitatively reproduced using our theory (Fig. 4c, f). For that purpose we introduce an impurity distribution corresponding to the one in the experiments and add the contribution of each impurity to the asymmetry. Furthermore, we use an isotropic gap for 2H-NbSe1.8S0.2 and a sixfold anisotropic gap for 2H-NbSe2. Detailed parameters of the calculation are provided in the Supplementary Note 1. Here, we highlight that the exchange coupling J is negative, corresponding to the antiferromagnetic exchange found in Fig. 3a, b and that we can use the same value for all the impurities. In practice, due to the already mentioned distance dependence, the asymmetry $$\frac{\delta G({{{\bf{r}}}},V)}{{G}_{0}}$$ is however dominated by the few impurities, which are closest to the vortex core.
Let us note that vortices in 2H-NbSe2, with their characteristic strong six-fold star shape, present a rather involved shape of the asymmetry (Fig. 4f). This suggests that the spatial extension of CdGM states determines the overall shape of the asymmetry.
In all, we conclude from our combined theoretical and experimental work that YSR states produce electron-hole asymmetric vortex cores. The YSR states allow visualizing the discrete nature of CdGM levels and their electron-hole asymmetry is translated to large scales. Our theory also suggests that a superconductor with predominantly electron band character should lead to an opposite shift in the LDOS. For example, vortices have been observed in β − Bi2Pd31,32,33, which has predominantly electron character34,35. YSR states in β − Bi2Pd have been observed32 but their influence on CdGM states has not yet been addressed.
## Discussion
Bound states in vortex cores have been considered in the past mostly to address the influence of pair potential disturbances on vortex pinning36. Recent calculations find vortex core states induced by magnetic or nonmagnetic impurities, which result in small modifications of the superconducting gap parameter37. However, in this work we find that vortex positions and gap parameter do not exhibit visible changes at energies away from the gap edge in presence of magnetic impurities.
Unconventional d-wave, p-wave, f-wave or s ± superconductors often show pair breaking at atomic impurities and vortices at the same time38,39,40,41. Our results suggest that vortex bound states might be strongly influenced by pair breaking atomic size impurities. One can envisage experiments using atomic manipulation or deposition to place impurities at certain positions. The position of vortices is easily modified by changing the magnetic field. Groups of geometrically arranged YSR impurities, as those made in refs. 7,8,9 should lead to significant spatial distortions of the LDOS of vortex states and help identifying new or unconventional bound states.
## Methods
To produce the YSR states in 2H-NbSe2 and 2H-NbSe1.8S0.2 we introduce Fe impurities during sample growth (about 150 ppm), as identified after the experiment using inductively coupled plasma atomic analysis. In this diluted regime Fe impurities produce practically no changes in the residual resistivity or Tc both for 2H-NbSe1.8S0.2 or 2H-NbSe2. The amount of Fe impurities is sufficiently small as to leave the superconducting gap and vortex structure unaffected, but large enough to be easily detected in the area occupied by a single vortex. We use a scanning tunneling microscope (STM) to measure the LDOS as a function of the position at 0.8 K. Samples are cleaved at or below liquid He, to allow for a clean atomically flat surface and the tip is prepared in-situ42.
|
2023-03-22 22:47:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6864139437675476, "perplexity": 1610.035330392927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00537.warc.gz"}
|
https://undergroundmathematics.org/calculus-meets-functions/r9671/suggestion
|
Review question
# When does this function of two variables have a minimum? Add to your resource collection Remove from your resource collection Add notes to this resource View your notes for this resource
Ref: R9671
## Suggestion
The variables $x$ and $y$ are such that $x^4y=8$. A third variable $z$ is defined by $z=x+y$.
Find the values of $x$ and $y$ that give $z$ a stationary value…
This looks tricky as $z$ is given as a function of two variables. Can we use the equation connecting $x$ and $y$ to turn $z$ into a function of just one variable?
How do we then calculate stationary points for $z$?
… and show that this value of $z$ is a minimum.
What does the second derivative tell us about a stationary point?
|
2018-09-24 16:21:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49531328678131104, "perplexity": 285.0522914602312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160568.87/warc/CC-MAIN-20180924145620-20180924170020-00272.warc.gz"}
|
https://testbook.com/question-answer/in-an-otto-cycle-air-is-compressed-from-2-2-l-to--5e9dab98f60d5d2e9f7d7f55
|
# In an Otto cycle, air is compressed from 2.2 l to 0.26 l from an initial pressure of 1.2 kg/cm2. The net output/cycle is 440 kJ. What is the mean effective pressure of the cycle?
This question was previously asked in
ESE Mechanical 2016 Paper 1: Official Paper
View all UPSC IES Papers >
1. 227 kPa
2. 207 kPa
3. 192 kPa
4. 185 kPa
Option 1 : 227 kPa
Free
CT 3: Building Materials
2480
10 Questions 20 Marks 12 Mins
## Detailed Solution
Concept:
Mean Effective Pressure: It is defined as the ratio of the net work done to the displacement volume of the piston.
$${P_m} = \frac{W_{net}}{V_s}$$
Calculation:
Stroke volume = Volume before compression - Volume after compression
Vs = 2.2 - 0.26 = 1.94 l = 1.94 × 10-3 m3
Wnet = 440 kJ
Wnet = Pm × Vs
440 × 103 = Pm × 1.94 × 10-3
$${P_m} = \frac{{440 \times {{10}^3}}}{{1.94 \times {{10}^{ - 3}}}} = 227\;MPa$$
As answer is not given in options, it is safe to go with magnitude...hence we have to mark 1
|
2021-09-17 17:13:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6323290467262268, "perplexity": 4107.977707290566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00642.warc.gz"}
|
http://harrisongoldste.in/languages/2017/05/25/elm-first-impressions.html
|
I like exploring new programming languages and paradigms in my spare time. Here are some of my thoughts on Elm.
Elm is a purely functional, strongly typed language for web development. It’s a very opinionated language, with a very powerful run-time that is designed to make writing web applications easy. There are some things that I really like about Elm, and some things that I find frustrating. Your mileage may vary.
## Pros
### The Elm Architecture
All Elm applications are written with the same general design pattern. The general structure is similar to things like Redux and Flux (which is actually based on Elm):
• model: A single object, encapsulating the entire state of the application.
• update: A pure function that takes a message and a model and produces a new model.
• view: A pure function that takes a model and produces instructions on how to render the application.
This pattern is called “The Elm Architecture”, and the run-time supports it directly. Once you specify these three components, the run-time sets up a model and renders a view. Then, it listens for messages from the view, passes each one to the update function, changes the model accordingly, and re-renders only the parts of the view that changed.
I really like this approach because it manages abstraction in a really intelligent way. On one hand, I have access to (and am expected to deal with) all of the application-specific parts of my project. As a programmer, I need to specify the application state, how that state changes, and what that state “looks like”. On the other hand, machinery that is especially general (the wiring) is taken out of the programmer’s control completely. (There isn’t a lot of configuration in Elm; in general, if the run-time want’s to handle something, you’re expected to let it.)
A nice side effect of this is that Elm is actually really fast. In some sense, the Architecture encompasses all of the slowest parts of the application—this makes it free to heavily optimize those pieces.
### Static Typing
The other major benefit of Elm is that it is statically typed. This means that the compiler (and not the Chrome developer console) catches your mistakes. I could go on for a long time about the benefits of a good type system, but I’ll leave that for another blog post.
## Cons
### No Type Classes
Since Elm looks so much like Haskell, I often expect it to behave like Haskell. While it does most of the time, sometimes it falls short. One large place this happens is with type classes; since Elm does not support type classes it misses out on some of the really nice features that come along with them.
For example, rather than use do notation to deal with monads, we need to explicitly bind arguments into monadic functions (in Elm, most types define a function called andThen for this purpose). Keep in mind that this problem is related to type classes because Haskell’s do is tied to the Monad type class; anything that implements Monad supports do notation.
Things like do notation would be a nice to have, but in the end, it isn’t such a big deal. One thing that is a big deal is how Elm deals with comparisons. In Haskell, we have Ord a which allows a user to define comparisons for their own types. Elm uses something called comparable, does the same job as Ord, without being a proper type class. Basically, a function
can take any argument at all, but a function
can only take an argument that permits comparisons. Unfortunately, the only types that are comparable are Int, Float, Time, Char, and String— that’s it. There’s no way to make a user defined type comparable, since comparable is just a built-in language construct and not a formal type class. This is especially frustrating since the built in type Dict (a dictionary based on a balanced binary tree) has the following interface:
The result is that no user defined types can ever be the key of a dictionary, even if there is a perfectly reasonable way to compare them.
## Conclusion
Overall, I really like Elm. It’s been fun to work with, and it’s definitely mature enough to be usable for some projects. It has some drawbacks, and I’d hesitate to put it into production just yet, but it’s certainly heading in the right direction.
|
2018-12-17 03:23:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4309629499912262, "perplexity": 868.2892319001165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828056.99/warc/CC-MAIN-20181217020710-20181217042710-00607.warc.gz"}
|
https://shtools.oca.eu/shtools/pyplbar.html
|
Compute all the 4-pi (geodesy) normalized Legendre polynomials.
Usage
p = PlBar (lmax, z)
Returns
p : float, dimension (lmax+1)
An array of 4-pi (geodesy) normalized Legendre polynomials up to degree lmax. Degree l corresponds to array index l.
Parameters
lmax : integer
The maximum degree of the Legendre polynomials to be computed.
z : float
The argument of the Legendre polynomial.
Description
PlBar will calculate all of the 4-pi (geodesy) normalized Legendre polynomials up to degree lmax for a given argument. These are calculated using a standard three-term recursion formula. The integral of the geodesy-normalized Legendre polynomials over the interval [-1, 1] is 2.
|
2019-03-20 01:04:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791364669799805, "perplexity": 3900.172481665264}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202188.9/warc/CC-MAIN-20190320004046-20190320030046-00074.warc.gz"}
|
https://math.stackexchange.com/questions/1037985/functions-u-and-l-solution-of-a-differential-equation
|
# functions U and L solution of a differential equation
Solving this differential equation with an online calculator:
$$-(a z+b) y+(c z+d) y''+cy' = 0$$
I obtain something like:
$$y(z)=C_1 \exp\left(\frac{-\sqrt{a}z}{\sqrt{c}}\right) U(arg1,arg2,arg3)+C_2 \exp\left(\frac{-\sqrt{a}z}{\sqrt{c}}\right) L_{arg1}(arg2)$$
with arg are arguments of common functions.
I have two problems:
• I don't know the functions U and L
• if $c<0$ is it possible to find a real solution for $y(z)$ playing with the $C_1$ and $C_2$ parameters ?
• $U$ is the confluent hypergeometric function ; $L$ is the generalized Laguerre polynomial – Claude Leibovici Nov 25 '14 at 11:09
It seems that you were using wolframalpha. There are links about the functions $U$ and $L$ - for example, Confluent Hypergeometric Function of the Second Kind and Laguerre Polynomial .
In the case where $c=0$ we have to work with Airy functions, which are, essentially, the independent solutions of the the equation $y''(z)=zy$.
As for the existence of real solutions, consider the Cauchy problem for your equation with initial data $y(z_0)=y_0$. If $cz_0+d\ne 0$, then your problem is well-posed and therefore by Cauchy-Lipschitz-Lindelof theorem has local solution, which would be real. You can build a maximum real solution afterwards. I don't know if this maximum solution will be defined on an interval, half-line or the whole line.
• I already solved the case $c = 0$ with the Airy functions. The next step was indeed to work with the following conditions: -$c \ne 0$ -$y(z=0) = y_0$ -$y^{'}(z=z_0)=0$ I also have $c<0$ $a>0$ b and d are non-zero parameters. How to proceed ? The $C_1$ and $C_2$ constant may be foud analytically ? – user3473016 Nov 25 '14 at 11:20
|
2019-05-22 15:01:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8598240613937378, "perplexity": 229.6100142078648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256858.44/warc/CC-MAIN-20190522143218-20190522165218-00362.warc.gz"}
|
https://www.clinbiomech.com/article/S0268-0033(10)00228-7/fulltext
|
Research Article| Volume 26, ISSUE 1, P29-34, January 2011
Ok
# Neck motion patterns in whiplash-associated disorders: Quantifying variability and spontaneity of movement
Published:September 28, 2010
## Abstract
### Background
Whiplash-associated disorders have usually been explored by analyzing changes in the cervical motor system function by means of static variables such as the range of motion, whereas other behavioural features such as speed, variability or smoothness of movement have aroused less interest.
### Methods
Whiplash patients (n=30), control subjects (n=29) and a group of people faking the symptoms of whiplash-associated-disorders (Simulators, n=30) performed a cyclical flexion–extension movement. This movement was recorded by means of video-photogrammetry. The computed variables were: range of motion, maxima angular velocity and acceleration, and two additional variables that quantify the repeatability of a motion and its spontaneity. Two comparisons were made: Control vs. Patients and Patients vs. Simulators. At each comparison we used ANOVA to detect differences between groups and discriminant analysis to evaluate the ability of these variables to classify individuals.
### Findings
Comparison between Controls and Patients showed significant reductions in the range of motion, and both the maximum of angular velocity and acceleration in the Patients. The most efficient discriminant model only included the range of motion and maximum angular velocity. Comparison between Patients and Simulators showed a significant reduction in all measured variables in the Simulators. The best classification model was obtained with maximum angular velocity, spontaneity and repeatability of motion.
### Interpretation
Our results suggest that the pathological patterns differ from those of Controls in amplitude and speed of motion, but not in repeatability or spontaneity of movement. These variables are especially useful for detecting abnormal movement patterns.
## 1. Introduction
Whiplash-associated disorders (WAD) include a broad spectrum of illnesses related to cervical soft-tissue injury typically resulting from motor vehicle accidents. Due to the difficulties of identifying damage to bone and soft tissue causing chronic neck pain, WAD is usually described by its symptoms. The Quebec Task Force describes a wide range of associated symptoms (
• Spitzer W.
• Skovron M.
• Salmi L.
• Cassidy J.
• Duranceau J.
• Suissa S.
• et al.
Scientific monograph of the Quebec task force on whiplash-associated disorders: redefining “whiplash” and its management.
) that are the basis for defining clinical exploration procedures to evaluate the severity of WAD. The most common techniques are based on changes in the cervical motor system function. These changes include reduced neck movement, proprioception alterations and modification of motion patterns.
Some studies show the existence of a decreased range of motion in both active and passive tests (
• Feipel V.
• Rondelet B.
• LePallec J.
• DeWitte O.
• Rooze M.
The use of disharmonic motion curves in problems of the cervical spine.
,
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
). Thus, an impaired range of motion (RoM) can be useful for distinguishing between asymptomatic persons and those with persistent whiplash-associated disorders by using multivariate discriminant techniques (
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
,
• Sterling M.
• Jull G.
• Vicenzino B.
• Kenardy J.
• Darnell R.
Development of motor system dysfunction following whiplash injury.
).
Most of these studies analyze static position variables such as angular ranges of motion in different movements or variability in angular data. Nevertheless, kinematic variables associated with movement could provide more information to describe motor control disturbances. This approach has been explored by
• Feipel V.
• Rondelet B.
• LePallec J.
• DeWitte O.
• Rooze M.
The use of disharmonic motion curves in problems of the cervical spine.
, who suggested an increase in reaction time and a decrease in speed in pathological people. These results are confirmed in later studies in which the maximum speed of neck movement is an important variable for distinguishing between healthy and pathological groups (
• Öhberg F.
• Grip H.
• Wiklund U.
• Sterner Y.
• Karlsson J.
• Gerdle B.
Chronic whiplash associated disorders and neck movement measurements: an instantaneous helical axis approach.
,
• Grip H.
• Sundelin G.
• Gerdle B.
• Karlsson J.S.
Cervical helical axis characteristics and its center of rotation during active head and upper arm movements—comparisons of whiplash-associated disorders, non-specific neck pain and asymptomatic individuals.
).
Although all these objective measurements are useful for clinical applications, their reliability depends on patient cooperation in performing the tests, otherwise it becomes very difficult to determine the severity of the disorder (
• Dvir Z.
• Gal-Eshel N.
• Shamir B.
• Pevzber E.
• Peretz C.
Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.
). On the other hand, the legal and economic consequences of such decisions increase the need for well-founded criteria to evaluate sincerity in performing the tests (
• Dvir Z.
• Prushansky T.
• Peretz C.
Maximal versus feigned active cervical motion in healthy patients: the coefficient of variation as an indicator for sincerity of effort.
). Surprisingly, analysis of the effect of patient cooperation has received little attention in biomechanical literature. The identification of abnormal patterns has been associated with intra-subject variability in RoM measurements (
• Dvir Z.
• Prushansky T.
• Peretz C.
Maximal versus feigned active cervical motion in healthy patients: the coefficient of variation as an indicator for sincerity of effort.
,
• Dvir Z.
• Gal-Eshel N.
• Shamir B.
• Pevzber E.
• Peretz C.
Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.
,
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
). Another approach is that suggested by
• Feipel V.
• Rondelet B.
• LePallec J.
• DeWitte O.
• Rooze M.
The use of disharmonic motion curves in problems of the cervical spine.
who used the presence of hesitation or changes of velocity in movement performance to detect abnormal patterns of movement.
The objective of this paper is to quantify some of the features of neck motion patterns, such as variability and spontaneity of movement, in order to objectively evaluate behavioural aspects related to whiplash-associated disorders (WAD), including the possibility of a lack of cooperation by patients. We assume that the selection of a particular strategy affects the spontaneity and repeatability of the movement in cyclical motions. Therefore, these characteristics could be good indicators of behavioural aspects such as the exaggeration of symptoms. In order to confirm or reject this hypothesis we have developed an experiment that included healthy people and chronic WAD patients, as well as an additional group of people faking acute WAD symptoms. In this way we can analyze the differences between healthy and pathological patterns as well as the characteristics of anomalous motion patterns associated with non-spontaneous movements.
## 2. Methods
### 2.1 Dynamic model: harmonic oscillator
For repetitive movements such as those reproduced in our study, driven harmonic motion can be a simple and suitable reference for comparing motion patterns. From a kinematic point of view, the harmonic oscillator is described by the position variable and its derivatives. Assuming that the position variable is an angle (flexion–extension angle θ, for example), these variables can be expressed as:
$θ=Asin2πf t$
(1)
$θ˙=2πfAcos2πft$
(2)
$θ¨=−2πf2Asin2πft=−2πf2θ$
(3)
where A is the amplitude of cyclical motion and f is its frequency. $θ˙andθ¨$ are the angular velocity and acceleration respectively.
Given that the harmonic model does not require any specific control, spontaneous repetitive movements may be similar to a harmonic oscillator, whereas a deliberate controlled motion should differ from this model. Fig. 1 shows the similarities between the harmonic oscillator and the spontaneous neck flexion–extension movement of a healthy person. In Fig. 1a we have represented $θ˙vsθ$. The ideal harmonic model must fit an ellipse (see Eqs. ((1), (2))). The actual movement of the control is similar to an ellipse but there is some dispersion due to natural intra-subject variability. In Fig. 1b we have represented $θ¨vsθ$. The ideal oscillator must fit a straight line with a negative slope (see Eq. (3)); for the actual motion, this linear behaviour remains within almost all the range of movement. These characteristics will be used to describe intra-subject movement variability as well as spontaneity measured as the fit between the motion performed and the harmonic model, or harmonicity (
• Fernandez L.
• Bootsma R.J.
Effects of biomechanical and task constraints on the organization of movement in precision aiming.
).
### 2.2 Sample of study
Eighty-nine volunteers participated in the study. The size of the sample met the criteria of
• Lachenbruch P.A.
• Goldstein M.
Discriminant analysis.
for a discriminant analysis with five independent variables and two groups (a minimum of 26 subjects per group). The subjects were classified into three different groups defined by the following selection criteria:
• Control group (Controls): this group consisted of 29 volunteers meeting the following criteria: absence of whiplash-associated disorders, absence of neurological antecedents and absence of osteo-articular disease.
• Chronic whiplash group (Patients): this sample (n=30) was recruited by the medical team of the Rehabilitation Unit of the ASEPEYO Hospital (San Cugat del Vallés, Spain). The criteria for inclusion were: patients affected by WAD with altered mobility of the neck, corresponding to degrees II and III of the Quebec Task Force Scale (
• Spitzer W.
• Skovron M.
• Salmi L.
• Cassidy J.
• Duranceau J.
• Suissa S.
• et al.
Scientific monograph of the Quebec task force on whiplash-associated disorders: redefining “whiplash” and its management.
), for more than 6 months and less than 1 year.
• Recovered WAD group (Simulators): this group (n=30) included people who had recovered from a WAD and who had not presented any symptoms during the previous 2 years. They were requested to reproduce voluntarily the same pattern of movement that they had had during the period with cervical pain. It has been assumed that people with a satisfactory recovery from a WAD were more likely to feign the painful pattern well. The subjects were recruited from the IBV database.
In order to control the potential effects of age and gender, all groups were balanced by these variables (see Table 1). All the subjects signed an informed consent form for participation in the study, which was approved by the Ethics Committee of the Universidad Politécnica de Valencia.
Table 1Sample of participants in the study.
GroupAge-groupMaleFemaleTotal
Controls(20–30)4610
(31–40)549
(41–50)5510
Total151529
Patients(20–30)369
(31–40)5611
(41–50)7310
Total151530
Simulators(20–30)5611
(31–40)459
(41–50)5510
Total141630
### 2.3 Experimental setup
People sat down on an adjustable chair designed to immobilize the trunk. Trunk mobility was limited by means of a set of straps on the shoulder and around the thorax and pelvis as described in
• Baydal-Bertomeu J.
• García-Mas M.
• Poveda R.
• Belda J.
• Garrido-Jaén D.
• Vivas M.J.
• Vera P.
• López J.
Determination of simulation patterns of cervical pain from kinematical parameters of movement.
. In this way we characterized neck motion by measuring head movement. Head position and movements were recorded by means of a video-photogrammetry system (Kinescan-IBV;
• Page A.
• De Rosario H.
• Mata V.
• Hoyos J.V.
• Porcar R.
Effect of marker cluster design on the accuracy of human movement analysis using stereophotogrammetry.
) from the coordinates of a set of reflective markers located on a helmet.
At the beginning of the tests, the subjects were instructed on the kind of motion to be performed. Then they performed some non-controlled movements in order to familiarize themselves with the equipment and to practise the motion.
In order to have a reference position to measure angles, a calibration phase was performed prior to each measurement session. In this session people sat on the chair and looked at a 3×8 cm mirror placed 2.5 m in front of the chair at eye height (measured by means of a Martin anthropometer). Two additional markers were placed in the ears in order to define an anatomical medio-lateral axis. After the calibration phase, the additional markers were removed.
In the measurement phase, the subject was requested to perform repetitive flexion–extension cycles at a self-selected speed for 30 s. Measurement sessions started and finished with the subject in the reference position.
### 2.4 Data processing and statistical analysis
We computed the finite displacement from point coordinate data by using the algorithms described in
• Page A.
• De Rosario H.
• Mata V.
• Atienza C.
Experimental analysis of rigid body motion. A vector method to determine finite and infinitesimal displacements from point coordinates.
. The results were the angular displacements expressed as the attitude vector (
• Woltring H.J.
3-d attitude representation: a standardization proposal.
). The projection of the attitude vector on the medio-lateral axis provided a measurement of the flexion–extension angle.
Angular velocity and angular acceleration were estimated by numerical differentiation of the flexion–extension angle using a local smoothing technique (
• Page A.
• Candelas P.
• Belmar F.
On the use of local fitting techniques for the analysis of physical dynamic systems.
). From the smoothed angles, angular velocity and acceleration, we computed the following variables:
• Range of motion (RoM): angular excursion of the motion.
• Maximum angular velocity (MAV), measured as percentile 95 of angular velocity during the test.
• Maximum angular acceleration (MAA), measured as percentile 95 of angular acceleration during the test.
• Phase area ratio (PAR): defined by
$PAR=100×SPSM$
(5)
where SM is the area delimited by the mean cycle of the $θ˙vsθ$ diagram; SP is the area delimited by the mean cycle±1 standard deviation (Fig. 2). In the ideal case with no variability, SP is null and then PAR=0. In real movements some variability is present and then SP>0. Therefore, PAR quantifies the intra-subject variability across cycles; its meaning is similar to a coefficient of variation, but includes information on angles and speed performance.
• Harmonicity (HARM): is the absolute value of the correlation coefficient between$θ¨andθ$. Thus HARM quantifies the fit between the actual movement and the simple harmonic motion.
The statistical analysis was done using the software SPSS 16.0 (SPSS Inc., Chicago, IL). We performed a descriptive analysis of the selected variables, as well as a comparison between groups (Controls vs. Patients and Patients vs. Simulators, respectively) by means of an ANOVA. The ANOVA provides a good description of the mean differences between groups but it does not allow us to quantify the similarities or differences between each individual pattern and its group. This kind of description was done by means of a discriminant analysis in order to analyze the capability of the whole set of kinematic variables to classify individuals. Two classifications were considered: Controls vs. Patients and Patients vs. Simulators. The most significant variables in each model were selected by forward stepwise analyses. These models were compared with the simplest one obtained by using only the RoM that is the most widely used variable in the literature.
For each model analysis we calculated sensitivity and specificity as:
$Sensitivity=100×TNTN+FP$
$Specificity=100×TPTP+FN$
where TN = true negatives; FP = false positives; TP = true positives and FN = false negatives. In both models we used a leave-one-out classification method. Note that in the Controls vs. Patients classification the positive cases are the patients because the aim of the model is to identify people with WAD symptoms. In the Patients vs. Simulators classification the aim is to identify non-spontaneous patterns, therefore the cases here are simulators.
## 3. Results
Fig. 3 depicts a comparison of the diagrams $θ˙vsθ$ and $θ¨vsθ$ corresponding to a typical control subject, a patient and a simulator. These diagrams show the main features of each pattern of motion that are summarized in Table 2.
Table 2Descriptive analysis of the variables in the study. The listed p-values correspond to two separate comparisons by means of ANOVA: Controls vs. Patients and Patients vs. Simulators, respectively.
VariableControls mean(SD)P-Value Controls vs. PatientsPatients mean(SD)P-Value Patients vs. SimulatorsSimulators mean(SD)
RoM (°)119 (17)<0.00190 (22)<0.00155 (24)
MAV (°/s)149 (50)<0.00171 (22)<0.00129 (16)
MAA (°/s2)410 (200)<0.001168 (93)<0.00159 (36)
PAR (%)8.5 (2.6)0.7649.3 (2.5)<0.00117.0 (5.8)
HARM0.79 (0.09)0.9780.78 (0.1)<0.0010.54 (0.14)
Significant differences have been found between Patients and Simulators for all variables. RoM, MAV and MAA also show significant differences between Controls and Patients.
The mean of the range of motion (RoM) was significantly higher in Controls than in Patients and even more than in Simulators. Moreover, within-group variability was also different for each group, being highest in Simulators and lowest in Controls. Variable MAV showed a similar trend. The Controls presented MAV values which were significantly higher than those of Patients, and this latter was also higher than those of Simulators. Differences in the mean values of MAA were also evident among the three groups analyzed. Regarding the variable PAR, Controls and Patients presented very similar values whereas the Simulators mean was significantly higher. Finally, the variable HARM presented similar high values in Controls and Patients (0.79 and 0.78 respectively), but the values for Simulators were significantly lower (0.54).
Finally, Table 3, Table 4 show the results of the two sets of discriminant analyses. With regard to the classification between Controls and Patients (Table 3), the simplest model with only the RoM provided a modest classification with a good sensitivity of 86%, but with a specificity of only 70%. The best model included only two variables: RoM and MAV. In this model, specificity in classifying individuals increased from 70% to 93%, whereas sensitivity decreased slightly to 83%. Controls presented larger and faster movements than Patients (positive values of RoM and MAV coefficients in the discriminant functions), the MAV variable having more influence on the classification than the RoM (standardized coefficients were 0.72 and 0.52, respectively). Despite the significant differences of MAA between Controls and Patients, the MAA variable was not included in the model.
Table 3Results of discriminant analysis for classifying Controls vs. Patients. The first model included only the RoM as independent variable. The second one is the best model obtained by means of a stepwise procedure. We included the standardized coefficients of the discriminant function in order to describe the relative contribution of each independent variable to the discriminant function. The last row shows the classification equation obtained from the Fisher discriminant functions (for equal probability to belonging to each group, P=0.5).
Variables in the modelStandardized discriminant function coefficientsCanonical correlationSpecificity (%)Sensitivity (%)
RoM1.000.618670
RoM0.520.738393
MAV0.72
Classification equation
0.55 RoM+0.035 MAV<9.6Prob(Patient)>0.5
Table 4Results of discriminant analysis for classifying Patients vs. Simulators individuals. The first model included only the RoM as independent variable. The second one is the best model obtained by means of a stepwise procedure. We included the standardized coefficients of the discriminant function in order to describe the relative contribution of each independent variable to the discriminant function. The last row shows the classification equation obtained from the Fisher discriminant functions (for equal probability to belonging to each group, P=0.5).
Variables in the modelStandardized discriminant function coefficientsCanonical correlationSpecificity (%)Sensitivity (%)
RoM1.000.577380
MAV0.470.829787
PAR0.43
HARM0.46
Classification equation
0.67 MAV28 PAR+10.5 HARM<4.0Prob(Simulator)>0.5
Regarding the classification between Patients vs. Simulators, the results were quite different (Table 4). The first classification with only the variable RoM presented a modest classification with a specificity of 73% and a sensitivity of 80%. The best model included the variables MAV, HARM and PAR. This model increased sensitivity to 87% and specificity up to 97%. All three variables made similar contributions to the discriminant function. Patients were distinguished from Simulators by their higher speed of motion and harmonicity (positive coefficients in the standardized discriminant function) and their lower variability when repeating cycles of the movement (negative coefficient of PAR).
In both cases we obtained a classification equation from the Fisher discriminant functions (
• MacLachlan G.J.
Discriminant Analysis and Statistical Pattern Recognition.
). The differences between coefficients of standard discriminant functions and the classification coefficients are due to a change in the measurement scale (standardized and raw values, respectively).
## 4. Discussion
The aim of this paper was to quantify some features of neck motion patterns in order to objectively assess functional alterations associated to WAD, and evaluate behavioural aspects related to atypical motion performance. For this reason our study included three groups: Controls, Patients and another group of people who had recovered from a previous WAD with no current symptoms (Simulators).
The selection of a sample of appropriate “Simulators” is a critical question in the studies aimed to identify feigned or non-cooperative behaviour. In this study we have tried to reproduce this hypothetical situation by means of a sample of subjects who know the symptoms of WAD and who were requested to voluntarily reproduce the behaviour associated with pain. This strategy is similar to that used in previous papers in which patients or even healthy people are requested to exaggerate their symptoms or to feign the effect of an imagined pain, respectively (
• Dvir Z.
• Prushansky T.
• Peretz C.
Maximal versus feigned active cervical motion in healthy patients: the coefficient of variation as an indicator for sincerity of effort.
,
• Dvir Z.
• Gal-Eshel N.
• Shamir B.
• Pevzber E.
• Peretz C.
Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.
,
• Dvir Z.
• Penso-Zabludowski E.
The effects of protocol and test situation on maximal vs. submaximal cervical motion: medicolegal implications.
,
• Sartori G.
• Forti S.
• Birbaumer N.
• Flor H.
A brief and unobtrusive instrument to detect simulation and exaggeration in patients with whiplash syndrome.
,
• Endo K.
• Suzuki H.
• Yamamoto K.
Consciously postural sway and cervical vertigo after whiplash injury.
).
The motion analyzed was a cyclical flexion–extension movement recorded by means of video-photogrammetry. However, the data analysis does not depend on this specific measurement technique and this study could be reproduced using any other instrument able to provide continuous measurement of the neck flexion–extension angles, such as electrogoniometers, electromagnetic or ultrasonic motion tracking systems.
We selected a continuous cyclical motion in order to analyze the dynamics of the movement i.e. the relationships between the angle variable and its derivatives. This strategy is common in motor coordination studies (
• Stergiou N.
Innovative Analyses of Human Movement: Analytical Tools for Human Movement Research.
), but not in the studies published on WAD which analyzed repetitions of single executions of neck motions (
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
,
• Sterling M.
• Jull G.
• Vicenzino B.
• Kenardy J.
• Darnell R.
Development of motor system dysfunction following whiplash injury.
,
• Öhberg F.
• Grip H.
• Wiklund U.
• Sterner Y.
• Karlsson J.
• Gerdle B.
Chronic whiplash associated disorders and neck movement measurements: an instantaneous helical axis approach.
and
• Grip H.
• Sundelin G.
• Gerdle B.
• Karlsson J.S.
Variations in the axis of motion during head repositioning. A1with whiplash-associated disorders or non-specific neck pain and healthy controls.
, to mention some examples). The use of relationships between angular displacement and velocity provides a simple way to quantify the variability of movement in a kinematic sense i.e. including the variability associated to position and speed. Moreover, the correlation between angle and angular acceleration provides a measure of the spontaneity of movement.
Controls and Patients differ by a clear reduction of the average RoM (from 119° to 90° respectively), the MAV (from 149°/s to 71°/s) and the MAA (from 410°/s2 to 168°/s2), but no significant differences have been found in PAR (8.5% vs. 9.3%) or in HARM (0.79 vs. 0.78). These results suggest that in cyclical movements WAD alterations affect mobility in the range of motion and speed but do not change the movement strategy substantially, as measured by PAR and HARM.
The decrease in the RoM of WAD patients has been reported in several previous studies (
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
,
• Sterling M.
• Jull G.
• Vicenzino B.
• Kenardy J.
• Darnell R.
Development of motor system dysfunction following whiplash injury.
,
• Öhberg F.
• Grip H.
• Wiklund U.
• Sterner Y.
• Karlsson J.
• Gerdle B.
Chronic whiplash associated disorders and neck movement measurements: an instantaneous helical axis approach.
,
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
,
• Grip H.
• Sundelin G.
• Gerdle B.
• Karlsson J.S.
Variations in the axis of motion during head repositioning. A1with whiplash-associated disorders or non-specific neck pain and healthy controls.
), although we have found higher values of the RoM in the Patients than those measured in previous studies. This difference could be due to the type of motion analyzed, a continuous and cyclical movement, which can induce larger amplitudes of motion than single trials of movements reported by other authors.
There are fewer studies analyzing the role of speed.
• Öhberg F.
• Grip H.
• Wiklund U.
• Sterner Y.
• Karlsson J.
• Gerdle B.
Chronic whiplash associated disorders and neck movement measurements: an instantaneous helical axis approach.
identified velocity as the most discriminant variable between controls and WAD patients.
• Grip H.
• Sundelin G.
• Gerdle B.
• Karlsson J.S.
Cervical helical axis characteristics and its center of rotation during active head and upper arm movements—comparisons of whiplash-associated disorders, non-specific neck pain and asymptomatic individuals.
analyzed the mean velocities and found significant differences between Controls and Patients. On the other hand,
• Sjölander P.
• Michaelson P.
• Jaric S.
• Djupsjöbacka M.
Sensorimotor disturbances in chronic neck pain—range, of motion peak velocity, smoothness of movement, and repositioning acuity.
found small non-significant differences, probably due to the reduced size of the sample analyzed. Our results agree with Öhberg's paper, although we have found smaller MAV values. These differences are probably due to the way in which the movement was performed: in the Öhberg study the subjects were asked to perform the movement as quickly as possible, while in our experiment each subject chose his or her preferred speed.
No studies have been found analyzing the acceleration of movement. Our results show a significant reduction in the acceleration of Patients vs. Controls. This reduction is consistent with a harmonic motion, in which slower movements with lower amplitude involve a reduction in acceleration (see Eq. (3)). Therefore, the information provided in MAA is redundant when RoM and MAV are taken into account and consequently MAA does not appear in the classification models. The interest in acceleration appears in the HARM variable, as a way of quantifying spontaneity of movement.
RoM variability has been studied in previous papers.
• Sjölander P.
• Michaelson P.
• Jaric S.
• Djupsjöbacka M.
Sensorimotor disturbances in chronic neck pain—range, of motion peak velocity, smoothness of movement, and repositioning acuity.
studied neck rotation and found a small but significant increase in the RoM variation coefficient in Patients.
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
defined a variation coefficient averaged across some movements and found a significant increase in the coefficient in Patients. In our study we did not find any significant increase of variability in Patients. These differences among results can be explained by the different methods for the measurement of variability. In our study, variability has been measured from the $θ˙θ$ diagram in some cycles of a continuous movement; the results suggest that this strategy produces more repeatable movements than simple repetitions of discontinuous movements.
• Feipel V.
• Rondelet B.
• LePallec J.
• DeWitte O.
• Rooze M.
The use of disharmonic motion curves in problems of the cervical spine.
found differences in movement spontaneity between Patients and Controls. For the analysis of spontaneity Feipel used a harmonic index obtained by a polynomial fitting.
• Sjölander P.
• Michaelson P.
• Jaric S.
• Djupsjöbacka M.
Sensorimotor disturbances in chronic neck pain—range, of motion peak velocity, smoothness of movement, and repositioning acuity.
used an index based on the jerk for the analysis of neck rotation. However, estimating the jerk from position variables (such as angles) requires the evaluation of the third derivative, and is subsequently very dependent on noise and the smoothing technique applied (
• Ramsay J.O.
• Silverman B.W.
Functional Data Analysis.
). This could be the reason why the Sjölander's results are not very conclusive. In our approach a simpler coefficient has been used quantifying the similarity between the movement and a harmonic oscillator. According to this coefficient, Controls and Patients show very similar behaviour in relation to movement harmonicity (HARM). However, there are wide differences between Patients and Simulators.
Few papers have been found analyzing the motion patterns of simulators.
• Dvir Z.
• Prushansky T.
• Peretz C.
Maximal versus feigned active cervical motion in healthy patients: the coefficient of variation as an indicator for sincerity of effort.
used the coefficient of variation in differentiating maximal from submaximal (feigned) cervical motion in healthy patients.
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
used variability to identify abnormal pathological motion patterns. According to our results, with respect to Patients, Simulators show a clear reduction in RoM, MVA and MAA, a clear increase in movement variability and a reduction in harmonicity. The reduction in RoM, MVA and MAA could be similar to a severely-injured patient. However, the increase in variability is much higher and the loss of harmonicity does not occur in all patients.
Most of the above-mentioned results are based on a comparison between groups by means of average values. This approach is useful for defining mean patterns of motion; however, its clinical usefulness is limited because it is unable to classify individual patterns of movement or to detect abnormal behaviour. Some classification models have been used for these purposes.
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
used a discriminant analysis model based on RoM variables for classifying healthy and WAD individuals.
• Dvir Z.
• Gal-Eshel N.
• Shamir B.
• Pevzber E.
• Peretz C.
Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.
used a logistic model to distinguish maximal from submaximal efforts in patient performances.
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
proposed a logistic regression model based on a combined RoM and a mean coefficient of variation in order to classify WAD patients and controls; from this analysis they proposed a criterion to detect atypical WAD patterns based on specific cutoff values. Our discriminant classification models included both static variables (RoM) as well other kinematic variables in order to provide a description of neck movement patterns.
The model that best discriminates between Controls vs. Patients uses RoM and MAV, providing a sensitivity of 93% and a specificity of 83%. These results are similar to those obtained by
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
(95% sensitivity and 86% specificity), although there are some methodological differences in the model as well as in the classification process. Dall'Alba used a classification model with 20 variables, while in our study only two kinematic variables are used (RoM+MAV). The use of a higher number of variables in the classification could affect the reliability of the results. In addition, our study uses a cross-validated process to improve robustness.
With regard to the classification between Patients and Simulators, the model with MAV, PAR and HARM variables presents a specificity of 97% and a sensitivity of 87%. When Simulators try to feign a pathological pattern of movement they tend to exaggerate the loss of mobility and the reduction of angular velocity excessively. In addition, there is a significant increase in variability and a loss of harmonicity which is much higher than that found in Patients. These results suggest the possibility of objectively identifying non-spontaneous patterns of movement.
Classification models have previously been used for providing quantitative criteria or cutoff values to identify abnormal behaviour (
• Dvir Z.
• Gal-Eshel N.
• Shamir B.
• Pevzber E.
• Peretz C.
Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.
,
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
). The discrimination model used in this paper leads to the classification equations shown in Table 3, Table 4. In spite of their potential interest these equations must be used with caution, because they are based on simple biomechanical tests. There is evidence of the role of psychological factors in chronic pain (
• Linton S.J.
A review of psychological risk factors in back and neck pain.
). These factors, as well as others related to functional scores and pain perception, should be considered in order to develop more comprehensive models able to provide a valid basis for clinical decisions.
## 5. Conclusions
Continuous cyclical movement trials provide relevant information on alteration in neck mobility and movement strategies associated with WAD. Mobility has been characterized by the angular position (RoM) and its derivatives (MAV and MAA). Furthermore, movement strategy has been characterized through intra-subject variability (PAR) and harmonicity (HARM). With these two sets of variables it is possible to characterize pathological patterns (reduction of mobility in Patients vs. Controls), but it is also possible to find differences between pathological patterns and the patterns of healthy subjects faking pathological symptoms. This possibility could be useful in developing clinical applications where the reliability of biomechanical tests requires patient cooperation.
## Acknowledgements
This research has been partially supported by the Spanish Government grant DPI2006-14722-C02-01, cofinanced by EU FEDER funds. We would like to thank Hospital Mutua Asepeyo of Sant Cugat del Vallés (Spain) for their collaboration.
## References
• Baydal-Bertomeu J.
• García-Mas M.
• Poveda R.
• Belda J.
• Garrido-Jaén D.
• Vivas M.J.
• Vera P.
• López J.
Determination of simulation patterns of cervical pain from kinematical parameters of movement.
in: Eizmendi G. Azkoitia J. Craddock G. Challenges for Assistive Technology, AAATE 07. IOS Press, Amsterdam2007: 429-433
• Dall'Alba P.T.
• Sterling M.M.
• Treleaven J.M.
• Edwards S.L.
• Jull G.A.
Cervical range of motion discriminates between asymtomatic persons ant those with whiplash.
Spine. 2001; 26: 2090-2094
• Dvir Z.
• Penso-Zabludowski E.
The effects of protocol and test situation on maximal vs. submaximal cervical motion: medicolegal implications.
Int. J. Leg. Med. 2003; 117: 350-355
• Dvir Z.
• Prushansky T.
• Peretz C.
Maximal versus feigned active cervical motion in healthy patients: the coefficient of variation as an indicator for sincerity of effort.
Spine. 2001; 26: 1680-1688
• Dvir Z.
• Gal-Eshel N.
• Shamir B.
• Pevzber E.
• Peretz C.
Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.
Pain Res. Manage. 2004; 9: 131-136
• Endo K.
• Suzuki H.
• Yamamoto K.
Consciously postural sway and cervical vertigo after whiplash injury.
Spine. 2008; 33: E539-E542
• Feipel V.
• Rondelet B.
• LePallec J.
• DeWitte O.
• Rooze M.
The use of disharmonic motion curves in problems of the cervical spine.
Int. Orthop. 1999; 23: 205-209
• Fernandez L.
• Bootsma R.J.
Effects of biomechanical and task constraints on the organization of movement in precision aiming.
Exp. Brain Res. 2004; 159: 458-466
• Grip H.
• Sundelin G.
• Gerdle B.
• Karlsson J.S.
Variations in the axis of motion during head repositioning. A1with whiplash-associated disorders or non-specific neck pain and healthy controls.
Clin. Biomech. 2007; 22: 865-873
• Grip H.
• Sundelin G.
• Gerdle B.
• Karlsson J.S.
Cervical helical axis characteristics and its center of rotation during active head and upper arm movements—comparisons of whiplash-associated disorders, non-specific neck pain and asymptomatic individuals.
J. Biomech. 2008; 41: 2799-2805
• Lachenbruch P.A.
• Goldstein M.
Discriminant analysis.
Biometrics. 1979; 35: 69-85
• Linton S.J.
A review of psychological risk factors in back and neck pain.
Spine. 2000; 25: 1148-1156
• MacLachlan G.J.
Discriminant Analysis and Statistical Pattern Recognition.
John Wiley and Sons, New York1992
• Öhberg F.
• Grip H.
• Wiklund U.
• Sterner Y.
• Karlsson J.
• Gerdle B.
Chronic whiplash associated disorders and neck movement measurements: an instantaneous helical axis approach.
IEEE Trans. Inf. Technol. Biomed. 2003; 7: 274-282
• Page A.
• De Rosario H.
• Mata V.
• Hoyos J.V.
• Porcar R.
Effect of marker cluster design on the accuracy of human movement analysis using stereophotogrammetry.
Med. Biol. Eng. Comput. 2006; 44: 1113-1119
• Page A.
• Candelas P.
• Belmar F.
On the use of local fitting techniques for the analysis of physical dynamic systems.
Eur. J. Phys. 2006; 27: 273-279
• Page A.
• De Rosario H.
• Mata V.
• Atienza C.
Experimental analysis of rigid body motion. A vector method to determine finite and infinitesimal displacements from point coordinates.
J. Mech. Des. 2009; 131: 031005.1-301005.8
• Prushansky T.
• Pevzner E.
• Gordon C.
• Dvir Z.
Performance of cervical motion in chronic whiplash patients and healthy subjects: the case of atypical patients.
Spine. 2006; 31: 37-43
• Ramsay J.O.
• Silverman B.W.
Functional Data Analysis.
second ed. Springer, New York2005
• Sartori G.
• Forti S.
• Birbaumer N.
• Flor H.
A brief and unobtrusive instrument to detect simulation and exaggeration in patients with whiplash syndrome.
Neurosci. Lett. 2003; 342: 53-56
• Sjölander P.
• Michaelson P.
• Jaric S.
• Djupsjöbacka M.
Sensorimotor disturbances in chronic neck pain—range, of motion peak velocity, smoothness of movement, and repositioning acuity.
Man. Ther. 2008; 13: 122-131
• Spitzer W.
• Skovron M.
• Salmi L.
• Cassidy J.
• Duranceau J.
• Suissa S.
• et al.
Scientific monograph of the Quebec task force on whiplash-associated disorders: redefining “whiplash” and its management.
Spine. 1995; 20: 1-73
• Stergiou N.
Innovative Analyses of Human Movement: Analytical Tools for Human Movement Research.
Human Kinetics, Champaign2004
• Sterling M.
• Jull G.
• Vicenzino B.
• Kenardy J.
• Darnell R.
Development of motor system dysfunction following whiplash injury.
Pain. 2003; 103: 65-73
• Woltring H.J.
3-d attitude representation: a standardization proposal.
J. Biomech. 1994; 27: 1399-1414
|
2023-02-08 19:16:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2841169536113739, "perplexity": 7977.129873385114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00182.warc.gz"}
|
http://www.gafferhq.org/documentation/0.61.13.0/Reference/NodeReference/GafferDispatch/Wedge.html
|
# Wedge¶
Causes upstream nodes to be dispatched multiple times in a range of Contexts, each time with a different value for a specified variable. This variable should be referenced in upstream expressions to apply variation to the tasks being performed. For instance, it could be used to drive a shader parameter to perform a series of “wedges” to demonstrate the results of a range of possible parameter values.
## user¶
Container for user-defined plugs. Nodes should never make their own plugs here, so users are free to do as they wish.
Input connections to upstream nodes which must be executed before this node.
Input connections to nodes which must be executed after this node, but which don’t need to be executed before downstream nodes.
Output connections to downstream nodes which must not be executed until after this node.
## dispatcher¶
Container for custom plugs which dispatchers use to control their behaviour.
## dispatcher.batchSize¶
Maximum number of frames to batch together when dispatching tasks. If the node requires sequence execution batchSize will be ignored.
## dispatcher.immediate¶
Causes this node to be executed immediately upon dispatch, rather than have its execution be scheduled normally by the dispatcher. For instance, when using the LocalDispatcher, the node will be executed immediately in the dispatching process and not in a background process as usual.
When a node is made immediate, all upstream nodes are automatically considered to be immediate too, regardless of their settings.
## variable¶
The name of the Context Variable defined by the wedge. This should be used in upstream expressions to apply the wedged value to specific nodes.
## indexVariable¶
The name of an index Context Variable defined by the wedge. This is assigned values starting at 0 and incrementing for each new value - for instance a wedged float range might assign variable values of 0.25, 0,5, 0.75 or 0.1, 0,2, 0.3 but the corresponding index variable would take on values of 0, 1, 2 in both cases.
The index variable is particularly useful for generating unique filenames when using a float range to perform wedged renders.
## mode¶
The method used to define the range of values used by the wedge. It is possible to define numeric or color ranges, and also to specify explicit lists of numbers or strings.
## floatMin¶
The smallest value of the wedge range when the mode is set to “Float Range”. Has no effect in other modes.
## floatMax¶
The largest allowable value of the wedge range when the mode is set to “Float Range”. Has no effect in other modes.
## floatSteps¶
The number of steps in the value range defined when in “Float Range” mode. The steps are distributed evenly between the min and max values. Has no effect in other modes.
## intMin¶
The smallest value of the wedge range when the mode is set to “Int Range”. Has no effect in other modes.
## intMax¶
The largest allowable value of the wedge range when the mode is set to “Int Range”. Has no effect in other modes.
## intStep¶
The step between successive values when the mode is set to “Int Range”. Values are generated by adding this step to the minimum value until the maximum value is exceeded. Note that if (max - min) is not exactly divisible by the step then the maximum value may not be used at all. Has no effect in other modes.
## ramp¶
The range of colours used when the mode is set to “Colour Range”. Has no effect in other modes.
## colorSteps¶
The number of steps in the wedge range defined when in “Colour Range” mode. The steps are distributed evenly from the start to the end of the ramp. Has no effect in other modes.
## floats¶
The list of values used when in “Float List” mode. Has no effect in other modes.
## ints¶
The list of values used when in “Int List” mode. Has no effect in other modes.
## strings¶
The list of values used when in “String List” mode. Has no effect in other modes.
|
2022-06-25 08:18:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3459175229072571, "perplexity": 1489.575781171492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034877.9/warc/CC-MAIN-20220625065404-20220625095404-00317.warc.gz"}
|
https://astarmathsandphysics.com/university-maths-notes/matrices-and-linear-algebra/4641-degeneracy-in-transportation-problems.html
|
## Degeneracy in Transportation Problems
Degeneracy in Transportation problems arise when there are to many routes that are not used. If there are
$F$
factories and
$W$
warehouses, and the number of used routes from factories to warehouses in a solution is less than
$F+W-1$
, then this solution is degenerate.
We resolve the degeneracy by allocating a small shipment
$x$
to an unused route. We calculate the change in cost, and if it is negative, We maximise
$x$
such that no entries are negative and start again.
Example: A transportation problem has cost structure and trial solution below.
Key cost/units
Source $F_1$ $F_2$ $F_3$ Demand $W_2$ 0.90/0 1.00/5 1.00/0 5 Destination $W_2$ 1.00/20 1.40/0 0.80/0 20 $W_3$ 1.30/0 1.00/10 0.80/10 20 20 15 10 45
Let
$x$
be transported using previously unused route
$F_1W_1$
. So that demand and supply constraints are satisfied, and the table has no negative for the quantity transported along each route, we MUST have the table below.
Source $F_1$ $F_2$ $F_3$ Demand $W_2$ 0.90/x 1.00/5-x 1.00/0 5 Destination $W_2$ 1.00/20-x 1.40/x 0.80/0 20 $W_3$ 1.30/0 1.00/10 0.80/10 20 20 15 10 45
The change in cost is
$0.90x-1.00x+1.00x-1.40x=0.30x$
. This is an increase in cost. Evaluating the other unused routes, looking for a decrease in costs results in the final solution below.
Source $F_1$ $F_2$ $F_3$ Demand $W_2$ 0.90/0 1.00/0-x 1.00/0 5 Destination $W_2$ 1.00/20-x 1.40/09 0.80/0 20 $W_3$ 1.30/0 1.00/10 0.80/10 20 20 15 10 45
IN fact, repeating this for every unused route produces an increase in cost each time, so the trial solution is optimal. In general several iterations are required, and the final solution may not be unique.
|
2018-05-23 18:34:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5411272048950195, "perplexity": 2513.2975117464566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865702.43/warc/CC-MAIN-20180523180641-20180523200641-00503.warc.gz"}
|
http://docs.itascacg.com/3dec700/3dec/block/doc/manual/block_manual/block_fish/block.gridpoint/fish_block.gridpoint.force.app.html
|
# block.gp.force.app
Syntax
v/f = block.gp.force.app(bgpp<,i>)
block.gp.force.app(bgpp<,i>) = v/f
Get/set the applied force at the gridpoint
Returns: v or f - applied force vector or component v or f - applied force vector or component bgpp - block gridpoint pointer i - optional index of component
## Component Access
f = block.gp.force.app.x(bdpp)
block.gp.force.app.x(bdpp) = f
Get/set the $$x$$-component of the applied force vector.
Returns: f - $$x$$-component of the applied force vector f - $$x$$-component of the applied force vector bggp - gridpoint pointer
f = block.gp.force.app.y(bdpp)
block.gp.force.app.y(bdpp) = f
Get/set the $$y$$-component of the applied force vector.
Returns: f - $$y$$-component of the applied force vector f - $$y$$-component of the applied force vector bggp - gridpoint pointer
f = block.gp.force.app.z(bdpp)
block.gp.force.app.z(bdpp) = f
Get/set the $$z$$-component of the applied force vector.
Returns: f - $$z$$-component of the applied force vector f - $$z$$-component of the applied force vector bggp - gridpoint pointer
|
2021-01-18 10:53:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26735758781433105, "perplexity": 3882.667292193201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514495.52/warc/CC-MAIN-20210118092350-20210118122350-00577.warc.gz"}
|
http://www.ijpe-online.com/EN/10.23940/ijpe.20.02.p10.265283
|
Int J Performability Eng ›› 2020, Vol. 16 ›› Issue (2): 265-283.
• Orginal Article •
### Repeatedly Coding Inter-Packet Delay for Tracking Down Network Attacks
Lian Yua*(), Lei Zhanga, Cong Tana, Bei Zhaob, Chen Zhangb, and Lijun Liub
1. a School of Software and Microelectronics, Peking University, Beijing, 102600, China
b Design Institute, China Mobile Group, Beijing, 100080, China
• Submitted on ; Revised on ; Accepted on
• Contact: Lian Yu E-mail:lianyu@ss.pku.edu.cn
• Supported by:
This work is supported by the Ministry of Education-China Mobile (No. MCM20170406) and the National Natural Science Foundation of China (No. 61872011). The authors would also like to thank the anonymous reviewers for their invaluable comments.
Abstract:
Attacks against Internet service provider (ISP) networks will inevitably lead to huge social and economic losses. As an active traffic analysis method, network flow watermarking can effectively track attackers with high accuracy and a low false rate. Among them, inter-packet delay (IPD) embeds and extracts watermarks relatively easily and effectively, and it has attracted much attention. However, the performance of IPD is badly affected when networks have perturbations with high packet loss rate or packet splitting. This paper provides an approach to improve the robustness of IPD by repeatedly coding the inter-packet delay (RCIPD), which can smoothly handle situations with packet splitting and merging. This paper proposes applying the Viterbi algorithm to obtain the convolutional code of a watermark such that the impact of network perturbation on the watermark can be worked off; applying the harmony schema, which controls the rhythm and embeds RCIPD bits into network flow, to improve the invisibility of watermarking; and applying K-means to identify dynamically bits of the watermark that may change the intervals due to the latency of networks. A cyclic-similarity algorithm (CSA) is designed to separate the repeated coding and eventually obtain the watermark. Experiments are carried out to compare RCIPD with other three schemas. The results show that the proposed approach is more robust, especially in the case of packet splitting.
|
2023-03-30 18:31:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17811980843544006, "perplexity": 2756.2668689739367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00017.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcds.2003.9.1571
|
Article Contents
Article Contents
# Modified wave operators for the coupled wave-Schrödinger equations in three space dimensions
• We study the scattering theory for the coupled Wave-Schrödinger equation with the Yukawa type interaction, which is certain quadratic interaction, in three space dimensions. This equation belongs to the borderline between the short range case and the long range one. We construct modified wave operators for that equation for small scattered states with no restriction on the support of the Fourier transform of them.
Mathematics Subject Classification: 35B40, 35P25, 35Q40.
Citation:
|
2023-01-31 10:22:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.37754693627357483, "perplexity": 795.501912648836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00783.warc.gz"}
|
http://www.givewell.org/aggregator/sources/7
|
Deworm the World | GiveWell
# Deworm the World
Exploring how to get real change for your dollar.
Updated: 15 min 57 sec ago
### Why we’re allocating discretionary funds to the Deworm the World Initiative
Wed, 08/30/2017 - 10:13
Many donors who give through GiveWell’s website choose to donate to support “Grants to recommended charities at GiveWell’s discretion,” rather than selecting a specific recommended charity or charities as the target of their gift.
We periodically grant these “discretionary funds” to what we see as the highest-value funding opportunities among our top charities. We last granted discretionary funds in April; then, we granted $4.4 million to the Against Malaria Foundation and$0.5 million to the Deworm the World Initiative.
Since we last allocated funds, we received an additional $1.25 million in discretionary funds that we recently granted out. We also hold roughly$1 million in discretionary funds that we plan to grant out in the next month or two. We plan to grant all of this funding to the Deworm the World Initiative.
Recommendation for donors
We continue to recommend that donors give 100% of their donation to the Against Malaria Foundation (AMF). In other words, although we’re choosing to grant discretionary funds to Deworm the World, we don’t believe that donors who rely on our recommendations should adjust their giving at this time. We explain the rationale for this below.
Summary
This post will discuss:
Deworm the World’s funding gap
Deworm the World recently told us that they have a pressing funding need. We model Deworm the World as the most cost-effective of our top charities (~10x as cost-effective as cash transfers). Last November, we recommended what we estimated was enough funding to make it 80 percent likely that Deworm the World would not be constrained by funding in the next year. (Due to donors following GiveWell’s recommendations, this recommendation generally tracks with what charities receive as a result.) The existence of a funding gap today thus surprised us.
We understand Deworm the World’s funding gap is driven by two primary factors. First, our estimate of Deworm the World’s room for more funding made assumptions about how much funding it would receive from other funders, and so far this year revenue has been lower than projections.
Second, global costs (salaries, office space, travel, etc. for staff based outside of countries where programs operate) were erroneously left out of our cost-effectiveness analysis for Deworm the World last year. Deworm the World did not include these costs in its list of ways it might spend money and we did not recognize that they were not built into the budget.
The existence of this gap, and some time-sensitive considerations discussed below, led us to decide to recommend the current discretionary funds go to Deworm the World. We also considered granting them to AMF, which is our current top recommendation for donors; we believe AMF has significant room for more funding. However, we decided that Deworm the World’s funding need was more pressing. We don’t expect this decision to change the total amount that AMF and Deworm the World each receive as a result of GiveWell’s recommendation in 2017, so the cost of not choosing AMF today is delaying when AMF receives this funding.
Benefits of granting discretionary funds to Deworm the World now
Deworm the World is currently in discussions with the government of Kenya to sign a memorandum of understanding (MOU) to continue its deworming program in Kenya for the next five years. Deworm the World had enough funding committed to the program to fund two years and believes that if it had a larger funding commitment, the government would see the MOU as a higher priority and might speed up the process to finalize the MOU. When we spoke with Deworm the World about additional funding in June and July, of particular concern were the upcoming elections in Kenya on August 8. Because elections tend to take government officials away from other work, Deworm the World feared that delaying the MOU process could delay its deworming work and, in a worst-case scenario, cause the 2017-2018 round of deworming in Kenya to be missed.
Though we understand from Deworm the World that the MOU was not signed prior to the elections, our expectation is that receiving this funding several months earlier than it otherwise would have will decrease the likelihood of Kenya missing a round of deworming.
In addition, Deworm the World may have opportunities this year to fund work in Pakistan and additional work in India (it currently works in only a portion of the states in India). Receiving additional funding now may allow Deworm the World to accelerate its work in those locations.
We have not completed a full room for more funding analysis of Deworm the World since last October and are unsure how much additional room for more funding Deworm the World has.
Risks of granting discretionary funds to Deworm the World now
We believe that granting GiveWell’s discretionary funds to Deworm the World now has the following risks:
• We may later come to believe that Deworm the World needed less than $2.25 million this year. We have not seen complete financial information from Deworm the World and we are unsure how much additional funding would be needed to bring them to the level of being 80 percent likely that they won’t be bottlenecked by funding (we generally target our most cost-effective top charities receiving this level of funding). Discretionary funds allocated now will reduce our estimation of Deworm the World’s 2017 end-of-year room for more funding, which we incorporate into our annual year-end recommendations. It’s possible, although we think highly unlikely, that once we see Deworm the World’s full financial information (which we expect to before our end-of-year recommendation decisions), we will conclude that the amount we want to recommend to them this year is less than$2.25 million.
We don’t think this is likely; our best guess is that granting funds to Deworm the World today will accelerate when they receive funds from GiveWell but not change the overall amount of funding they receive in 2017 due to our recommendation.
It’s also possible that receiving GiveWell discretionary funds now will, for example, allow Deworm the World to lay more groundwork in Pakistan this year and create new opportunities to fund deworming in Pakistan next year, thus increasing our estimate of Deworm the World’s room-for-more funding at the end of the year.
• If we decide at the end of the year that Deworm the World is no longer a top charity, we may not think it should have received $2.25 million a few months before. We think this is highly unlikely. • Funding for and implementation of Kenya’s deworming program is complex. It involves the government of Kenya, multiple funders (including GiveWell-influenced donors, the Children’s Investment Fund Foundation, and the END Fund), and the Deworm the World Initiative, among others. We’d recommend any donor considering major gifts to Deworm the World before we publish our updated review of Deworm the World in November contact us one-on-one to discuss these considerations. Recommendation to individuals We continue to recommend that donors give to our current top recommendation: the Against Malaria Foundation. We are not changing the recommendation to donors because: 1. We are unsure how much additional funding we would like to see Deworm the World receive this year. Making it our new recommendation for donors could drive enough funding in the next few months to overshoot the total Deworm the World could effectively use in the near future. 2. The main benefit of giving to Deworm the World now, rather than at the end of the year, is to accelerate the timeline for the MOU in Kenya and possibly work in Pakistan and India. We anticipate that much of the funding driven by a GiveWell recommendation to donors (e.g., as we currently recommend giving 100% to AMF) would not reach Deworm the World until about the time when we will have completed a full room for more funding analysis of Deworm the World’s work, so we see limited benefit in changing our recommendation now, while we do see the cost noted above. The post Why we’re allocating discretionary funds to the Deworm the World Initiative appeared first on The GiveWell Blog. ### Are GiveWell’s top charities the best option for every donor? Wed, 06/21/2017 - 12:15 We’re sometimes asked whether we think GiveWell’s top charities are the “best,” in some absolute sense of the word, or whether we’d ever advise that a donor give to an opportunity outside of our recommendations. This post aims to clarify how GiveWell thinks about different giving options and their suitability for different types of donors. We believe that GiveWell’s top charities offer donors an outstanding opportunity to do a lot of good and are the best option for most donors. However, some donors—those with a very high degree of trust in a particular individual or organization to make this decision, donors with lots of time (in excess of 50 hours per year, and likely more) to consider their giving decision, or donors whose values point strongly toward a particular cause outside of the ones GiveWell covers—may find opportunities to have a greater impact per dollar than GiveWell’s top charities. Note that we think these characteristics are likely to be necessary, but not sufficient, for finding these types of opportunities; we still expect good giving to be hard, and spending, for example, 50 hours per year on research isn’t necessarily going to yield better opportunities. In this post, we describe relevant considerations for donors in greater detail. Giving to GiveWell’s top charities GiveWell was founded to serve donors with limited amounts of time to make giving decisions. GiveWell’s co-founders, Elie Hassenfeld and Holden Karnofsky, were in this situation when they started GiveWell as a side project in 2006. They found that determining where to give effectively was a full-time project and quit their jobs to start GiveWell in 2007. GiveWell’s top charity recommendations serve all donors. We rely on evidence and detail our rationale for making a recommendation publicly, so donors can vet our work; a strength of our recommendations is their falsifiability. We believe our top charity recommendations serve donors who want to give as effectively as possible and have only limited time to determine where to donate, and (prior to GiveWell) no trusted person or entity to outsource their thinking to, particularly well. Our criteria and recommendations were designed with this type of donor in mind: • Our top charities are largely uncontroversial and relatively straightforward ways to do a lot of good—for example, by providing direct aid such as insecticide-treated nets to prevent malaria and cash transfers to very poor households. There is room for debate on the evidence behind these interventions and their cost-effectiveness, but the basic case for them—and the fact that they are likely to do more good than harm—is subject to little debate, so a donor can feel fairly confident in these basics without needing to do their own research. • GiveWell publishes the full details of our charity analyses so that donors can review and vet our work, and so that donors with very limited time can trust that any major problems would likely be caught by others (with more time). • Because we lay out the entire case for the charities online, donors can spot-check any particular part of it to get a sense of whether we’re thinking reasonably about the issues that seem most salient to them. • Our top charities have room for more funding. In other words, we believe additional marginal donations to these organizations enable them to do more good. Our guess is that most donors that use GiveWell fit this profile (want to give as effectively as possible and have only limited time to determine where to donate, and no other trusted person or entity to outsource their thinking to). Below, we discuss alternative donor profiles: (1) Donors with limited time and a high amount of trust in a person or organization to inform their giving decisions This group of donors has limited time to spend on making a giving decision and has an organization or person (other than GiveWell or GiveWell staff) they personally trust to make or inform this decision. In this case, they may defer to that person or organization’s recommendations. (2) Donors with lots of time Donors with a lot of time to spend on giving decisions (50+ hours per year) may be able to find opportunities that GiveWell hasn’t. For example, a donor might know someone who is starting a charity and feel, based on their research, that supporting their project at an early stage might be a particularly leveraged way to do good. A donor with lots of time may also be very familiar with a particular cause and feel highly confident in a particular organization and its need for funding. These donors may want to compare alternative opportunities to GiveWell’s top charities. They may also want to actively vet GiveWell’s recommendations as part of their research process. Donors with lots of time may also wish to apply a different strategy to their giving. GiveWell largely recommends charities where sufficient evidence exists to make a fairly robust estimate of the expected value of a donation. Donors with much more time to spend (maybe even significantly more than 50 hours per year) thinking about where to give may want to take a “hits-based giving” approach—having a high tolerance for philanthropic risk, so long as the overall expected value is sufficiently high. This is the approach the Open Philanthropy Project, which was incubated at GiveWell, has taken, and we believe doing this well requires a lot of work, as the Open Philanthropy Project discussed in a blog post last year (emphasis original): Aim for deep understanding of the key issues, literatures, organizations, and people around a cause, either by putting in a great deal of work or by forming a high-trust relationship with someone else who can. If we [the Open Philanthropy Project] support projects that seem exciting and high-impact based on superficial understanding, we’re at high risk of being redundant with other funders. If we support projects that seem superficially exciting and high-impact, but aren’t being supported by others, then we risk being systematically biased toward projects that others have chosen not to support for good reasons. By contrast, we generally aim to support projects based on the excitement of trusted people who are at a world-class level of being well-informed, well-connected, and thoughtful in relevant ways. Achieving this is challenging. It means finding people who are (or can be) maximally well-informed about issues we’ll never have the time to engage with fully, and finding ways to form high-trust relationships with them. As with many other philanthropists, our basic framework for doing this is to choose focus areas and hire staff around those focus areas. In some cases, rather than hiring someone to specialize in a particular cause, we try to ensure that we have a generalist who puts a great deal of time and thought into an area. Either way, our staff aim to become well-networked and form their own high-trust relationships with the best-informed people in the field. I [Open Philanthropy Project Executive Director Holden Karnofsky] believe that the payoff of all of this work is the ability to identify ideas that are exciting for reasons that require unusual amounts of thought and knowledge to truly appreciate. (3) Donors with values that differ from GiveWell staff Donors who hold different values than the majority of GiveWell staff, or who place more weight on a particular cause outside of the causes covered by GiveWell, may find other giving opportunities to be more attractive for reasons beyond the time/trust framework articulated earlier in this post. For example, individuals who place a very high value on farm animal welfare may wish to give a large proportion of their donation, if not all of their donation, to organizations working in that cause. We’re happy to speak with you about giving decisions. If you’re not sure which considerations apply to you, please reach out. We’re always happy to talk through giving decisions. The post Are GiveWell’s top charities the best option for every donor? appeared first on The GiveWell Blog. ### How thin the reed? Generalizing from “Worms at Work” Wed, 01/04/2017 - 14:37 Hookworm (AJC1/flickr) My last post explains why I largely trust the most famous school-based deworming experiment, in particular the report in Worms at Work about its long-term benefits. That post also gives background on the deworming debate, so please read it first. In this post, I’ll talk about the problem of generalization. If deworming in southern Busia County, Kenya, in the late 1990s permanently improved the lives of some children, what does that tell us about the impact of deworming programs today, from sub-Saharan Africa to South Asia? How safely can we generalize from this study? I’ll take up three specific challenges to its generalizability: • That a larger evidence base appears to show little short-term benefit from mass deworming—and if it doesn’t help much in the short run, how can it make a big difference in the long run? • That where mass deworming is done today, typically fewer children need treatment than in the Busia experiment. • That impact heterogeneity within the Busia sample—the same treatment bringing different results for different children—might undercut expectations of benefits beyond. For example, if examination of the Busia data revealed long-term gains only among children with schistosomiasis, that would devalue treatment for the other three parasites tracked. In my view, none of the specific challenges I’ll consider knocks Worms at Work off its GiveWell-constructed pedestal. GiveWell’s approach to evaluating mass deworming charities starts with the long-term earnings impacts estimated in Worms at Work. Then it discounts by roughly a factor of ten for lower worm burdens in other places, and by another factor of ten out of more subjective conservatism. As in the previous post, I conclude that the GiveWell approach is reasonable. But if I parry specific criticisms, I don’t dispel a more general one. Ideally, we wouldn’t be relying on just one study to judge a cause, no matter how compelling the study or how conservative our extrapolation therefrom. Nonprofits and governments are spending tens of millions per year on mass deworming. More research on whether and where the intervention is especially beneficial would cost only a small fraction of all those deworming campaigns, yet potentially multiply their value. Unfortunately, the benefits that dominate our cost-effectiveness calculations manifest over the long run, as treated children grow up. And long-term research tends to take a long time. So I close by suggesting two strategies that might improve our knowledge more quickly. Here are Stata files for the quantitative assertions and graphs presented below. Evidence suggests short-term benefits are modest Researchers have performed several systematic reviews of the evidence on the impacts of deworming treatment. In my research, I focused on three of those reviews. Two come from institutions dedicated to producing such surveys, and find that mass deworming brings little benefit, most emphatically in the short run. But the third comes to a more optimistic answer. The three are: • The Cochrane review of 2015, which covers 45 trials of the drug albendazole for soil-transmitted worms (geohelminths). It concludes: “Treating children known to have worm infection may have some nutritional benefits for the individual. However, in mass treatment of all children in endemic areas, there is now substantial evidence that this does not improve average nutritional status, haemoglobin, cognition, school performance, or survival.” • The Campbell review of 2016, which extends to 56 randomized short-term studies, in part by adding trials of praziquantel for water-transmitted schistosomiasis. “Mass deworming for soil-transmitted helminths …had little effect. For schistosomiasis, mass deworming might be effective for weight but is probably ineffective for height, cognition, and attendance.” • The working paper by Kevin Croke, Eric Hsu, and authors of Worms at Work. The paper looks at impacts only on weight, as an indicator of recent nutrition. (Weight responds more quickly to nutrition than height.) While the paper lacks the elaborate, formal protocols of the Cochrane and Campbell reviews, it adds value in extracting more information from available studies in order to sharpen the impact estimates. It finds: “The average effect on child weight is 0.134 kg.” Before confronting the contradiction between the first two reviews and the third, I will show you a style of reasoning in all of them. The figure below constitutes part of the Campbell review’s analysis of the impact of mass administration of albendazole (for soil-transmitted worms) on children’s weight (adapted from Figure 6 in the initial version): Each row distills results from one experiment; the “Total” row at the bottom draws the results together. The first row, for instance, is read as follows. During a randomized trial in Uganda run by Harold Alderman and coauthors, the 14,940 children in the treatment group gained an average 2.413 kilograms while the 13,055 control kids gained 2.259 kg, for a difference in favor of the treatment group of 0.154 kg. For comparability with other studies, which report progress on weight in other ways, the difference is then re-expressed as 0.02 standard deviations, where a standard deviation is computed as a sort of average of the 7.42 and 8.01 kg figures shown for the treatment and control groups. The 95% confidence range surrounding the estimate of 0.02 is written as [–0.00, 0.04] and is in principle graphed as a horizontal black line to the right, but is too short to show up. Because of its large sample, the Alderman study receives more weight (in the statistical sense) than any other in the figure, at 21.6% of the overall number. The relatively large green square in the upper right signifies this influence. In the lower-right of the figure, the bolded numbers and the black diamond present the meta-analytical bottom line: across these 13 trials, mass deworming increased weight by an average 0.05 standard deviations. The aggregate 95% confidence interval stretches from –0.02 to 0.11, just bracketing zero. The final version of the Campbell report expresses the result in physical units: an average gain of 0.09 kg, with a 95% confidence interval stretching from –0.09 kg to +0.28 kg. And so it concludes: “Mass deworming for soil-transmitted helminths with albendazole twice per year compared with controls probably leads to little to no improvement in weight over a period of about 12 months.” Applying similar methods to a similar pool of studies, the Cochrane review (Analysis 4.1) produces similar numbers: an average weight gain of 0.08 kg, with a 95% confidence interval of –0.11 to 0.27. This it expresses as “For weight, overall there was no evidence of an effect.” But Croke et al. incorporate more studies, as well as more data from the available studies, and obtain an average weight gain of 0.134 kg (95% confidence interval: 0.03 to 0.24), which they take as evidence of impact. How do we reconcile the contradiction between Croke et al. and the other two? We don’t. For no reconciliation is needed, as is made obvious by this depiction of the three estimates of the impact of mass treatment for soil-transmitted worms on children’s weight: Each band depicts one of the confidence intervals I just cited. The varied shading reminds us that within each band, probability is highest near the center. The bands greatly overlap, meaning that the three reviews hardly disagree. Switching from graphs to numerical calculations, I find that the Cochrane results reject the central Croke et al. estimate of 0.134 kg at p = 0.58 (two-tailed Z-test), which is to say, they do not reject with any strength. For Croke et al. vs. Campbell, p = 0.64. So the Croke et al. estimate does not contradict the others; it is merely more precise. The three reviews are best seen as converging to a central impact estimate of about 0.1 kg of weight gain. Certainly 0.1 kg fits the evidence better than 0.0 kg. If wide confidence intervals in the Cochrane and Campbell reviews are obscuring real impact on weight, perhaps the same happening with other outcomes, including height, hemoglobin, cognition, and mortality. Discouragingly, when I scan the Cochrane review’s “Summary of findings for the main comparison” and Campbell’s corresponding tables, confidence intervals for outcomes other than weight look more firmly centered on zero. That in turn raises the worry that by looking only at weight, Croke et al. make a selective case on behalf of deworming.[1] On the other hand, when we shift our attention from trials of mass deworming to trials restricted to children known to be infected—which have more power to detect impacts—it becomes clear that the boost to weight is not a one-off. The Cochrane review estimates that targeting treatment at kids with soil-transmitted worms increased weight by 0.75 kilograms, height by 0.25 centimeters, mid-upper arm circumference by 0.49 centimeters, and triceps skin fold thickness by 1.34 millimeters, all significant at p = 0.05. Treatment did not, however, increase hemoglobin (Cochrane review, “Data and Analyses,” Comparison 1). In this light, the simplest theory that is compatible with the evidence arrayed so far is that deworming does improve nutrition in infected children while leaving uninfected children unchanged; and that available studies of mass deworming tend to lack the statistical power to detect the diluted benefits of mass deworming, even when combined in a (random effects) meta-analysis. The compatibility of that theory with the evidence, by the way, exposes a logical fallacy in the Cochrane authors’ conclusion that “there is now substantial evidence” that mass treatment has zero effect on the outcomes of interest. Lack of compelling evidence is not compelling evidence of lack. Yet the Cochrane authors might be right in spirit. If the benefit of mass deworming is almost too small to detect, it might be almost too small to matter. Return to the case of weight: is ~0.1 kg a lot? Croke et al. contend that it is. They point out that “only between 2 and 16 percent of the population experience moderate to severe intensity infections in the studies in our sample that report this information,” so their central estimate of 0.134 could indicate, say, a tenth of children gaining 1.34 kg (3 pounds). This would cohere with Cochrane’s finding of an average 0.75 kilogram gain in trials that targeted infected children. In a separate line of argument, Croke et al. calculate that even at 0.134, deworming more cost-effectively raises children’s weight than school feeding programs do. But neither defense gets at what matters most for GiveWell, which is whether small short-term benefits make big long-term earnings gains implausible. Is 0.134 kg in weight gain compatible with 15% income gain 10 years later reported in Worms at Work? More so than it may at first appear, once we take account of two discrepancies embedded in that comparison. First, more kids had worms in Busia. I calculate that 27% of children in the Worms sample had moderate or serious infections, going by World Health Organization (WHO) guidelines, which can be viewed conservatively as double the 2–16% Croke et al. cite as average for the kids behind that 0.134 kg number.[2] So in a Worms-like setting, we should expect twice as many children to have benefited, doubling the average weight gain from 0.134 to 0.268 kg. Second, at 13.25 years, the Worms children were far older than most of the children in the studies surveyed by Croke et al. Subjects averaged 9 months of age in the Awasthi 2001 study, 12–18 months in Joseph 2015, 24 months in Ndibazza 201236 months in Willett 1979, and 2–5 years in Sur 2005. 0.268 kg means more for such small people. As Croke et al. point out, an additional 0.268 kg nearly suffices to lift a toddler from the 25th to the 50th percentile for weight gain between months 18 and 24 of life (girls, boys). In sum, the statistical consensus on short-term impacts on nutritional status does not render implausible the long-term benefits reported out of Busia. The verdict of Garner, Taylor-Robinson, and Sachdev—“no effect for the main biomedical outcomes…, making the broader societal benefits on economic development barely credible”—overreaches. In many places, fewer kids have worms than in Busia in 1998–99 If we accept the long-term impact estimates from Worms at Work, we can still question whether those results carry over to other settings. This is precisely why GiveWell deflates the earnings impact by two orders of magnitude in estimating the cost-effectiveness of deworming charities. One of those orders of magnitude arises from the fact that school-age children in Busia carried especially heavy parasite loads. Where loads are lighter, mass deworming will probably do less good. (The other order of magnitude reflects a more subjective worry that if Worms at Work were replicated in other places with similar parasite loads, it would fail to show any benefits there, a theme to which I will return at the end.) GiveWell’s cost-effectiveness spreadsheet does adjust for difference in worm loads between Worms and places where recommended charities support mass deworming today. So I spent some time scrutinizing this discount—more precisely, the discounts of individual GiveWell staffers. I worried in particular that the ways we measure worm loads could lead my colleagues to overestimate the need for and benefit from mass deworming. As a starting point, I selected a few data points from one of the metrics GiveWell has gathered, the fraction of kids who test positive for worms. This table shows the prevalence of worm infection, by type, in Busia, 1998–99, before treatment, and in program areas of two GiveWell-recommended charities: The first row, computed from the public Worms data set, reports that before receiving any treatment from the experiment, 81% of tested children in Busia were positive for hookworm, 51% for roundworm, 62% for whipworm, and 36% for schistosomiasis. 94% tested positive for at least one of those parasites. On average, each child carried 2.3 distinct types of worm. Then, from the GiveWell cost-effectiveness spreadsheet, come corresponding numbers for areas served by programs linked to the Schistosomiasis Control Initiative (SCI) and Deworm the World. Though approximate, the numbers suffice to demonstrate that far fewer children served by these charities have worms than in the Worms experiment. For example, the hookworm rate for Deworm the World is estimated at 24%, which is 30% of the rate of Busia in 1998–99. Facing less need, we should expect these charities’ activities to do less good than is found in Worms at Work. But that comparison would misrepresent the value of deworming today if the proportion of serious infections is even lower today relative to Busia. To get at the possibility, I made a second table for the other indicator available to GiveWell, which is the intensity of infection, measured in eggs per gram of stool: Indeed, this comparison widens the apparent gap between Busia of 1998–99 and charities of today. For example, hookworm prevalence in Deworm the World service areas was 30% of the Busia rate (24 vs. 81 out of every 100 of kids), while intensity was only 20% (115 vs. 568 eggs/gram). After viewing these sorts of numbers, the median GiveWell staffer multiplies the Worms at Work impact estimate by 14%—that is, divides it by seven. In aggregate, I think my coworkers blend the discounts implied by the prevalence and intensity perspectives.[3] To determine the best discount, we’d need to know precisely what characterized the children within the Worms experiment who most benefited over the long term—perhaps lower weight, or greater infection with a particular parasite species. As I will discuss below, such insight is easier imagined than attained. Then, if we had it, we would need to know the number of children in today’s deworming program areas with similar profiles. Obtaining that data could be a tall order in itself. To think more systematically about how to discount for differences in worm loads, within the limits of the evidence, I looked to some recent research that models how deworming affects parasite populations. Nathan Lo and Jason Andrews led the work (2015, 2016). With Lo’s help, I copied their approach in order to estimate how the prevalence of serious infection varies with the two indicators at GiveWell’s fingertips.[4] For my purposes, the approach introduces two key ideas. First, data gathered from many locales shows how, for each worm type, the average intensity of infection tends to rise as prevalence increases. Not surprisingly, where worm infection is more common, average severity tends to be higher too—and Lo and colleagues estimate how much so. Second is the use a particular mathematical family of curves to represent the distribution of children by intensity levels—how many have no infection, how many have 1-100 eggs/gram, how many are above 100 eggs/gram, etc. (The family, the negative binomial, is an accepted model for the prevalence of infectious diseases.) If we know two things about the pattern of infection, such as the fraction of kids who have it and their average intensity, we can mathematically identify a unique member of the family. And once a member is chosen, we can estimate the share of children with, for example, hookworm infections exceeding 2,000 eggs/gram, which is the WHO’s suggested minimum for moderate or heavy infection. The next two graphs examine how, under these modeling assumptions, the fraction of children with moderate/heavy infections varies in tandem with the two indicators at GiveWell’s disposal, which are prevalence of infection and average infection intensity: The important thing to notice is that the curves are much curvier in the first graph. There, for example, as the orange hookworm curve descends, it converges to the left edge just below 40%. This suggests that if a community has half as many kids with hookworm as in Busia—40% instead of about 80%—then it could have far less than half as many kids with serious infections—indeed, almost none. But the straighter lines in the second graph mean that a 50% drop in intensity (eggs/gram) corresponds to a 50% drop in the number of children with serious disease. While we don’t know exactly what defines a serious infection, in the sense of offering hope that treatment could permanently lift children’s trajectories, these simulations imply that it is reasonable for GiveWell to extrapolate from Worms at Work on the basis of intensity (eggs/gram). Returning to the intensity table above, I find that the Deworm the World egg counts, by worm type, average 16% of those in Busia. For the Schistosomiasis Control Initiative, the average ratio is 7% (and is 6% just for SCI’s namesake disease). These numbers say—as far as this sort of analysis can take us—that GiveWell’s 14% discounts are about right for Deworm the World, and perhaps ought to be halved for SCI. Halving is not as big a big change as it may seem; GiveWell has no illusions about the precision of its estimates, and performs them only to sense the order of magnitude of expected impact. Impact heterogeneity in the Worms experiment Having confronted two challenges to the generalizability of Worms at Work—that short-term non-impacts make long-term impacts implausible, and that worm loads are lower in most places today than they were in Busia in 1998–99—I turned to one more. Might there be patterns within the Worms at Works data that would douse hopes for impact beyond? For example, if only children with schistosomiasis experienced those big benefits, that would call into question the value of treating geohelminths (hookworm, roundworm, whipworm). Returning to the Worms at Work data, I searched for—and perhaps found—signs of heterogeneity in impact. I gained two insights thereby. The first, as it happens, is more evidence that is easier-explained if we assume that the Worms experiment largely worked, the theme of the last post. The second is a keener sense that there is no such thing as the “the” impact of an intervention, since it varies by person, time, and place. That heightened my nervousness about extrapolating from a single study. Beyond that general concern, I did not find specific evidence that would explicitly cast grave doubt on whole deworming campaigns. My hunt for heterogeneity went through two phases. In the first, motivated by a particular theory, I brought a narrow set of hypotheses to the data. In the second, I threw about 20 hypotheses at the data and watched what stuck: Did impact vary by sex or age? By proximity to Lake Victoria, where live the snails that carry Schistosoma mansoni? As statisticians put it, I mined the data. The problem with that is that since I tested about 20 hypotheses, I should expect about one to manifest as statistically significant just by chance (at p = 0.05). So the pattern I unearthed in the second phase should perhaps not be viewed as proof of anything, but as the basis for a hypothesis that, for a proper test, requires fresh data from another setting. Introducing elevation My search began this way. In my previous post, I entertained an alternative theory for Owen Ozier‘s finding that deworming indirectly benefited babies born right around the time of the original Worms experiment. Maybe, I thought, the 1997–98 El Nino, which brought heavy flooding to Kenya, exacerbated the conditions for the spread of worms, especially at low elevations. And perhaps by chance the treatment schools were situated disproportionately at high elevations, so their kids fared better. This could explain all the results in Worms and its follow-ups, including Ozier’s paper. But the second link in that theory proved weak, especially when defining the treatment group as groups 1 and 2 together, as done in Worms at Work. (Group 1 received treatment starting in 1998, group 2 in 1999, and group 3 in 2001, after the experiment ended.) Average elevation was essentially indistinguishable between the Worms at Work treatment and control groups. Nevertheless, my investigation of the first link in the theory led me to some interesting discoveries. To start, I directly tested the hypothesis that elevation mattered for impact by “interacting” elevation with the treatment indicator in a key Worms at Work regression. In the original regression, deworming is found to increase the logarithm of wage earnings by 0.269, meaning that deworming increased wage earnings by 30.8%. In the modified regression, the impact could vary with elevation in a straight-line way, as shown in this graph of the impact of deworming in childhood on log wage earnings in early adulthood as a function of school elevation: The grey bands around the central line show confidence intervals rather as in the earlier graph on weight gains. The black dots along the bottom show the distribution of schools by elevation. I was struck to find the impact confined to low schools. Yet it could be explained. Low schools are closer to Lake Victoria and the rivers that feed it; and their children therefore were more afflicted by schistosomiasis. In addition, geohelminths (soil-transmitted worms) might have spread more easily in the low, flat lands, especially after El Nino–driven floods. So lower schools may have had higher worm loads.[5] To fit the data more flexibly, I estimated the relationship semi-parametrically, with locally weighted regressions[6]. This involved analyzing whether among schools around 1140 meters, deworming raised wages; then the same around 1150 meters, and so on. That produced this Lowess-smoothed graph of the impact of deworming on log wage earnings: This version suggests that the big earnings impact occurred in schools below about 1180 meters, and possibly among schools at around 1250. (For legibility, I truncated the fit at 1270 meters; beyond which the confidence intervals explode for lack of much data.) Motivated by the theory that elevation mattered for impact because of differences in pre-experiment infection rates, I then graphed how those infections varied with elevation, among the subset of schools with the needed data.[7] Miguel and Kremer measure worm burdens in three ways: prevalence of any infection, prevalence of moderate or heavy infection, and intensity (eggs/gram). So I did as well. First, this graph shows infection prevalence versus school elevation, again in a locally smoothed way: Like the first table in this post, this graph shows that hookworms lived in nearly all the children, while roundworm and whipworm were each in about half. Not evident before is that schistosomiasis was common at low elevations, but faded higher up. Roundworm and whipworm also appear to fall as one scans from left to right, but then rebound around 1260 meters. The next graph is the same except that it only counts infections that are moderate or heavy according to WHO definitions[8]: Interestingly, restricting to serious cases enhances the similarity between the infection curves, just above, and the earlier semi-parametric graph of earnings impact versus elevation. The “Total” curve starts high, declines until 1200 meters or so, then peaks again around 1260. Last, I graphed Miguel and Kremer’s third measure of worm burden, intensity, against elevation. Those images resemble the graph above, and I relegate them to a footnote for concision.[9] These elevation-stratified plots teach three lessons. First, the similarity between the prevalence contours and the earnings impact contour shown earlier—high at the low elevations and then again around 1260 meters—constitutes circumstantial evidence for a sensible theory: children with the greatest worm burdens benefited most from treatment. Second, that measuring worm load to reflect intensity—moving to the graph just above from the one before—strengthens this resemblance and reinforces the notion of extrapolating from Worms at Work on the basis of intensity (average eggs/gram, not how many kids have any infection). Finally, these patterns buttress the conclusion of my last post, that the Worms experiment mostly worked. If we grant that deworming probably boosted long-term earnings of children in Busia, then it becomes unsurprising that it did so more where children had more worms. But if we doubt the Worms experiments, then these results become more coincidental. For example, if we hypothesize that flawed randomization put schools whose children were destined to earn more in adulthood disproportionately in the treatment group, then we need another story to explain why this asymmetry only occurred among the schools with the heaviest worm loads. And all else equal, per Occam’s razor, more-complicated theories are less credible. As I say, the evidence is circumstantial: two quantities of primary interest—initial worm burden and subsequent impact—relate to elevation in about the same way. Unfortunately, it is almost impossible to directly assess the relationship between those two quantities, to ask whether impact covaried with need. The Worms team did not test kids until their schools were about to receive deworming treatment “since it was not considered ethical to collect detailed health information from pupils who were not scheduled to receive medical treatment in that year.” My infection graphs are based on data collected at treatment-group schools only, just before they began receiving deworming in 1998 or 1999. Absent test results for control-group kids, I can’t run the needed comparison. Contemplating the exploration to this point, I was struck to appreciate that while elevation might not directly matter for the impacts of deworming, like a saw through a log, introducing it exposed the grain of the data. It gave me insight into a relationship that I could not access directly, between initial worm load and subsequent benefit. Mining in space After I confronted the impossibility of directly testing whether initial worm burden influenced impact, I thought of one more angle from which to attack the question, if obliquely. This led me, unplanned, to explore the data spatially. As we saw, nearly all children had geohelminths. So all schools were put on albendazole, whether during the experiment (for treatment groups) or after (control group). In addition, the pervasiveness of schistosomiasis in some areas called for a second drug, praziquantel. I sought to check whether the experiment raised earnings more for children in those areas. Such a finding could be read to say that schistosomiasis is an especially damaging parasite, making treatment for it especially valuable. Or, since the low-elevation schistosomiasis schools tended to have the highest overall worm burdens, it could be taken as a sign that higher parasite loads in general lead to higher benefit from deworming. Performing the check first required some educated guess work. The Worms data set documents which of the 50 schools in the treatment groups needed and received praziquantel, but not which of the 25 control group schools would have needed it in 1998–99. To fill in these blanks, I mapped the schools by treatment group and praziquantel status. Group 1 schools, treated starting in 1998, are green. Group 2 schools, treated starting in 1999, are yellow. And group 3 (schools not treated till 2001) are red. The white 0’s and 1’s next to the group 1 and 2 markers show which were deemed to need praziquantel, with 1 indicating need: Most of the 1’s appear in the southern delta and along the shore of Lake Victoria. By eyeballing the map, I could largely determine which group 3 schools also needed praziquantel. For example, those in the delta to the extreme southwest probably needed it since all their neighbors did. I was least certain about the pair to the southeast, which lived in a mixed neighborhood, as it were; I arbitrarily marked one for praziquantel and one not.[10] Returning to the Worms at Work wage earnings regression and interacting treatment with this new dummy for praziquantel need revealed no difference in impact between schools where only albendazole was deemed needed and given, and schools where both drugs were needed and given: Evidently, treatment for geohelminths and schistosomiasis, where both were needed, did not help future earnings much more or less than treatment for geohelminths, where only that was warranted. So the comparison generates no strong distinction between the worm types. After I mapped the schools, it hit me: I could make two-dimensional versions of my earlier graphs, slicing the data not by elevation, but by longitude and latitude. To start, I fed the elevations of the 75 schools, marked below with white dots, into my statistics software, Stata, and had it estimate the topography that best fit. This produced a depiction of the contours of the land in southern Busia County, with the brightest reds indicating the highest areas: (Click image for a larger version.) I next graphed the impact of deworming on log wage earnings. Where before I ran the Worms at Work wage earnings regression centering on 1140 meters, then 1150, etc., now I ran the regression repeatedly across a grid, each time giving the most weight to the nearest schools [11]: Two valleys of low impact dimly emerge, one toward the Lake in the south, one in the north where schools are higher up. Possibly these two troughs are linked to the undulations in my earlier, elevation-stratified graphs. Next, I made graphs like these for all 21 baseline variables that Worms checks for balance—such as fraction of students who are girls and average age. All the graphs are here. Now I wonder if this was a mistake. None of the graphs fit the one above like a key in lock, so I found myself staring at blobs and wondering which vaguely resembled the pattern I sought. I had no formal, pre-specified measure of fit, which increased uncertainty and discretion. Perhaps it was just a self-administered Rorschach test. Yet the data mining had the power to dilute any p values from subsequent formal tests. In the end, one variable caught my eye when mapped, and then appeared to be an important mediator of impact when entered into the wage earnings regression. It is: a child’s initial weight-for-age Z-score (WAZ), which measures a child’s weight relative to his or her age peers.[12] Here is the WAZ spatial plot. Compare it to the one just above. To my eye, where WAZ was high, subsequent impact was generally lower: (Since most children in this sample fell below the reference median, their weight-to-age Z-scores were negative, so in here average WAZ ranges between –1.3 and about –1.5.) Going back to two dimensions, this graph more directly checks the relationship I glimpsed above, by showing how the impact of deworming on wage earnings varied with children’s pre-treatment weight-to-age Z-score: It appears that only children below –2, which is the standard definition of “underweight,” benefited enough from deworming treatment that it permanently lifted their developmental trajectories. If the pattern is real, two dynamics could explain it. Children who were light for their age may have been so precisely because they carried more parasites, and were in deep need of treatment. Or perhaps other health problems made them small, which also rendered them less resilient to infection, and again more needful of treatment. The lack of baseline infection data for the control group prevents me from distinguishing between these theories. Struck by this suggestion that low initial weight predicted impact, and mindful of the meta-analytic consensus that deworming affects weight, I doubled back to the original Worms study to ask a final question. Were any short-term weight gains in Busia concentrated among kids who started out the most underweight? This could link short-term impacts on weight with long-term impacts on earnings, making both more credible. I made this graph of the one-year impact of deworming treatment on weight-for-age Z-score versus weight-for-age Z-score before treatment (1998)[13]: The graph seems to support my hypothesis. Severely underweight children (at or below –3) improve by about 0.2 points in Z-score. Underweight children (at or below –2) gain perhaps 0.1 on average. But there is a puzzling twist. While treatment raised weight among the most severely underweight children, it apparently reduced the weight of the heaviest children. (Bear in mind that in registering just above 0, the highest-WAZ children in Busia were merely surpassing 50th percentile in the global reference population.) Conceivably, certain worm infections cause weight gain, which is reversed by treatment; but here I am speculating. Statisticians might wonder if this graph reveals regression toward the mean. Just as the temperature must rise after the coldest day of the year and fall after the hottest, we could expect that the children who started the experiment the most underweight would become less so, and vice versa. But since the graph compares treatment and control schools, regression toward the mean only works as a theory if it occurred more in the treatment group. That would require a failure of randomization. The previous post argued that the imperfections in the Worms randomization were probably not driving the main results; but possibly they are playing a larger role in these second-order findings about heterogeneity of impact. Because of these doubts, and because I checked many hypotheses before gravitating to weight-for-age as a mediator of impact, I am not confident that physical health was a good predictor of the long-run impact of deworming on earnings. I view the implications of the last two graphs—that deworming increased weight in the short run and earnings in the long run only among the worst-off children—merely as intriguing. As an indicator of heavy worm burden or poor general health, low weight may have predicted impact. That hypotheses ought to probed afresh in other data, this time with pre-registered transparency. The results from such replication could then sharpen our understanding of how to generalize from Worms at Work. But I emphasize that my earlier findings revolving around elevation are more confident, because they came out of a small and theoretically motivated set of hypotheses. At elevations where worms were more prevalent, deworming did more long-term good. Conclusions I glean these facts: • Treatment of children known to carry worms improves their nutritional status, as measured by weight and height. • Typically, a minority of children in today’s deworming settings are infected, so impacts from mass deworming are smaller and harder to detect. • In meta-analyses, 95% confidence intervals for the impacts of mass deworming tend to contain zero. • In the case of weight—which is among the best-studied outcomes and more likely to respond to treatment in the short run—Croke et al. improve the precision of meta-analysis. Their results are compatible with others’ estimates, yet make it appear unlikely that average short-term impact of mass deworming is zero or negative. • Though the consensus estimate of about 0.1 kg for weight gain looks small, once one accounts for the youth and low infection rates of the children behind the number, it does not sit implausibly with the big long-term earnings benefit found in Worms at Work. • Extrapolating the Worms at Work results to other settings in proportion to infection intensity (eggs/gram) looks reasonable. This will adjust for the likelihood that as prevalence of infection falls, prevalence of serious infection falls faster. Extrapolating this way might leave GiveWell’s cost-effectiveness rating for the Deworm the World unchanged while halving that for the Schistosomiasis Control Initiative (which is not a lot in calculations that already contain large margins of error). • Within Busia, 1998–99, evidence suggests that the benefits of deworming were confined to children who were the worst off, e.g., who were more numerous at elevations with the most worm infections. • To speak to the theme of the previous post, this hint of heterogeneity is harder to explain if we believe randomization failure caused the Worms at Work results. • I did not find heterogeneity that could radically alter our appraisal of charities, such as signs that only treatment of schistosomiasis had long-term benefits. This recitation of facts makes GiveWell’s estimate of the expected value of deworming charities look reasonable. Yet, it is also unsatisfying. It is entirely possible that today’s deworming programs do much less, or much more, good than implied by the most thoughtful extrapolation from Worms at Work. Worms, humans, institutions, and settings are diverse, so impacts probably are too. And given the stakes in wealth and health, we ideally would not be in the position of relying so much on one study, which could be flawed or unrepresentative, my defenses notwithstanding. Only more research can make us more sure. If donors and governments are willing to spend nine-figure sums on deworming, they ought to devote a small percentage of that flow to research that could inform how best to spend that money. Unfortunately, research on long-term impacts can take a long time. In the hope of bringing relevant knowledge to light faster, here are two suggestions. All reasonable effort should be made to: • Gather and revisit underlying data (“microdata”) from existing high-quality trials, so that certain potential mediators of impact, such as initial worm load and weight, can be studied. This information could influence how we extrapolate from the studies we have to the contexts where mass deworming may be undertaken today. As a general matter, it cannot be optimal that only the original authors can test hypotheses against their data, as is so often the case. In practice, different authors test different outcomes measured different ways, reducing comparability across studies and eroding the statistical power of meta-analysis. Opportunities for learning left unexploited are a waste potentially measured in the health of children. • Turn past short-term studies into long-term ones by tracking down the subjects and resurvey them.[14] This is easier said than done, but that does not mean a priori that it would be a waste to push harder against this margin. Then, long-term research might not take quite so long. Addition, January 9, 2017: One other short-term source of long-term evidence is the impending analysis of the 2011–14 follow-up on the Worms experiment, mentioned in the previous post. If the analysis of the impacts on earnings—which GiveWell has not yet seen—reveals impacts substantially different from those found in the previous round, which are the basis for Worms at Work, this could greatly affect GiveWell’s valuations of deworming charities. Notes [1] Croke et al. do motivate their focus on weight in a footnote. Only three outcomes are covered by more than three studies in the Cochrane review’s meta-analyses: weight, height, and hemoglobin. Height responds less to recent health changes than weight, so analysis of impacts on height should have lower power. Hemoglobin destruction occurs most with hookworm, yet only one of the hemoglobin studies in the Cochrane review took place in a setting with significant hookworm prevalence. [2] I thank Kevin Croke for pointing out the need for this adjustment. [3] Columns S–W of the Parameters tab suggest several choices based on prevalence, intensity, or a mix. Columns Y–AC provide explanations. GiveWell staff may then pick from suggested values or introduce their own. [4] Lo et al. 2016 fit quadratic curves for the relationship between average infection intensity among the infected (in eggs/gram) and prevalence of any infection. The coefficients are in Table A2. If we then assume that the distribution of infection intensity is in the (two-parameter) negative binomial family, fixing two statistics—prevalence and average intensity as implied by its quadratic relationship with prevalence—suffices to determine the distribution. We can then compute the number of people whose infection intensity exceeds a given standard. In the usual conceptual framework of the negative binomial distribution, each egg per gram is considered a “success.” A fact about the negative binomial distribution that helps us determine the parameters is P = 1–(1 + M/r)^(–r), where M is average eggs/gram for the entire population, including the uninfected; r is the dispersion parameter, i.e., the number of failures before the trials stop; and P is prevalence of any infection, i.e., the probability of at least one success before the requisite number of failures. One conceptual problem in this approach is that intensity in eggs/gram is not a natural count variable despite being modeled as such. Changing the unit of mass in denominator, such as to 100 mg, will somewhat change the simulation results. In the graphs presented here, I work with 1000/24 = 41.67 grams as the denominator since that is a typical mass on the slide of a Kato-Katz test and 24 is thus a standard multiplier when performing the test. [5] I also experimented with higher-order polynomials in elevation. This hardly changed the results. [6] I rerun the Worms at Work regression repeatedly while introducing weights centered around elevations 1140, 1150, …, etc. meters. Following the default in Stata’s lowess command, the kernel is Cleveland’s bicube. The bandwidth is 50% of the sample elevation span. [7] The Worms research team tested random subsets of children at treatment schools just before they were treated, meaning that pre-treatment infection data are available for a third of schools (group 1) for early 1998 and another third (group 2) for early 1999. To maximize statistical power, I merge these pre-treatment samples. Ecological conditions changed between those two collection times, as the El Nino passed, which may well have affected worm loads. But pooling them should not cause bias if schools are reasonably well mixed in elevation, as they appear to be. Averages adjust for the stratification in the sampling of students for testing: 15 students were chosen for each school and grade. [8] Miguel and Kremer modify the World Health Organization’s suggested standards for moderate infection, stated with reference to eggs per gram of stool. To minimize my discretion, I follow the WHO standards exactly. [9] There are separate graphs for hookworm, roundworm, whipworm, and schistosomiasis. Here, the shades of grey do not signify levels of confidence about the true average value. Rather, they indicate the 10th, 20th, …, 90th percentiles in eggs per gram, while the black lines show medians (50th percentiles). [10] Among the group 3 schools, I marked those which school identifiers 108, 218, 205, 202, 189, 167, 212, 211 as warranting praziquantel. [11] The spatially smoothed impact regressions, and the spatially smoothed averages of baseline variables graphed next, are plotted using the same bandwidth and kernel as before, except that now distance is measured in degrees, in two dimensions. Since Busia is very close to the equator, latitude and longitude degrees correspond to the same distances. Locally weighted averages are computed at a 21×21 grid of points within the latitude and longitude spans of the schools. Points more than .05 degrees from all schools are excluded. Stata’s thin-plate-spline interpolation then fills in the contours. [12] Weight-for-age z scores are expressed relative to the median of a reference distribution, which I believe comes from samples of American children from about 50 years ago. The WHO and CDC provide reference tables. [13] The regressions behind the following two graphs incorporate all controls from the Baird et al. low wage earnings regression that are meaningful in this shorter-term context: all interactions of sex and standard (grade) dummies, zone dummies, and initial pupil population. [14] This idea is inspired by a paper by Kevin Croke, although that paper links a short-term deworming study to long-term outcomes at the parish level, not the individual level. The post How thin the reed? Generalizing from “Worms at Work” appeared first on The GiveWell Blog. ### Why I mostly believe in Worms Tue, 12/06/2016 - 09:20 The following statements are true: • GiveWell is a nonprofit dedicated to finding outstanding giving opportunities through in-depth analysis. Thousands of hours of research have gone into finding our top-ratepd charities.” • GiveWell recommends four deworming charities as having outstanding expected value. Why? Hundreds of millions of kids harbor parasitic worms in their guts[1]. Treatment is safe, effective, and cheap, so much so that where the worms are common, the World Health Organization recommends administering pills once or twice a year to all children without incurring the cost of determining who is infected. • Two respected organizations, Cochrane and the Campbell Collaboration, have systematically reviewed the relevant studies and found little reliable evidence that mass deworming does good. That list reads like a logic puzzle. GiveWell relies on evidence. GiveWell recommends mass-deworming charities. The evidence says mass deworming doesn’t work. How is that possible? Most studies of mass deworming track impact over a few years. The handful that look longer term find big benefits, including one in Kenya that reports higher earnings in adulthood. So great is that benefit that even when GiveWell discounts it by some 99% out of doubts about generalizability, deworming charities look like promising bets. Still, as my colleagues have written, the evidence on deworming is complicated and ambiguous. And GiveWell takes seriously the questions raised by the Cochrane and Campbell evidence reviews. Maybe the best discount is not 99% but 100%. That would make all the difference for our assessment. This is why, starting in October, I delved into deworming. In this post and the next, I will share what I learned. In brief, my confidence rose in that Kenya study’s finding of higher earnings in adulthood. I will explain why below. My confidence fell in the generalizability of that finding to other settings, as discussed in the next post. As with all the recommendations we make, our calculations may be wrong. But I believe they are reasonable and quite possibly conservative. And notice that they do not imply that the odds are 1 in 100 that deworming does great good everywhere and 99 in 100 that it does no good anywhere. It can instead imply that kids receiving mass deworming today need it less than those in the Kenya study, because today’s children have fewer worms or because they are healthy enough in other respects to thrive despite the worms. Unsurprisingly, I do not know whether 99% overshoots or undershoots. I wish we had more research on the long-term impacts of deworming in other settings, so that we could generalize with more nuance and confidence. In this post, I will first orient you with some conceptual and historical background. Then I’ll think through two concerns about the evidence base we’re standing on: that the long-term studies lack design features that would add credibility; and that the key experiment in Kenya was not randomized, as that term is generally understood. Background Conclusions vs. decisions There’s a deeper explanation for the paradox that opens this post. Back in 1955, the great statistician John Tukey gave an after-dinner talk called “Conclusions vs Decisions,” in which he meditated on the distinction between judging what is true—or might be true with some probability—and deciding what to do with such information. Modern gurus of evidence synthesis retain that distinction. The Cochrane Handbook, which guides the Cochrane and Campbell deworming reviews, is emphatic: “Authors of Cochrane reviews should not make recommendations.” Indeed, researchers arguing today over the impact of mass deworming are mostly arguing about conclusions. Does treatment for worms help? How much and under what circumstances? How confident are we in our answers? We at GiveWell—and you, if you’re considering our charity recommendations—have to make decisions. The guidelines for the GRADE system for rating the quality of studies nicely illustrates how reaching conclusions, as hard and complicated as it is, still leaves you several logical steps short of choosing action. Under the heading, “A particular quality of evidence does not necessarily imply a particular strength of recommendation,” we read: For instance, consider the decision to administer aspirin or acetaminophen to children with chicken pox. Observational studies have observed an association between aspirin administration and Reye’s syndrome. Because aspirin and acetaminophen are similar in their analgesic and antipyretic effects, the low-quality evidence regarding the potential harms of aspirin does not preclude a strong recommendation for acetaminophen. Similarly, high-quality evidence does not necessarily imply strong recommendations. For example, faced with a first deep venous thrombosis (DVT) with no obvious provoking factor patients must, after the first months of anticoagulation, decide whether to continue taking warfarin long term. High-quality randomized controlled trials show that continuous warfarin will decrease the risk of recurrent thrombosis but at the cost of increased risk of bleeding and inconvenience preferences. Because patients with varying values and preferences are likely to make different choices, guideline panels addressing whether patients should continue or terminate warfarin may—despite the high-quality evidence—offer a weak recommendation. I think some of the recent deworming debate has nearly equated the empirical question of whether mass deworming “works” with the practical question of whether it should be done. More than many participants in the conversation, GiveWell has seriously analyzed the logical terrain between the two questions, with a explicit decision framework that allows and forces us to estimate a dozen relevant parameters. We have found the decision process no more straightforward than the research process appears to be. You can argue with how GiveWell has made its calls (and we hope you will, with specificity), and such argument will probably further expose the trickiness of going from conclusion to decision. The rest of this post is about the “conclusions” side of the Tukey dichotomy. But having spent time with our spreadsheet helped me approach the research with a more discerning eye, for example, by sensitizing me to the crucial question of how to generalize from the few studies we have. The research on the long-term impacts of deworming Two studies form the spine of GiveWell’s support for deworming. Ted Miguel and Michael Kremer’s seminal Worms paper reported that after school-based mass deworming in southern Busia county, Kenya, in the late 1990s, kids came to school more. And there were “spillovers”: even kids at the treated schools who didn’t take the pills saw gains, as did kids at nearby schools that didn’t get deworming. However, children did not do better on standardized tests. In all treatment schools, children were given albendazole for soil-transmitted worms—hookworm, roundworm, whipworm. In addition, where warranted, treatment schools received praziquantel for schistosomiasis, which is transmitted through contact with water and was common near Lake Victoria and the rivers that feed it. Worms at Work, the sequel written with Sarah Baird and Joan Hamory Hicks, tracked down the (former) kids 10 years later. It found that the average 2.4 years of extra deworming given to treatment group children led to 15% higher non-agricultural earnings[2], while hours devoted to farm work did not change. The earnings gain appeared concentrated in wages (as distinct from self-employment income), which rose 31%.[3] That’s a huge benefit for a few dollars of deworming, especially if it accrued for years, and is what drives GiveWell’s recommendations of deworming charities. Four more studies track impacts of mass deworming over the long run: • In 2009–10, Owen Ozier surveyed children in Busia who were too young to have participated in the Kenya experiment, since they were not in school yet, but who might have benefited through the deworming of their school-age siblings and neighbors. (If your big sister and her friends don’t have worms, you’re less likely to get them too.) Ozier found that kids born right around the time of the experiment scored higher on cognitive tests years later. • The Worms team confidentially shared initial results from the latest follow-up on the original experiment, based on surveys fielded in 2011–14. Many of those former schoolchildren now have children of their own. The results shared are limited and preliminary, and I advised my colleagues to wait before updating their views based on this research. • Kevin Croke followed up on a deworming experiment that took place across the border in Uganda in 2000–03. (GiveWell summary here.) Dispensing albendazole (for soil-transmitted worms) boosted children’s scores on basic tests of numeracy and literacy administered years later, in 2010 and 2011. I am exploring and discussing the findings with Kevin Croke, and don’t have anything to report yet. • In a remarkable act of historical scholarship, Hoyt Bleakley tracked the impacts of the hookworm eradication campaign initiated by the Rockefeller Foundation in the American South a century ago. Though not a randomized experiment, his analysis indicates that children who benefited from the campaign went on to earn more in adulthood. These studies have increased GiveWell’s confidence in generalizing from Worms at Work—but perhaps only a little. Two of the four follow-up on the original Worms experiment, so they do not constitute fully independent checks. One other is not experimental. For now, the case for mass deworming largely stands or falls with the Worms and Worms at Work studies. So I will focus on them. Worm Wars A few years ago, the International Initiative for Impact Evaluation (3ie) funded British epidemiologists Alexander Aiken and Calum Davey to replicate Worms. (I served on 3ie’s board around this time.) With coauthors, the researchers first exactly replicated the study using the original data and computer code. Then they analyzed the data afresh with their preferred methods. The deeply critical write-ups appeared in the International Journal of Epidemiology in the summer of 2015. The next day, Cochrane (which our Open Philanthropy Project has funded) updated its review of the deworming literature, finding “quite substantial evidence that deworming programmes do not show benefit.” And so, on the dreary plains of academia, did the great worm wars begin. I read through the blogospheric explosion of debate.[4] Much of it is secondary for GiveWell, because it is about the reported bump-up in school attendance after deworming. That matters less to us than the long-term impact on earnings. Getting kids to school is only a means to other ends—at best. Similarly, much debate centers on those spillovers: all sides agree that the original Worms paper overestimated their geographic reach. But that is not so important when assessing charities that aim to deworm all (school-age) children in a region rather than a subset as in the experiment. I think GiveWell should focus on these three criticisms aired in the debate: • The Worms experiment and the long-term follow-ups lack certain design features that are common in epidemiology, with good reason, yet are rare in economics. For example, the kids in the study were not “blinded” through use of placebos to whether they were in a treatment or control group. Maybe they behaved differently merely because they knew they were being treated and observed. • The Worms experiment wasn’t randomized, as that term is usually meant. • Against the handful of promising (if imperfect) long-term studies are several dozen short-term studies, which in aggregate find little or no benefit for outcomes such as survival, height, weight, hemoglobin, cognition, and school performance. The surer we are that the short-term impacts are small, the harder it is to believe that the long-term impacts are big. I will discuss the first two criticisms in this post and the third in the next. “High risk of bias”: Addressing the critique from epidemiology Perhaps the most alarming charge against Worms and its brethren has been that they are at “high risk of bias” (Cochrane, Campbell, Aiken et al., Davey et al.). This phrase comes out of a method in epidemiology for assessing the reliability of studies. It is worth understanding exactly what it means. Within development economics, Worms is seminal because when it circulated in draft in 1999, it launched the field experimentation movement. But it is not as if development economists invented randomized trials. Long before the “randomistas” appeared, epidemiologists were running experiments to evaluate countless drugs, devices, and therapies in countries rich and poor. Through this experience, they developed norms about how to run an experiment to minimize misleading results. Some are codified in the Cochrane Handbook, the bible of meta-analysis, which is the process of systematically synthesizing the available evidence on such questions as whether breast cancer screening saves lives. The norms make sense. An experimental study is more reliable when there is: • Good sequence generation: The experiment is randomized. • Sequence concealment: No one knows before subjects enter the study who will be assigned to treatment and who to control. This prevents, for example, cancer patients from dropping out of a trial of a new chemotherapy when they or their doctors learn they’ve been put in the control group. • Blinding: During the experiment, assignment remains hidden from subjects, nurses, and others who deliver or sustain treatment, so that they cannot adjust their behavior or survey responses, consciously or otherwise. Sometimes this requires giving people in the control group fake treatment (placebos). • Double-blinding: The people who measure outcomes—who take blood pressure, or count the kids showing up for school—are also kept in the dark about who is treatment and who is control. • Minimized incomplete outcome data (in economics, “attrition”): If some patients on an experimental drug fare so poorly that they miss follow-up appointments and drop out of a study, they could make the retained patients look misleadingly well-off. • No selective outcome reporting: Impacts on all outcomes measured are reported—for otherwise we should become suspicious of omissions. Are the researchers hiding contrary findings, or mining for statistically significant impacts? One way researchers can reduce selective reporting and the appearance thereof is to pre-register their analytical plans on a website outside their control. Especially when gathering studies for meta-analysis, epidemiologists prize these features, as well as clear reporting of their presence or absence. Yet most of those features are scarce in economics research. Partly that is because economics is not medicine: in a housing experiment, to paraphrase Macartan Humphreys, an agency can’t give you a placebo housing voucher that leaves you sleeping in your car without your realizing it. Partly it is because these desirable features come with trade-offs: the flexibility to test un-registered hypotheses can let you find new facts; sometimes the hospital that would implement your experiment has its own views on how things should be done. And partly the gap between ideal and reality is a sign that economists can and should do better. I can imagine that, if becoming an epidemiologist involves studying examples of how the absence of such design features can mislead—even kill—people, then this batch of unblinded, un-pre-registered, and even un-randomized deworming studies out of economics might look passing strange.[5] So might GiveWell’s reliance upon them. The scary but vague term of art, “high risk of bias,” captures such worries. The term arises from the Cochrane Handbook, which, as I’ve mentioned, is the authoritative guide for the process of systematically synthesizing available research on a health-related question. The Handbook, like meta-analysis in general, strives for an approach that is mechanical in its objectivity. Studies are to be sifted, sorted, and assessed on observable traits, such as whether they are blinded. In providing guidance to such work, the Handbook distinguishes credibility from quality. “Quality” could encompass such traits as whether proper ethical review was obtained. Since Cochrane focuses on credibility, the handbook authors excluded “quality” from their nomenclature for study design issues. They settled on “risk of bias” as a core term, it being the logical antithesis of credibility. Meanwhile, while some epidemiologists have devised scoring systems to measure risk of bias—plus 1 point for blinding, minus 2 for lack of pre-registration, etc.—the Cochrane Handbook says that such scoring is “is not supported by empirical evidence.” So, out of a sort of humility, the Handbook recommends something simpler: run down a checklist of design features, and for each one, just judge whether a study has it or not. If it does, label it as having “low risk of bias” in that domain. Otherwise, mark it “high risk of bias.” If you can’t tell, call it “unclear risk of bias.” Thus, when a study earns the “high risk of bias” label, that means that it lacks certain design features that all concerned agree are desirable. Full stop. So while the Handbook’s checklist brings healthy objectivity to evidence synthesis, it also brings limitations, especially in our context: • Those unversed in statistics, including many decision-makers, may not appreciate that “bias” carries a technical meeting that is less pejorative than the everyday one. It doesn’t mean “prejudiced.” It means “gives an answer different from the true answer, on average.” So, especially in debates that extend outside of academia, the term’s use tends to sow confusion and inflame emotions. • The binaristic label “high risk of bias” may be humble in origins, but it does not come off as humble in use. At least to non-experts the pronouncement, “the study is at high risk of bias,” seem confident. But how big is the potential bias and how great the risk? More precisely, what is the probability distribution for the bias? No one knows. • While useful when distilling knowledge from reams of research, the objectivity of the checklist comes at a price in superficiality. And the trade-off becomes less warranted when examining five studies instead of 50. As members of the Worms team point out, some Cochrane-based criticisms of their work make less sense on closer inspection. For example, the lack of blinding in Worms “cannot explain why untreated pupils in a treatment school experienced sharply reduced worm infections.” As we will see, by probing beneath the surface of a study—engaging with its specifics, examining its data and code—one can learn much that can enhance or degrade credibility. • The checklist is incomplete. E.g., with an assist from Ben Bernanke, economics is getting better at transparency. Perhaps we should brand all studies for which data and code have not been publicly shared as being at “high risk of bias” for opacity. The controversy that ensued after the 3ie-funded replication of Worms generated a lot of heat, but light too. There were points of agreement. New analysts brought new insights. Speaking personally, exploring the public data and code for Worms and Worms at Work ultimately raised my trust in those studies, as I will explain. If it had done opposite, that too would have raised my confidence in whatever conclusion I extracted. Arguably, Worms is now the most credible deworming study, for no other has survived such scrutiny. So what is a decisionmaker to do with a report of “high risk of bias”? If the choice is between relying on “low risk” studies and “high risk” studies, all else equal, then the choice is clear: favor the “low risk” studies. But what if all the studies before you contain “high risk of bias”? That question may seem to lead us to an analytical cul-de-sac. But some researchers have pushed through it, with meta-epidemiology. A 1995 article (hat tip: Paul Garner) drew together 250 studies from 33 meta-analyses of certain interventions relating to pregnancy, labor, and delivery. They asked: do studies lacking blinding or other good features report bigger impacts? The answers were “yes” for sequence concealment and double-blinding and “not so much” for randomization and attrition. More studies have been done like that. And researchers have even aggregated those, which I suppose is meta-meta-epidemiology. (OK, not really.) One example cited by the Cochrane Handbook finds that lack of sequence concealment is associated with an average impact exaggeration of 10%, and, separately, that lack of double-blinding is associated with exaggeration by 22%.[6] To operationalize “high risk of bias,” we might discount the reported long-term benefits from deworming by such factors. No one knows if those discounts would be right. But they would make GiveWell’s ~99% discount—which can compensate for 100-fold (10000%) exaggeration—look conservative. The epidemiological perspective should alert economists to ways they can improve. And it has helped GiveWell appreciate limitations in deworming studies. But the healthy challenge from epidemiologists has not undermined the long-term deworming evidence as completely as it may at first appear. Why I pretty much trust the Worms experiment Here are Stata do and log files for the quantitative assertions below that are based on publicly available data. I happened to attend a conference on “What Works in Development” at the Brookings Institution in 2008. As economists enjoyed a free lunch, the speaker, Angus Deaton, launched a broadside against the randomization movement. He made many points. Some were so deep I still haven’t fully grasped them. I remember best two less profound things he said. He suggested that Abhijit Banerjee and Esther Duflo flip a coin and jump out of an airplane, the lucky one with a parachute, in order to perform a much-needed randomized controlled trial of this injury-prevention technology. And he pointed out that the poster child of the randomization movement, Miguel and Kremer’s Worms, wasn’t actually randomized—at least not as most people understood that term. It appears that that the charity that carried out the deworming for Miguel and Kremer would not allow schools to be assigned to treatment or control via rolls of a die or the computer equivalent. Instead, Deaton said, the 75 schools were listed alphabetically. Then they were assigned cyclically to three groups: the first school went to group 1, the second to group 2, the third to group 3, the fourth to group 1, and so on. Group 1 started receiving deworming treatment in 1998; group 2 in 1999; and group 3, the control, not until after the experiment ended in 2000. During the Q&A that day at Brookings, Michael Kremer politely argued that he could think of no good theory for why this assignment system would generate false results—why it would cause, say, group 1 students to attend school more for some reason other than deworming.[7] I think Deaton replied by citing the example of a study that was widely thought to be well randomized until someone showed that it wasn’t.[8] His point was that unless an experiment is randomized, you just can’t sure be that no causal demons lurk within. This exchange came to mind when I began reading about deworming. As I say, GiveWell is less interested in whether treatment for worms raised school attendance in the short run than whether it raised earnings in the long run. But those long-term results, in Worms at Work, depend on the same experiment for credibility. In contrast with the meta-analytic response to this concern, which is to affix the label “high risk of bias for sequence generation” and move on, I dug into the study’s data. What I attacked hardest was the premise that before the experiment began, the three school groups were statistically similar, or “balanced.” Mostly the premise won. Yes, there are reasons to doubt the Worms experiment… If I were the prosecutor in Statistical balance police v. Miguel and Kremer, I’d point out that: • Deaton had it wrong: schools were not alphabetized. It was worse than that, in principle. The 75 schools were sorted alphabetically by division and zone (units of local geography in Kenya) and within zones by enrollment. Thus, you could say, a study famous for finding more kids in school after deworming formed its treatment groups on how many kids were in school before deworming. That is not ideal. In the worst case, the 75 schools would have been situated in 25 zones, each with three schools. The cyclic algorithm would then have always put the smallest school in group 1, the middle in group 2, and the largest in group 3. And if the groups started out differing in size, they would probably have differed in other respects too, spoiling credibility. (In defense of Deaton, I should say that the authors’ description of the cyclical procedure changed between 2007 and 2014.) • Worms reports that the experimental groups did start out different in some respects, with statistical significance: “Treatment schools were initially somewhat worse off. Group 1 pupils had significantly more self-reported blood in stool (a symptom of schistosomiasis infection), reported being sick more often than Group 3 pupils, and were not as clean as Group 2 and Group 3 pupils (as observed by NGO field workers).” Now, in checking balance, Table I of Worms makes 42 comparisons: group 1 vs. group 3 and group 2 vs. group 3 for 21 variables. Even if balance were perfect, when imposing a p = 0.05 significance threshold, one should expect about 5% of the tests to show up as significant, or about two of 42. In the event, five show up that way. I confirmed with formal tests that these differences were unexpected in aggregate if the groups were balanced. • Moreover, the groups differed before the experiment in a way not previously reported: in school attendance. Again, this looks very bad, at least on the surface, since attendance is a major focus of Worms. According to school registers, attendance in grades 3–8 in early 1998 averaged 97.3%, 96.3%, and 96.9% in groups 1, 2, and 3 respectively. Notice that group 3’s rate put it between the two others. This explains why, when Worms separately compares groups 1 and 2 to 3, it does not find terribly significant differences (p = 0.4, 0.12). But the distance from group 1 to 2—which is not checked—is more significant (p = 0.02), as is that from group 1 to 2 and 3 averaged together (p = 0.06). In the first year of the experiment, only group 1 was treated. So if it started out with higher attendance, can we confidently attribute the higher attendance over the following year to deworming? Miguel and Kremer point out that school registers, from which those attendance rates come, “are not considered reliable in Kenya.” Indeed, at about 97%, the rates converge rather implausibly toward perfection. This is why the researchers measured attendance by independently sending enumerators on surprise visits to schools. They found attendance around 68–76% in the 1998 control group schools (bottom of Table VI). So should we worry about a tiny imbalance in nearly meaningless school-reported attendance? Perhaps so. I find that at the beginning of the experiment the school- and researcher-reported attendance correlated positively. Each 1% increase in a school’s self-reported attendance—equivalent to moving from group 2 to group 1—predicted a 3% increase in researcher-recorded attendance (p = 0.008), making the starting difference superficially capable of explaining roughly half the direct impact found in Worms. …but there are reasons to trust the Worms experiment too To start with, in response to the points above: • Joan Hamory Hicks, who manages much of the ongoing Worms follow-up project, sent me the spreadsheet used to assign the 75 schools to the three groups back in 1997. Its contents do not approximate the worst case I described, with three schools in each zone. There are eight zones, and their school counts range from four to 15. Thus, cyclical assignment did introduce substantial arbitrariness with respect to initial school enrollment. In some zones the first and smallest school went into group 1, in others group 2, in others group 3. • As for the documented imbalances, such as kids in group 1 schools being sick more often, Worms points out that these should probably make the study conservative: the groups that ultimately fared better started out worse off. • The Worms team began collecting attendance data in all three groups, in early 1998 before the first deworming visits took place. Those more-accurate numbers do not suggest imbalance across the three groups (p = 0.43). And the correlation of school-recorded attendance, which is not balanced, and researcher-recorded attendance, which is, is not especially dispositive. If you looked across a representative 75 New York City schools at two arbitrarily chosen variables were—say, fraction of students who qualify for free meals and average class size—they could easily be correlated too. Finally, when I modify a basic Miguel and Kremer attendance regression (Table IX, col. 1) to control for the imbalanced school-recorded attendance variable, it hardly perturbs the results (except by restricting the sample because of missing observations for this variable). If initial treatment-control differences in school-recorded attendance were a major factor in the celebrated impact estimates, we would expect that controlling for the former would affect the latter In addition, three observations more powerfully bolster the Worms experiment. First, I managed to identify the 75 schools and link them to a public database of primary schools in Kenya. (In email, Ted Miguel expressed concern for the privacy of the study subjects, so I will not explain how I did this nor share the school-level information I gained thereby, except the elevations discussed just below.) This gave me fresh school-level variables on which to test the balance of the Worms experiment, such as institution type (religious, central government, etc.) and precise latitude and longitude. I found little suggestion of imbalance on the new variables as a group (p= 0.7, 0.2 for overall differences between group 1 or 2 and group 3; p = 0.54 for a difference between groups 1 and 2 together and group 3, which is the split in Worms at Work). Then, with a Python program I wrote, I used the geo-coordinates of the schools to query Google for their elevations in meters above sea level. The hypothesis that the groups differed on elevation is rejected at p = 0.36, meaning once more that a hypothesis of balance on a new variable is not strongly rejected. And if we aggregate groups 1 and 2 into a single treatment group as in Worms at Work, p = 0.97. Second, after the Worms experiment finished in 2000—and all 75 schools were receiving deworming—Miguel and Kremer launched a second, truly randomized experiment in the same setting. With respect to earnings in early adulthood (our main interest), the new experiment generates similar, if less precise, results. The experiment took on a hot topic of 2001: whether to charge poor people for basic services such as schooling and health care, in order to make service provision more financially sustainable as well as more accountable to clients. The researchers took the 50 group 1 and group 2 schools from the first experiment and randomly split them into two new groups. In the new control group, children continued to receive deworming for free. In the new treatment group, for the duration of 2001, families were charged 30 shillings ($0.40) for albendazole, for soil-transmitted worms, and another 70 shillings ($0.90) for praziquantel, where warranted for schistosomiasis. In response to the “user fees,” take-up of deworming medication fell 80% in the treatment group (which therefore, ironically, received less treatment). In effect, a second and less impeachable deworming experiment had begun. Like the original, this new experiment sent ripples into the data that the Worms team collected as it tracked the former schoolchildren into adulthood. Because the user fee trial affected a smaller group—50 instead of 75 schools—for a shorter time—one year instead of an average 2.4 in the original experiment—it did not generate deworming impact estimates of the same precision. This is probably why Worms at Work gives those impact estimates less space than the ones derived from the original experiment. But they are there. And they tend to corroborate the main results. The regression that has anchored GiveWell’s cost-effectiveness analysis puts the impact of the first experiment’s 2.4 years of deworming on later wage earnings at +31% (p = 0.002). If you run the publicly available code on the publicly available data, you discover that the same regression estimates that being in the treatment arm of the second experiment cut wage earnings by 14% (albeit with less confidence: p = 0.08). The hypothesis that the two implied rates of impact are equal—31% per 2.4 years and 14% per 80% x 1 year—fits the data (p = 0.44). More generally, Worms at Work states that among 30 outcomes checked, in domains ranging from labor to health to education, the estimated long-term impacts of the two experiments agree in sign in 23 cases. The odds of that happening by chance alone are 1 in 383.[9] The third source of reinforcement for the Worms experiment is Owen Ozier’s follow-up. In 2009 and 2010, he and his assistants surveyed 2400 children in the Worms study area who were born between about 1995 and 2001. I say “about” because their birth dates were estimated by asking them how many years old they were, and if a child said in August 2009 that she was eight, that meant that she was born in 2000 or 2001. By design, the survey covered children who were too young to have been in school during the original Worms experiment, but who might have benefited indirectly, through the deworming of their older siblings and neighbors. The survey included several cognitive tests, among them Raven’s Matrices, which are best understood by looking at an example. This graph from the Ozier working paper shows the impact of Miguel and Kremer’s 1998–2000 deworming experiment on Raven’s Matrix scores of younger children, by approximate year of birth: To understand the graph, look at the right end first. The white bars extending slightly below zero say that among children born in 2001 (or maybe really 2002) those linked by siblings and neighbors to group 1 or group 2 schools scored slightly lower than those linked to group 3 schools—but not with any statistical significance. The effective lack of difference is easy to explain since by 2001, schools in all three groups were or had been receiving deworming. (Though there was that user fee experiment in 2001….) For children in the 2000 birth cohort, no comparisons are made, because of the ambiguity over whether those linked to group 3 were born in 2000, when group 3 didn’t receive deworming, or 2001, when it did. Moving to 1999, we find more statistically significant cognitive benefits for kids linked to the group 1 and 2 schools, which indeed received deworming in 1999–2000. Something similar goes for 1998. Pushing farther back, to children born before the experiment, we again find little impact, even though a few years after birth some would have had deworming-treated siblings and neighbors and some not. This suggests that the knock-on benefit for younger children was largely to confined to their first year of life. The evidence that health problems in infancy can take a long-term toll is interesting in itself. But it matters for us in another way too. Suppose you think that because the Worms experiment’s quasi-randomization failed to achieve balance, initial cross-group differences in some factor, visible or hidden, generated the Worms at Work results. Then, essentially, you must explain why that factor caused long-term gains in cognitive scores only among kids born during the experiment. If, say, children at group 1 schools were less poor at the onset of the experiment, creating the illusion of impact, we’d expect the kids at those schools to be less poor a few years before and after too. It’s not impossible to meet this challenge. I conjectured that the Worms groups were imbalanced on elevation, which differentially exposed them to the destructive flooding caused by the strong 1997–98 El Nino. But my theory foundered on the lack of convincing evidence of imbalance on elevation, which I described above. At any rate, the relevant question is not whether it is possible to construct a story for how poor randomization could falsely generate all the short- and long-term impacts found from the Worms experiment. It is how plausible such a story would be. The more strained the alternative theories, the more credible does the straightforward explanation become, that giving kids deworming pills measurably helped them. One caveat: GiveWell has not obtained Ozier’s data and code, so we have not vetted this study as much as we have Worms and Worms at Work. Summary I came to this investigation with some reason to doubt Worms and found more when I arrived. But in the end, the defenses persuade me more than the attacks. I find that: • The charge of “high risk of bias” is legitimate but vague. • Under a barrage of tests, the statistical balance of the experiment mostly survives. • The original experiment is corroborated by a second, randomized one. • There is evidence that long-term cognitive benefits are confined to children born right around the time of the experiment, a pattern that is hard to explain except as impacts of deworming. In addition, I plan to present some fresh findings in my next post that, like Ozier’s, seem to make alternative theories harder to fashion (done). When there are both reasons to doubt and reasons to trust an experiment, the right response is not to shrug one’s shoulders, or give each point pro and con a vote, or zoom out and ponder whether to side with economists or epidemiologists. The right response is to ask: what is the most plausible theory that is compatible with the entire sweep of the evidence? For me, an important criterion for plausibility is Occam’s razor: simplicity. As I see it now, the explanation that best blends simplicity and compatibility-with-evidence runs this way: the imbalances in the Worms experiment are real but small, are unlikely to explain the results, and if anything make those results conservative; thus, the reported impacts are indeed largely impacts. If one instead assumes the Worms results are artifacts of flawed experimental design, execution, and analysis, then one has to construct a complicated theory for why, e.g. the user fee experiment produces similar results, and why the benefits for non-school-age children appear confined to those born in the treatment groups around the time of differential treatment. I hope that anyone who disagrees will prove me wrong by constructing an alternative yet simple theory that explains the evidence before us. I’m less confident when it comes to generalizing from these experiments. Worms, Worms at Work, and Ozier tell us something about what happened after kids in one time and place were treated for intestinal helminths. What do those studies tell us about the effectiveness of deworming campaigns today, from Liberia to India? I’ll explore that next. Notes [1] The WHO estimates that 2 billion people carry soil-transmitted “geohelminths,” including hookworm, roundworm, and whipworm. Separately, it reports that 258 million people needed treatment for schistosomiasis which is transmitted by contact with fresh water. Children are disproportionately affected because of their play patterns and poorer hygiene. [2] Baird et al. (2016), Table IV, Panel A, row 3, estimates a 112-shilling increase over a control-group mean of 749/month. Panel B, row 1, suggest that the effect is concentrated in wage earnings. [3] Baird et al. (2016), Table IV, Panel A, row 1, col. 1, reports 0.269. Exponentiating that gives a 31% increase. [4] For an overview, I recommend Tim Harford’s graceful take. To dig in more, see the Worms authors’ reply and the posts by Berk Ozler, Chris Blattman, and my former colleagues Michael Clemens and Justin Sandefur. To really delve, read Macartan Humphreys, and Andrew Gelman’s and Miguel and Kremer’s responses thereto. [5] For literature on the impacts of these study design features on results, see the first 10 references of Schulz et al. 1995. [6] Figures obtained by dividing the “total” point estimates from the linked figures into 1. The study expresses higher benefits as lower risk estimates, in the sense that risk of bad outcomes is reduced. [7] The Baird et al. (2016) appendix defends the “list randomization” procedure more fully. [8] Deaton may have mentioned Angrist (1990) and Heckman’s critique of it. But I believe the lesson there is not about imperfect quasi-randomization but local average treatment effects. [9] For the cumulative distribution function of the binomial distribution, F(30,7,0.5) = .00261. The post Why I mostly believe in Worms appeared first on The GiveWell Blog. ### Our updated top charities for giving season 2016 Mon, 11/28/2016 - 22:58 We have refreshed our top charity rankings and recommendations. We now have seven top charities: our four top charities from last year and three new additions. We have also added two new organizations to our list of charities that we think deserve special recognition (previously called “standout” charities). Instead of ranking organizations, we rank funding gaps, which take into account both charities’ overall quality and cost-effectiveness and what more funding would enable them to do. We also account for our expectation that Good Ventures, a foundation we work closely with, will provide significant support to our top charities ($50 million in total). Our recommendation to donors is based on the relative value of remaining gaps once Good Ventures’ expected giving is taken into account. We believe that the remaining funding gaps offer donors outstanding opportunities to accomplish good with their donations.
Our top charities and recommendations for donors, in brief
Top charities
We are continuing to recommend the four top charities we did last year and have added three new top charities:
1. Against Malaria Foundation (AMF)
2. Schistosomiasis Control Initiative (SCI)
3. END Fund for work on deworming (added this year)
4. Malaria Consortium for work on seasonal malaria chemoprevention (added this year)
5. Sightsavers for work on deworming (added this year)
6. Deworm the World Initiative, led by Evidence Action
7. GiveDirectly
We have ranked our top charities based on what we see as the value of filling their remaining funding gaps. We do not feel a particular need for individuals to divide their allocation across all of the charities, since we are expecting Good Ventures will provide significant support to each. For those seeking our recommended allocation, we recommend giving 75% to the Against Malaria Foundation and 25% to the Schistosomiasis Control Initiative, which we believe to have the most valuable unfilled funding gaps.
Our recommendation takes into account the amount of funding we think Good Ventures will grant to our top charities, as well as accounting for charities’ existing cash on hand, and expected fundraising (before gifts from donors who follow our recommendations). We recommend charities according to how much good additional donations (beyond these sources of funds) can do.
Other Charities Worthy of Special Recognition
As with last year, we also provide a list of charities that we believe are worthy of recognition, though not at the same level (in terms of likely good accomplished per dollar) as our top charities (we previously called these organizations “standouts”). They are not ranked, and are listed in alphabetical order.
Below, we provide:
• An explanation of major changes in the past year that are not specific to any one charity. More
• A discussion of our approach to room for more funding and our ranking of charities’ funding gaps. More
• Summary of key considerations for top charities. More
• Detail on each of our new top charities, including an overview of what we know about their work and our understanding of each organization’s room for more funding. More
• Detail on each of the top charities we are continuing to recommend, including an overview of their work, major changes over the past year and our understanding of each organization’s room for more funding. More
• The process we followed that led to these recommendations. More
• A brief update on giving to support GiveWell’s operations vs. giving to our top charities. More
Conference call to discuss recommendations
We are planning to hold a conference call at 5:30pm ET/2:30pm PT on Thursday, December 1 to discuss our recommendations and answer questions.
If you’d like to join the call, please register using this online form. If you can’t make this date but would be interested in joining another call at a later date, please indicate this on the registration form.
Major changes in the last 12 months
Below, we summarize the major causes of changes to our recommendations (since last year).
Most important changes in the last year:
• We engaged with more new potential top charities this year than we have in several years (including both inviting organizations to participate in our process and responding to organizations that reached out to us). This work led to three additional top charities. We believe our new top charities are outstanding giving opportunities, though we note that we are relatively less confident in these organizations than in our other top charities—we have followed each of the top charities we are continuing to recommend for five or more years and have only began following the new organizations in the last year or two.
• Overall, our top charities have more room for more funding than they did last year. We now believe that AMF, SCI, Deworm the World, and GiveDirectly have strong track records of scaling their programs. Our new top charities add additional room for more funding and we believe that the END Fund and Malaria Consortium, in particular, could absorb large amounts of funding in the next year. We expect some high-value opportunities to go unfilled this year.
• Last year, we wrote about the tradeoff between Good Ventures accomplishing more short-term good by filling GiveWell’s top charities’ funding gaps and the long-term good of saving money for other opportunities (as well as the good of not crowding out other donors, who, by nature of their smaller scale of giving, may have fewer strong opportunities). Due to the growth of the Open Philanthropy Project this year and its increased expectation of the size and value of the opportunities it may have in the future, we expect Good Ventures to set a budget of $50 million for its contributions to GiveWell top charities. The Open Philanthropy Project plans to write more about this in a future post on its blog. Room for more funding analysis Types of funding gaps We’ve previously outlined how we categorize charities’ funding gaps into incentives, capacity-relevant funding, and execution levels 1, 2, and 3. In short: • Incentive funding: We seek to ensure that each top charity receives a significant amount of funding (and to a lesser extent, that charities worthy of special recognition receive funding as well). We think this is important for long-run incentives to encourage other organizations to seek to meet these criteria. This year, we are increasing the top charity incentive from$1 million to $2.5 million. • Capacity-relevant funding: Funding that we believe has the potential to create a significantly better giving opportunity in the future. With one exception, we don’t believe that any of our top charities have capacity-relevant gaps this year. We have designated the first$2 million of Sightsavers’ room for more funding as capacity-relevant because seeing results from a small number of Sightsavers deworming programs would significantly expand the evidence base for its deworming work and has the potential to lead us to want to support Sightsavers at a much higher level in the future (more).
• Execution funding: Funding that allows charities to implement more of their core programs. We separated this funding into three levels: level 1 is the amount at which we think there is a 50% chance that the charity will be bottlenecked by funding; level 2 is a 20% chance of being bottlenecked by funding, and level 3 is a 5% chance.
Ranking funding gaps
The first million dollars to a charity can have a very different impact from, e.g., the 20th millionth dollar. Accordingly, we have created a ranking of individual funding gaps that accounts for both (a) the quality of the charity and the good accomplished by its program per dollar, and (b) whether a given level of funding is capacity-relevant and whether it is highly or only marginally likely to be needed in the coming year.
The below table lays out our ranking of funding gaps. When gaps have the same “Priority,” this indicates that they are tied. When gaps are tied, we recommend filling them by giving each equal dollar amounts until one is filled, and then following the same procedure with the remaining tied gaps. See footnote for more.*
The table below includes the amount we expect Good Ventures to give to our top charities. For reasons the Open Philanthropy Project will lay out in another post, we expect that Good Ventures will cap its giving to GiveWell’s top charities this year at $50 million. We expect that Good Ventures will start with funding the highest-rated gaps and work its way down, in order to accomplish as much good as possible. Note that we do not always place a charity’s full execution level at the same rank and in some cases rank the first portion of a given charity’s execution level ahead of the remainder. This is because many of our top charities are relatively close to each other in terms of their estimated cost-effectiveness (and thus, the value of their execution funding). For reasons we’ve written about in the past, we believe it is inappropriate to put too much weight on relatively small differences in explicit cost-effectiveness estimates. Because we expect that there are diminishing returns to funding, we would guess that the cost-effectiveness of a charity’s funding gap falls as it receives more funding. Priority Charity Amount, in millions USD (of which, expected from Good Ventures*) Type Comment 1 Deworm the World$2.5 (all) Incentive – 1 SCI $2.5 (all) Incentive – 1 Sightsavers$2.5 (all) Incentive – 1 AMF $2.5 (all) Incentive – 1 GiveDirectly$2.5 (all) Incentive – 1 END Fund $2.5 (all) Incentive – 1 Malaria Consortium$2.5 (all) Incentive – 1 Other charities worthy of special recognition $1.5 (all) Incentive$250,000 each for six charities 3 SCI $6.5 (all) Fills rest of execution level 1 Highest cost-effectiveness of remaining level 1 gaps 4 AMF$8.5 (all) First part of execution level 1 Similar cost-effectiveness to END Fund and Sightsavers and greater understanding of the organization. Expect declining cost-effectiveness within Level 1, and see other benefits (incentives) to switching to END Fund and Sightsavers after this point. 5 END Fund $2.5 (all) Middle part of execution level 1 Given relatively limited knowledge of charity, capping total recommendation at$5 million 6 Sightsavers $0.5 (all) Fills rest of execution level 1 Similar cost-effectiveness to AMF and the END Fund 7 Deworm the World$2.0 (all) Fills execution level 2 Highest-ranked level 2 gap. Highest cost-effectiveness and confidence in organization 8 SCI $4.5 (all) First part of execution level 2 Highest cost-effectiveness of remaining level 2 gaps 9 Malaria Consortium$2.5 (all) Part of execution level 1 Given relatively limited knowledge of charity, capping total recommendation at $5 million 10 AMF$18.6 ($5.1) Part of execution level 1 Expect declining cost-effectiveness within level 1; ranked other gaps higher due to this and incentive effects 11 SCI$4.5 ($0) Fills execution level 2 Roughly expected to be more cost-effective than the remaining$49 million of AMF level 1
* Also includes $1 million that GiveWell holds for grants to top charities. More below. Summary of key considerations for top charities The table below summarizes the key considerations for our seven top charities. More detail is provided below as well as in the charity reviews. Consideration AMF Malaria Consortium Deworm the World END Fund SCI Sightsavers GiveDirectly Estimated cost-effectiveness (relative to cash transfers) ~4x ~4x ~10x ~4x ~8x ~5x Baseline Our level of knowledge about the organization High Relatively low High Relatively low High Relatively low High Primary benefits of the intervention Under-5 deaths averted and possible increased income in adulthood Possible increased income in adulthood Immediate increase in consumption and assets Ease of communication Moderate Strong Strong Strong Moderate Moderate Strongest Ongoing monitoring and likelihood of detecting future problems Moderate Moderate Strong Moderate Moderate Moderate Strongest Room for more funding, after expected funding from Good Ventures and donors who give independently of our recommendation High: less than half of Execution Level 1 filled High: not quantified, but could likely use significantly more funding Low: Execution Levels 1 and 2 filled High: half of Execution Level 1 filled Moderate: Execution Level 1 and some of Level 2 filled Moderate: Execution Level 1 filled Very high: less than 15% of Execution Level 1 filled Our recommendation to donors If Good Ventures uses a budget of$50 million to top charities and follows our prioritization of funding gaps, it will make the following grants (in millions of dollars, rounded to one decimal place):
• AMF: $15.1 • Deworm the World:$4.5
• END Fund: $5.0 • GiveDirectly:$2.5
• Malaria Consortium: $5.0 • SCI:$13.5
• Sightsavers: $3.0 • Grants to other charities worthy of special recognition:$1.5
We also hold about $1 million that is restricted to granting out to top charities. We plan to use this to make a grant to AMF, which is the next funding gap on the list after the expected grants from Good Ventures. We estimate that non-Good Ventures donors will give approximately$27 million between now and the start of June 2017; we expect to refresh our recommendations to donors in mid-June. Of this, we expect $18 million will be allocated according to our recommendation for marginal donations, while$9 million will be given based on our top charity list—this $9 million is considered ‘expected funding’ for each charity and therefore subtracted from their room for more funding.$18 million spans two gaps in our prioritized list, so we are recommending that donors split their gift, with 75% going to AMF and 25% going to SCI, or give to GiveWell for making grants at our discretion and we will use the funds to fill in the next highest priority gaps.
Details on new top charities
Before this year, our top charity list had remained nearly the same for several years. This means that we have spent hundreds of hours talking to these groups, reading their documents, visiting their work in the field, and modeling their cost-effectiveness. We have spent considerably less time on our new top charities, particularly Malaria Consortium, and have not visited their work in the field (though we met with Sightsavers’ team in Ghana). We believe our new top charities are outstanding giving opportunities, though we think there is a higher risk that further investigation will lead to changes in our views about these groups.
Four of our top charities, including two new top charities, support programs that treat schistosomiasis and soil-transmitted helminthiasis (STH) (“deworming”). We estimate that SCI and Deworm the World’s deworming programs are more cost effective than mass bednet campaigns, but our estimates are subject to substantial uncertainty. For Sightsavers and END Fund, our greater uncertainty about cost per treatment and prevalence of infection in the areas where they work leads us to the conclusion that the cost-effectiveness of their work is on par with that of bednets. It’s important to note that we view deworming as high expected value, but this is due to a relatively low probability of very high impact. Our cost-effectiveness model implies that most staff members believe you should use a multiplier of less than 1% compared to the impact (increased income in adulthood) found in the original trials—this could be thought of as assigning some chance that deworming programs have no impact, and some chance that the impact exists but will be smaller than was measured in those trials. Full discussion in this blog post. Our 2016 cost-effectiveness analysis is here.
This year, David Roodman conducted an investigation into the evidence for deworming’s impact on long-term life outcomes. David will write more about this in a future post, but in short, we think the strength of the case for deworming is similar to last year’s, with some evidence looking weaker, new evidence that was shared with us in an early form this year being too preliminary to incorporate, and a key piece of evidence standing up to additional scrutiny.
END Fund (for work on deworming)
Our full review of END Fund is here.
Overview
The END Fund (end.org) manages grants, provides technical assistance, and raises funding for controlling and eliminating neglected tropical diseases (NTDs). We have focused our review on its support for deworming.
About 60% of the treatments the END Fund has supported have been deworming treatments, while the rest have been for other NTDs. The END Fund has funded SCI, Deworm the World, and Sightsavers. We see the END Fund’s value-add as a GiveWell top charity as identifying and providing assistance to programs run by organizations other than those we separately recommend, and our review of the END Fund has excluded results from charities on our top charity list.
We have not yet seen monitoring results on the number of children reached in END Fund-supported programs. The END Fund has instituted a requirement that grantees conduct coverage surveys and the first results will be available in early 2017. While we generally put little weight on plans for future monitoring, we feel that the END Fund’s commitment is unusually credible because surveys are already underway or upcoming in the next few months, we are familiar enough with the type of survey being used (from research on other deworming groups) that we were able to ask critical questions, and the END Fund provided specific answers to our questions.
We have more limited information on some questions for the END Fund than we do for the top charities we have recommended for several years. We do not have a robust cost per treatment figure, and also have limited information on infection prevalence and intensity.
Funding gap
We estimate that the END Fund could productively use between $10 million (50% confidence) and$22 million (5% confidence) in the next year to expand its work on deworming. By our estimation, about a third of this would be used to fund other NTD programs.
This estimate is based on (a) a list of deworming funding opportunities that the END Fund had identified as of October and its expectation of identifying additional opportunities over the course of the year (excluding opportunities to grant funding to Deworm the World, SCI, or Sightsavers, which we count in those organizations’ room for more funding); and (b) our rough estimate of how much funding the END Fund will raise. The END Fund is a fairly new organization whose revenue comes primarily from a small number of major donors so it is hard to predict how much funding it will raise.
The END Fund’s list of identified opportunities includes both programs that END Fund has supported in past years and opportunities to get new programs off the ground.
Sightsavers (for work on deworming)
Our full review of Sightsavers is here.
Overview
Sightsavers (sightsavers.org) is a large organization with multiple program areas that focuses on preventing avoidable blindness and supporting people with impaired vision. Our review focuses on Sightsavers’ work to prevent and treat neglected tropical diseases (NTDs) and, more specifically, advocating for, funding, and monitoring deworming programs. Deworming is a fairly new addition to Sightsavers’ portfolio; in 2011, it began delivering some deworming treatments through NTD programs that had been originally set up to treat other infections.
We believe that deworming is a highly cost-effective program and that there is moderately strong evidence that Sightsavers has succeeded in achieving fairly high coverage rates for some of its past NTD programs. We feel that the monitoring data we have from SCI and Deworm the World is somewhat stronger than what we have from Sightsavers—in particular, the coverage surveys that Sightsavers has done to date were on NTD programs that largely did not include deworming. Sightsavers plans to do annual coverage surveys on programs that are supported by GiveWell-influenced funding.
We have more limited information on some questions for Sightsavers than we do for the top charities we have recommended for several years. We do not have a robust cost-per-treatment figure, though the information we have suggests that it is in the same range as the cost-per-treatment figures for SCI and Deworm the World. We also have limited information on infection prevalence and intensity in the places Sightsavers works. This limits our ability to robustly compare Sightsavers’ cost effectiveness to other top charities, but our best guess is that the cost-effectiveness of the deworming charities we recommend is similar.
Funding gap
We believe Sightsavers could productively use or commit between $3.0 million (50% confidence) and$10.1 million (5% confidence) in funding restricted to programs with a deworming component in 2017.
This estimate is based on (a) a list of deworming funding opportunities that Sightsavers created for us; and (b) our understanding that Sightsavers would not allocate much unrestricted funding to these opportunities in the absence of GiveWell funding. It’s difficult to know whether other funders might step in to fund this work, but Sightsavers believes that is unlikely and deworming has not been a major priority for Sightsavers to date.
Sightsavers’ list of opportunities includes both adding deworming to existing NTD mass distribution programs and establishing new integrated NTD programs that would include deworming and spans work in Nigeria, Guinea-Bissau, Democratic Republic of Congo, Guinea, Cameroon, Cote d’Ivoire, and possibly South Sudan.
Malaria Consortium (for work on seasonal malaria chemoprevention)
Our full review of Malaria Consortium is here.
Overview
Malaria Consortium (malariaconsortium.org) works on preventing, controlling, and treating malaria and other communicable diseases in Africa and Asia. Our review has focused exclusively on its seasonal malaria chemoprevention (SMC) programs, which distribute preventive anti-malarial drugs to children 3-months to 59-months old in order to prevent illness and death from malaria.
The evidence for SMC appears strong (stronger than deworming and not quite as strong as bednets), but we have not yet examined the intervention at nearly the same level that we have for bednets, deworming, unconditional cash transfers, or other priority programs. The randomized controlled trials on SMC that we considered showed a decrease in cases of clinical malaria but were not adequately powered to find an impact on mortality.
Malaria Consortium and its partners have conducted studies in most of the countries where it has worked to determine whether its programs have reached a large proportion of children targeted. These studies have generally found positive results, but leave us with some remaining questions about the program’s impact.
Overall, we have more limited information on some questions for Malaria Consortium than we do for the top charities we have recommended for several years. We have remaining questions on cost per child per year and on offsetting effects from possible drug resistance and disease rebound.
Funding gap
We have not yet attempted to estimate Malaria Consortium’s maximum room for more funding. We would guess that Malaria Consortium could productively use at least an additional $30 million to scale up its SMC activities over the next three to four years. We have a general understanding of where additional funds would be used but have not yet asked for a high level of detail on potential bottlenecks to scaling up. We do not believe Malaria Consortium has substantial unrestricted funding available for scaling up its support of SMC programs and expect its restricted funding for SMC to remain steady or decrease in the next few years. Details on top charities we are continuing to recommend Against Malaria Foundation (AMF) Our full review of AMF is here. Background AMF (againstmalaria.com) provides funding for long-lasting insecticide-treated net distributions (for protection against malaria) in developing countries. There is strong evidence that distributing nets reduces child mortality and malaria cases. AMF provides a level of public disclosure and tracking of distributions that we have not seen from any other net distribution charity. We estimate that AMF’s program is roughly 4 times as cost effective as cash transfers (see our cost-effectiveness analysis). This estimate seeks to incorporate many highly uncertain inputs, such as the effect of mosquito resistance to the insecticides used in nets on how effective they are at protecting against malaria, how differences in malaria burden affect the impact of nets, and how to discount for displacing funding from other funders, among many others. Important changes in the last 12 months In 2016, AMF significantly increased the number and size of distributions it committed funding to. Prior to 2015, it had completed (large-scale) distributions in two countries, Malawi and Democratic Republic of Congo (DRC). In 2016, it completed a distribution in Ghana and committed to supporting distributions in an additional three countries, including an agreement to contribute$28 million to a campaign in Uganda, its largest agreement to date by far.
AMF has continued to collect and share information on its past large-scale distributions. This includes both data from registering households to receive nets (and, in some cases, data on the number of nets each household received) and follow-up surveys to determine whether nets are in place and in use. Our research in 2016 has led us to moderately weaken our assessment of the quality of AMF’s follow up surveys. In short, we learned that the surveys in Malawi have not used fully randomized selection of households and that the first two surveys in DRC were not reliable (full discussion in this blog post). We expect to see follow-up surveys from Ghana and DRC in the next few months that could expand AMF’s track record of collecting this type of data. We also learned that AMF has not been carrying out data audits in the way we believed it was (though this was not a major surprise as we had not asked AMF for details of the auditing process previously).
AMF has generally been communicative and open with us. We noted in our mid-year update that AMF had been slower to share documentation for some distributions; however, we haven’t had concerns about this in the second half of the year.
In August 2016, four GiveWell staff visited Ghana where an AMF-funded distribution had recently been completed. We met with AMF’s program manager, partner organizations, and government representatives and visited households in semi-urban and rural areas (notes and photos from our trip).
Our estimate of the cost-effectiveness of nets has fallen relative to cash transfers since our mid-year update. At that point, we estimated that nets were ~10x as cost-effective as cash transfers, and now we estimate that they are ~4x as cost-effective as cash transfers. This change was partially driven by changes in GiveWell staff’s judgments on the tradeoff between saving lives of children under five and improving lives (through increased income and consumption) in our model, and partially driven by AMF beginning to fund bed net distributions in countries with lower malaria burdens than Malawi or DRC.
Funding gap
AMF currently holds $17.8 million, and expects to commit$12.9 million of this soon. We estimate it will receive an additional $4 million by June 2017 ($2 million from donors not influenced by GiveWell and $2 million from donors who give based on our top charity list) that it could use for future distributions. Together, we expect that AMF will have about$9 million for new spending and commitments in 2017.
We estimate that AMF could productively use or commit between $87 million (50% confidence) and$200 million (5% confidence) in the next year. We arrived at this estimate from a rough estimate of the total Africa-wide funding gap for nets in the next three years (from the African Leaders Malaria Alliance)—estimated at $125 million per year. The estimate is rough in large part because the Global Fund to Fight AIDS, Tuberculosis and Malaria, the largest funder of LLINs, works on three-year cycles and has not yet determined how much funding it will allocate for LLINs for 2018-2020. We talked to people involved in country-level planning of mass net distributions and the Global Fund, who agreed with the general conclusion that there were likely to be large funding gaps in the next few years. In mid-2016, AMF had to put some plans on hold due to lack of funding. We now believe that AMF has a strong track record of finding distribution partners to work with and coming to agreements with governments, and we do not expect that to be a limiting factor for AMF. The main risks we see to AMF’s ability to scale are the possibility that funding from other funders is sufficient (since our estimate of the gap is quite rough), the likelihood that government actors have limited capacity for discussions with AMF during a year in which they are applying for Global Fund funding, AMF’s staff capacity to manage discussions with additional countries (it has only a few staff members), and whether gaps will be spread across many countries or located in difficult operating environments. We believe the probability of any specific one of these things impeding AMF’s progress is low. We believe there are differences in cost-effectiveness within execution level 1 and believe the value of filling the first part of AMF’s gap may be higher than additional funding at higher levels. This is because AMF’s priorities include committing to large distributions in the second half of 2019 and 2020, which increases the uncertainty about whether funding would have been available from another source. We and AMF have discussed a few possibilities for how AMF might fill funding gaps. AMF favors an approach where it purchases a large number of nets for a small number of countries. This approach has some advantages including efficiency for AMF and leverage in influencing how distributions are carried out. Our view is that the risk of displacing a large amount of funding from other funders using this approach outweighs the benefits. If AMF did displace a large amount of funding which would otherwise have gone to nets, that could make donations applied to these distributions considerably less cost-effective. More details on our assessment of AMF’s funding gap are in our full review. Deworm the World Initiative, led by Evidence Action Our full review of Deworm the World is here. Background Deworm the World (evidenceaction.org/#deworm-the-world), led by Evidence Action, advocates for, supports, and evaluates deworming programs. It has worked in India and Kenya for several years and has recently expanded to Nigeria, Vietnam, and Ethiopia. Deworm the World retains or hires monitors who visit schools during and following deworming campaigns. We believe its monitoring is the strongest we have seen from any organization working on deworming. Monitors have generally found high coverage rates and good performance on other measures of quality. As noted above, we believe that Deworm the World is slightly more cost-effective than SCI, more cost-effective than AMF and the other deworming charities, and about 10 times as cost-effective as cash transfers. Important changes in the last 12 months Deworm the World has made somewhat slower progress than expected in expanding to new countries. In late 2015, Good Ventures, on GiveWell’s recommendation, made a grant of$10.8 million to Deworm the World to fund its execution level 1 and 2 gaps. Execution level 1 funding was to give Deworm the World sufficient resources to expand into Pakistan and another country. Deworm the World has funded a prevalence survey in Pakistan, which is a precursor to funding treatments in the country. It has not expanded into a further country that it was not already expecting to work in. As a result, we believe that Deworm the World has somewhat limited room for more funding this year.
Overall, we have more confidence in our understanding of Deworm the World and its parent organization Evidence Action’s spending, revenues, and financial position than we did in previous years. While trying to better understand this information this year, we found several errors. We are not fully confident that all errors have been corrected, though we are encouraged by the fact that we are now getting enough information to be able to spot inconsistencies. Evidence Action has been working to overhaul its financial system this year.
Our review of Deworm the World has focused on two countries, Kenya and India, where it has worked the longest. In 2016, we saw the first results of a program in another country (Vietnam), as well as continued high-quality monitoring from Kenya and India. The Vietnam results indicate that Deworm the World is using similar monitoring processes in new countries as it has in Kenya and India and that results in Vietnam have been reasonably strong.
Evidence Action hired Jeff Brown (formerly Interim CEO of the Global Innovation Fund) as CEO in 2015. Recently Evidence Action announced that he has resigned and has not yet been replaced. Our guess is this is unlikely to be disruptive to Deworm the World’s work; Grace Hollister remains Director of the Deworm the World Initiative.
Funding gap
We believe that there is a 50% chance that Deworm the World will be slightly constrained by funding in the next year and that additional funds would increase the chances that it is able to take advantage of any high-value opportunities it encounters. We estimate that if it received an additional $4.5 million its chances of being constrained by funding would be reduced to 20% and at$13.4 million in additional funding, this would be reduced to 5%.
In the next year, Deworm the World expects to expand its work in India and Nigeria and may have opportunities to begin treatments in Pakistan and Indonesia. It is also interested in using unrestricted funding to continue its work in Kenya, and puts a high priority on this program. Its work in Kenya has to date been funded primarily by the Children’s Investment Fund Foundation (CIFF) and this support is set to expire in mid 2017. It is unclear to us whether CIFF will continue providing funding for the program and, if so, for how long. Due to the possibility that Deworm the World unrestricted funding may displace funding from CIFF, and, to a lesser extent, the END Fund and other donors, we consider the opportunity to fund the Kenya program to be less cost-effective in expectation than it would be if we were confident in the size of the gap.
More details in our full review.
Schistosomiasis Control Initiative (SCI)
Our full review of SCI is here.
Background
SCI (imperial.ac.uk/schisto) works with governments in sub-Saharan Africa to create or scale up deworming programs. SCI’s role has primarily been to identify recipient countries, provide funding to governments for government-implemented programs, provide advisory support, and conduct research on the process and outcomes of the programs.
SCI has conducted studies in about two-thirds of the countries it works in to determine whether its programs have reached a large proportion of children targeted. These studies have generally found moderately positive results, but leave us with some remaining questions about the program’s impact.
As noted above, we believe that SCI is slightly less cost-effective than Deworm the World, more cost-effective than AMF and the other deworming charities, and about 8 times as cost-effective as cash transfers.
Important changes in the last 12 months
In past years, we’ve written that we had significant concerns about SCI’s financial reporting and financial management, and the clarity of our communication with SCI. In June, we wrote that we had learned of two substantial errors in SCI’s financial managment and reporting that began in 2015. We also noted that we thought that SCI’s financial management and financial reporting, as well as the clarity of its communication with us overall, had improved significantly. In the second half of the year, SCI communicated clearly with us about its plans for deworming programs next year and its room for more funding.
SCI reports that it has continued to scale up its deworming programs over the past year and that it plans to start up new deworming programs in two states in Nigeria before the end of its current budget year.
This year, SCI has shared a few more coverage surveys from deworming programs in Ethiopia, Madagascar, and Mozambique that found reasonably high coverage.
Professor Alan Fenwick, Founder and Director of SCI for over a decade, retired from his position this year, though will continue his involvement in fundraising and advocacy. The former Deputy Director, Wendy Harrison, is the new Director.
Funding gap
We estimate that SCI could productively use or commit a maximum of between $9.0 million (50% confidence) and$21.4 million (5% confidence) in additional unrestricted funding in its next budget year.
Its funding sources have been fairly steady in recent years with about half of its revenue in the form of restricted grants, particularly from the UK government’s Department for International Development (this grant runs through 2018), and half from unrestricted donations, a majority of which were driven by GiveWell’s recommendation. We estimate that SCI will have around $5.4 million in unrestricted funding available to allocate to its 2017-18 budget year (in addition to$6.5 million in restricted funding).
SCI has a strong track record of starting and scaling up programs in a large number of countries. SCI believes it could expand significantly with additional funding, reaching more people in the countries it works in and expanding to Nigeria and possibly Chad.
More details in our full review.
GiveDirectly
Our full review of GiveDirectly is here.
Background
GiveDirectly (givedirectly.org) transfers cash to households in developing countries via mobile phone-linked payment services. It targets extremely low-income households. The proportion of total expenses that GiveDirectly has delivered directly to recipients is approximately 82% overall. We believe that this approach faces an unusually low burden of proof, and that the available evidence supports the idea that unconditional cash transfers significantly help people.
We believe GiveDirectly to be an exceptionally strong and effective organization, even more so than our other top charities. It has invested heavily in self-evaluation from the start, scaled up quickly, and communicated with us clearly. It appears that GiveDirectly has been effective at delivering cash to low-income households. GiveDirectly has one major randomized controlled trial (RCT) of its impact and took the unusual step of making the details of this study public before data was collected (more). It continues to experiment heavily, with the aim of improving how its own and government cash transfer programs are run. It has recently started work on evaluations that benchmark programs against cash with the aim of influencing the broader international aid sector to use its funding more cost-effectively.
We believe cash transfers are less cost-effective than the programs our other top charities work on, but have the most direct and robust case for impact. We use cash transfers as a “baseline” in our cost-effectiveness analyses and only recommend other programs that are robustly more cost effective than cash.
Important changes in the last 12 months
GiveDirectly has continued to scale up significantly, reaching a pace of delivering $21 million on an annual basis in the first part of 2016 and expecting to reach a pace of$50 million on an annual basis at the end of 2016. It has continued to share informative and detailed monitoring information with us. Given its strong and consistent monitoring in the past, we have taken a lighter-touch approach to evaluating its processes and results this year.
The big news for GiveDirectly this year was around partnerships and experimentation. It expanded into Rwanda (its third country) and launched a program to compare, with a randomized controlled trial, another aid program to cash transfers (details expected to be public next year). The program is being funded by a large institutional funder and Google.org. It expects to do additional “benchmarking” studies with the institutional funder, using funds from Good Ventures’ 2015 $25 million grant, over the next few years. It also began fundraising for and started a pilot of a universal basic income (UBI) guarantee—a program providing long-term, ongoing cash transfers sufficient for basic needs, which will be evaluated with a randomized controlled trial comparing the program to GiveDirectly’s standard lump sum transfers. The initial UBI program and study is expected to cost$30 million. We estimate that it is less cost-effective than GiveDirectly’s standard model, but it could have impact on policy makers that isn’t captured in our analysis.
We noted previously that Segovia, a for-profit technology company that develops software for cash transfer program implementers and which was started and is partially owned by GiveDirectly’s co-founders, would provide its software for free to GiveDirectly to avoid conflicts of interest. However, in 2016, after realizing that providing free services to GiveDirectly was too costly for Segovia (customizing the product for GiveDirectly required much more Segovia staff time than initially expected), the two organizations negotiated a new contract under which GiveDirectly will compensate Segovia for its services. GiveDirectly wrote about this decision here. GiveDirectly told us that it recused all people with ties to both organizations from this decision and evaluated alternatives to Segovia. Although we believe that there are possibilities for bias in this decision and in future decisions concerning Segovia, and we have not deeply vetted GiveDirectly’s connection with Segovia, overall we think GiveDirectly’s choices were reasonable. However, we believe that reasonable people might disagree with this opinion, which is in part based on our personal experience working closely with GiveDirectly’s staff for several years.
Funding gap
We believe that GiveDirectly is very likely to be constrained by funding next year. GiveDirectly has been rapidly building its capacity to enroll recipients and deliver funds, while some of its revenue has been redirected to its universal basic income guarantee program (either because of greater donor interest in that program or by GiveDirectly focusing its fundraising efforts on it).
We expect GiveDirectly to have about $20 million for standard cash transfers in its 2017 budget year. This includes raising about$15.8 million from non-GiveWell-influenced sources between now and halfway through its 2017 budget year (August 2017) and $4 million from donors who give because GiveDirectly is on GiveWell’s top charity list.$4 million is much less than GiveWell-influenced donors gave in the last year. This is because several large donors are supporting GiveDirectly’s universal basic income guarantee program this year and because one large donor gave a multi-year grant that we don’t expect to repeat this year.
GiveDirectly is currently on pace (with no additional hiring) to have four full teams operating its standard cash transfer model in 2017. To fully utilize four teams, it would need $28 million more than we expect it to raise. We accordingly expect that GiveDirectly will downsize somewhat in 2017, because we do not project it raising sufficient funds to fully utilize the increased capacity it has built to transfer money. Given recent growth, we believe that GiveDirectly could easily scale beyond four teams and we estimate that at$46 million more than we expect it to raise (66 million total for standard transfers), it would have a 50% chance of being constrained by funding. Other charities worthy of special recognition Last year, we recommended four organizations as “standouts.” This year we are calling this list “other charities worthy of special recognition.” We’ve added two organizations to the list: Food Fortification Initiative and Project Healthy Children. Although our recommendation to donors is to give to our top charities over these charities, they stand out from the vast majority of organizations we have considered in terms of the evidence base for their work and their transparency, and they offer additional giving options for donors who feel highly aligned with their work. We don’t follow these organizations as closely as we do our top charities. We generally have one or two calls per year with each group, publish notes on our conversations, and follow up on any major developments. We provide brief updates on these charities below: • Organizations that have conducted randomized controlled trials of their programs: • Development Media International (DMI). DMI produces radio and television programming in developing countries that encourages people to adopt improved health practices. It conducted a randomized controlled trial (RCT) of its program and has been highly transparent, including sharing preliminary results with us. The results of its RCT were mixed, with a household survey not finding an effect on mortality (it was powered to detect a reduction of 15% or more) and data from health facilities finding an increase in facility visits. (The results, because the trial was only completed in the last year, are not yet published.) We believe there is a possibility that DMI’s work is highly cost-effective, but we see no solid evidence that this is the case. We noted last year that DMI was planning to conduct another survey for the RCT in late 2016; it has decided not to move forward with this, but is interested in conducting new research studies in other countries, if it is able to raise the money to do so. It is our understanding that DMI will be constrained by funding in the next year. Our full review of DMI, with conversation notes and documents from 2016, is here. • Living Goods. Living Goods recruits, trains, and manages a network of community health promoters who sell health and household goods door-to-door in Uganda and Kenya and provide basic health counseling. They sell products such as treatments for malaria and diarrhea, fortified foods, water filters, bednets, clean cookstoves, and solar lights. Living Goods completed a randomized controlled trial of its program and measured a 27% reduction in child mortality. Our best guess is that Living Goods’ program is less cost-effective than our top charities, with the possible exception of cash. Living Goods is scaling up its program and may need additional funding in the future, but has not yet been limited by funding. We published an update on Living Goods in mid-2016. Our 2014 review of Living Goods is here. • Organizations working on micronutrient fortification: We believe that food fortification with certain micronutrients can be a highly effective intervention. For each of these organizations, we believe they may be making a significant difference in the reach and/or quality of micronutrient fortification programs but we have not yet been able to establish clear evidence of their impact. The limited analysis we have done suggests that these programs are likely not significantly more cost-effective than our top charities—if they were, we might put more time into this research or recommend a charity based on less evidence. • Food Fortification Initiative (FFI). FFI works to reduce micronutrient deficiencies (especially folic acid and iron deficiencies) by doing advocacy and providing assistance to countries as they design and implement flour and rice fortification programs. We have not yet completed a full evidence review of iron and folic acid fortification, but our initial research suggests it may be competitively cost effective with our other priority programs. Because FFI typically provides support alongside a number of other actors and its activities vary widely among countries, it is difficult to assess the impact of its work. Our full review is here. • Global Alliance for Improved Nutrition (GAIN) – Universal Salt Iodization (USI) program. GAIN’s USI program supports national salt iodization programs. We have spent the most time attempting to understand GAIN’s impact in Ethiopia. Overall, we would guess that GAIN’s activities played a role in the increase in access to iodized salt in Ethiopia, but we do not yet have confidence about the extent of GAIN’s impact. It is our understanding that GAIN’s USI work will be constrained by funding in the next year. Our review of GAIN, published in 2016 based on research done in 2015, is here. • IGN. Like GAIN-USI, IGN supports (via advocacy and technical assistance rather than implementation) salt iodization. IGN is small, and GiveWell-influenced funding has made up a large part of its funding in the past year. This year, we published an update on our investigation into IGN’s work in select countries in 2015 and notes from our conversation with IGN to learn about its progress in 2016 and plans for 2017. It is our understanding that IGN will be constrained by funding in the next year. Our review of IGN, from 2014, is here. • Project Healthy Children (PHC). PHC aims to reduce micronutrient deficiencies by providing assistance to small countries as they design and implement food fortification programs. Our review is preliminary and in particular we do not have a recent update on how PHC would use additional funding. Our review of PHC, published in 2016 but based on information collected in 2015, is here. Our research process in 2016 We plan to detail the work we completed this year in a future post as part of our annual review process. Much of this work, particularly our experimental work and work on prioritizing interventions for further investigation, is aimed at improving our recommendations in future years. Here we highlight the key research that led to our current recommendations. See our process page for our overall process. • As in previous years, we did intensive follow up with each of our top charities, including publishing updated reviews mid-year. We had several conversations by phone with each organization, met in person with Deworm the World, SCI, and AMF (over the course of a 4-day site visit to Ghana), and reviewed documents they shared with us. • In 2015 and 2016, we sought to expand top charity room for more funding and consider alternatives to our top charities by inviting other groups that work on deworming, bednet distributions, and micronutrient fortification to apply. This led to adding Sightsavers, the END Fund, Project Healthy Children, and Food Fortification Initiative to our lists this year. Episcopal Relief & Development’s NetsforLife® Program, Micronutrient Initiative, and Nothing but Nets declined to fully participate in our review process. • We completed intervention reports on voluntary medical male circumcision (VMMC) and cataract surgery. We asked VMMC groups PSI (declined to fully participate) and the Centre for HIV and AIDS Prevention Studies (pending) to apply. We had conversations with several charities working on cataract surgery and have not yet asked any to apply. • We did very preliminary investigations into a large number of interventions and prioritized a few for further work. This led to interim intervention reports on seasonal malaria chemoprevention (SMC), integrated community case management (iCCM) and ready-to-use therapeutic foods for treating severe acute malnutrition and recommending Malaria Consortium for its work on SMC. • We stayed up to date on the research for bednets, cash transfers, and deworming. We published a report on insecticide resistance and its implications for bednet programs. A blog post on our work on deworming is forthcoming. We did not find major new research on cash transfers that affected our recommendation of GiveDirectly. Giving to GiveWell vs. top charities GiveWell and the Open Philanthropy Project are planning to split into two organizations in the first half of 2017. The split means that it is likely that GiveWell will retain much of the assets of the previously larger organization while reducing its expenses. We think it’s fairly likely that our excess assets policy will be triggered and that we will grant out some unrestricted funds. Given that expectation, our recommendation to donors is: • If you have supported GiveWell’s operations in the past, we ask that you consider maintaining your support. It is fairly likely that these funds will be used this year for grants to top charities, but giving unrestricted signals your support for our operations and allows us to better project future revenue and make plans based on that. Having a strong base of consistent support allows us to make valuable hires when opportunities arise and minimize staff time spent on fundraising. • If you have not supported GiveWell’s operations in the past, we ask that you consider checking the box on our donate form to add 10% to help fund GiveWell’s operations. In the long term, we seek to have a model where donors who find our research useful contribute to the costs of creating it, while holding us accountable to providing high-quality, easy-to-use recommendations. Footnotes: * For example, if30 million were available to fund gaps of $10 million,$5 million, and $100 million, we would recommend allocating the funds so that the$10 million and $5 million gaps were fully filled and the$100 million gap received $15 million. The post Our updated top charities for giving season 2016 appeared first on The GiveWell Blog. ### Deworming might have huge impact, but might have close to zero impact Tue, 07/26/2016 - 12:48 We try to communicate that there are risks involved with all of our top charity recommendations, and that none of our recommendations are a “sure thing.” Our recommendation of deworming programs (the Schistosomiasis Control Initiative and the Deworm the World Initiative), though, carries particularly significant risk (in the sense of possibly not doing much/any good, rather than in the sense of potentially doing harm). In our 2015 top charities announcement, we wrote: Most GiveWell staff members would agree that deworming programs are more likely than not to have very little or no impact, but there is some possibility that they have a very large impact. (Our cost-effectiveness model implies that most staff members believe there is at most a 1-2% chance that deworming programs conducted today have similar impacts to those directly implied by the randomized controlled trials on which we rely most heavily, which differed from modern-day deworming programs in a number of important ways.) The goal of this post is to explain this view and why we still recommend deworming. Some basics for this post What is deworming? Deworming is a program that involves treating people at risk of intestinal parasitic worm infections with parasite-killing drugs. Mass treatment is very inexpensive (in the range of$0.50-$1 per person treated), and because treatment is cheaper than diagnosis and side effects of the drugs are believed to be minor, typically all children in an area where worms are common are treated without being individually tested for infections. Does it work? There is strong evidence that administration of the drugs reduces worm loads, but many of the infections appear to be asymptomatic and evidence for short-term health impacts is thin (though a recent meta-analysis that we have not yet fully reviewed reports that deworming led to short-term weight gains). The main evidence we rely on to make the case for deworming comes from a handful of longer term trials that found positive impacts on income or test scores later in life. For more background on deworming programs see our full report on combination deworming. Why do we believe it’s more likely than not that deworming programs have little or no impact? The “1-2% chance” doesn’t mean that we think that there’s a 98-99% chance that deworming programs have no effect at all, but that we think it’s appropriate to use a 1-2% multiplier compared to the impact found in the original trials – this could be thought of as assigning some chance that deworming programs have no impact, and some chance that the impact exists but will be smaller than was measured in those trials. For instance, as we describe below, worm infection rates are much lower in present contexts than they were in the trials. Where does this view come from? Our overall recommendation of deworming relies heavily on a randomized controlled trial (RCT) (the type of study we consider to be the “gold standard” in terms of causal attribution) first written about in Miguel and Kremer 2004 and followed by 10-year follow up data reported in Baird et al. 2011, which found very large long-term effects on recipients’ income. We reviewed this study very carefully (see here and here) and we felt that its analysis largely held up to scrutiny. There’s also some other evidence, including a study that found higher test scores in Ugandan parishes that were dewormed in an earlier RCT, and a high-quality study that is not an RCT but found especially large increases in income in areas in the American South that received deworming campaigns in the early 20th century. However, we consider Baird et al. 2011 to be the most significant result because of its size and the fact that the follow-up found increases in individual income. While our recommendation relies on the long-term effects, the evidence for short-term effects of deworming on health is thin, so we have little evidence of a mechanism through which deworming programs might bring about long-term impact (though a recent meta-analysis that we have not yet fully reviewed reports that deworming led to short-term weight gains). This raises concerns about whether the long-term impact exists at all, and may suggest that the program is more likely than not to have no significant impact. Even if there is some long-term impact, we downgrade our expectation of how much impact to expect, due to factors that differ between real-world implementations and the Miguel and Kremer trial. In particular, worm loads were particularly high during the Miguel and Kremer trial in Western Kenya in 1998, in part due to flooding from El Niño, and in part because baseline infection rates are lower in places where SCI and Deworm the World work than in the relevant studies. Our cost-effectiveness model estimates that the baseline worm infections in the trial we mainly rely on were roughly 4 to 5 times as high as in places where SCI and Deworm the World operate today, and that El Niño further inflated those worm loads during the trial. (These estimates combine data on the prevalence of infections and intensity of infections, and so are especially rough because there is limited data on whether prevalence or intensity of worms is a bigger driver of impact). Further, we don’t know of any evidence that would allow us to disconfirm the possibility that the relationship between worm infection rates and the effectiveness of deworming is nonlinear, and thus that many children in the Miguel and Kremer trial were above a clinically relevant “threshold” of infection that few children treated by our recommended charities are above. We also downgrade our estimate of the expected value of the impact based on: concerns that the limited number of replications and lack of obvious causal mechanism might mean there is no impact at all, expectation that deworming throughout childhood could have diminishing returns compared to the ~2.4 marginal years of deworming provided in the Miguel and Kremer trial, and the fact that the trial only found a significant income effect on those participants who ended up working in a wage-earning job. See our cost-effectiveness model for more information. Why do we recommend deworming despite the reasonably high probability that there’s no impact? Because mass deworming is so cheap, there is a good case for donating to support deworming even when in substantial doubt about the evidence. We estimate the expected value of deworming programs to be as cost-effective as any program we’ve found, even after the substantial adjustments discussed above: our best guess considering those discounts is that it’s still roughly 5-10 times as cost-effective as cash transfers, in expectation. But that expected value arises from combining the possibility of potentially enormous cost-effectiveness with the alternative possibility of little or none. GiveWell isn’t seeking certainty – we’re seeking outstanding opportunities backed by relatively strong evidence, and deworming meets that standard. For donors interested in trying to do as much good as possible with their donations, we think that deworming is a worthwhile bet. What could change this recommendation – will more evidence be collected? To our knowledge, there are currently no large, randomized controlled trials being conducted that are likely to be suitable for long-term follow up to measure impacts on income when the recipients are adults, so we don’t expect to see a high-quality replication of the Miguel and Kremer study in the foreseeable future. That said, there are some possible sources of additional information: • The follow-up data that found increased incomes among recipients in the original Miguel and Kremer study was collected roughly 10 years after the trial was conducted. Our understanding is that 15 year follow-up data has been collected and we expect to receive an initial analysis of it from the researchers this summer. • A recent study from Uganda didn’t involve data collection for the purpose of evaluating a randomized controlled trial; rather, the paper identified an old, short-term trial of deworming and an unrelated data set of parish-level test scores collected by a different organization in the same area. Because some of the parishes overlap, it’s possible to compare the test scores from those that were dewormed to those that weren’t. It’s possible that more overlapping data sets will be discovered and so we may see more similar studies in the future. • We’ve considered whether to recommend funding for an additional study to replicate Baird et al. 2011: run a new deworming trial that could be followed for a decade to track long term income effects. However, it would take 10+ years to get relevant results, and by that time deworming may be fully funded by the largest global health funders. It would also need to include a very large number of participants to be adequately powered to find plausible effects (since the original trial in Baird et al. 2011 benefited from particularly high infection rates, which likely made it easier to detect an effect), so it would likely be extremely expensive. For the time being, based on our best guess about the expected cost-effectiveness of the program when all the factors are considered, we continue to recommend deworming programs. The post Deworming might have huge impact, but might have close to zero impact appeared first on The GiveWell Blog. ### Mid-year update to top charity recommendations Thu, 06/23/2016 - 17:25 This post provides an update on what we’ve learned about our top charities in the first half of 2016. We continue to recommend all four of our top charities. Our recommendation for donors seeking to directly follow our advice remains the same: we recommend they give to the Against Malaria Foundation (AMF), which we believe has the most valuable current funding gap. Below, we provide: • Updates on our view about AMF, which we consider the most important information we’ve learned in the last half-year (More) • Updates on other top charities (More) • A discussion of the reasoning behind our current recommendation to donors (More) Updates on AMF Background AMF (www.againstmalaria.com) provides funding for long-lasting insecticide-treated net distributions (for protection against malaria) in developing countries. There is strong evidence that distributing nets reduces child mortality and malaria cases. AMF has relatively strong reporting requirements for its distribution partners and provides a level of public disclosure and tracking of distributions that we have not seen from any other net distribution charity. Overall, AMF is the best giving opportunity we are currently aware of. That said, we have concerns about AMF’s recent monitoring and transparency that we plan to focus on in the second half of the year. Updates from the last six months We are more confident than we were before in AMF’s ability to successfully complete deals with most countries it engages with. Over the past few years, our key concern about AMF has been whether it would be able to effectively absorb additional funding and sign distribution agreements with governments and other partners. At the end of 2013, we stopped recommending AMF because we felt it did not require additional funding, and our end-of-year analyses in 2014 and 2015 discussed this issue in depth. In early 2016, AMF signed agreements to fund two large distributions (totaling$37 million) of insecticide-treated nets in countries it has not previously worked in. We now believe that AMF has effectively addressed this concern.
AMF is in discussions for several additional large distributions. AMF currently holds approximately $23.3 million, and we believe that it is very likely to have to slow its work if it receives less than an additional$11 million very quickly. It is possible that it could also use up to an additional (approximately) $18 million more during this calendar year. It may be more valuable to give to AMF now than it will be later this year or next year. AMF’s funding gap may be time-sensitive because: 1. AMF is in several discussions about distributions that would take place in 2017. It has told us that it needs to make decisions within a month or two about which discussions to pursue. We don’t have a clear sense for how long before a distribution AMF needs to be able to commit funding, and note that, for example, AMF committed in February 2016 to a distribution in Ghana taking place in June to August 2016. That said, it seems quite plausible that AMF needs to commit soon to distributions taking place in 2017. 2. We don’t know whether there will be large funding gaps for nets in 2018 and beyond. The price of nets has been decreasing and the size of grants from the two largest funders of nets, the Global Fund to fight AIDS, TB, and Malaria and the President’s Malaria Initiative, is not yet known. (The Global Fund is holding its replenishment conference in September, in which donor governments are asked to make three-year pledges, so we may know more before the end of the year.) It’s possible that these funders will fund all or nearly all of the net needs in countries other than those that are particularly hard to work in for 2018. If that happens, gifts to AMF in late 2016 could be less valuable than gifts in the next couple of months. (This could also mean that, if AMF fills gaps in 2017 that would have been filled by other funders in 2018, gifts now are less valuable than they have been in the past. We have added an adjustment for this to our cost-effectiveness analysis, but given the high degree of uncertainty, this could be a more important factor than we are currently adjusting for.) Notwithstanding the above, we have important questions about AMF that we plan to continue to investigate. None of these developments caused us to change our recommendation about giving to AMF, but they are important considerations for donors: 1. Monitoring data: We have new concerns about AMF’s monitoring of its distributions, particularly its post-distribution check-up (PDCU) surveys. These surveys are a key part of our confidence in the quality of AMF’s distributions. For Malawi, where most of the PDCUs completed to date have been done, our key concern is that villages that surveyors visit are not selected randomly, but are instead selected by hand by staff of the organization that both implements and monitors the distributions, which seems fairly likely to lead to bias in the results. We have also seen results from the first two PDCUs from DRC. We have not yet looked at the DRC results in-depth or discussed them with AMF, but there appear to be major problems in how the surveys were carried out (particularly a high percentage of internally inconsistent data – around 40%-50%) and, if we believe the remaining data, fairly high rates of missing or unhung nets (~20% at 6-months) and nets that deteriorated quickly (65% were in ‘very good’ or ‘good’ condition at 6-months). 2. Transparency: Recently, AMF has been slower to share documentation from some distributions. AMF has told us that it has this documentation and we are concerned that AMF is not being as transparent as it could be. We believe this documentation is important for monitoring the quality of AMF’s distributions; it includes PDCUs, results from re-surveying 5% of households in during pre-distribution registrations (AMF has told us that this is a standard part of its process, but we have not seen results from any distributions), and malaria case rate data from Malawi that AMF has told us it has on hand. AMF attributes the delays to lack of staff capacity. We plan to write more about monitoring and transparency in a future post. 3. Insecticide resistance: Insecticide resistance (defined broadly as “any ways in which populations of mosquitoes adapt to the presence of insecticide-treated nets (ITNs) in order to make them less effective”) is a major threat to the effectiveness of ITNs. Insecticide resistance seems to be fairly common across sub-Saharan Africa, and it seems that resistance is increasing. It remains difficult to quantify the impact of resistance, but our very rough best guess (methodology described in more detail below) is that ITNs are roughly one-third less effective in the areas where AMF is working than they would be in the absence of insecticide resistance. We continue to believe, despite resistance, ITNs remain a highly cost-effective intervention. See our full report for more detail. Other updates on AMF • To better understand whether AMF is providing nets that would not otherwise have been funded, we considered five cases where AMF considered funding a distribution and did not ultimately provide funding. We then looked at whether other funders stepped in and how long of a delay resulted from having to wait for other funders. We published the details here. In short, most distributions took place later than they would have if AMF had funded them (on average over a year), which probably means that the people were not protected with nets during that time. We feel that these case studies provide some evidence that nets that AMF buys do not simply displace nets from other funding sources. • We’ve noted in the past that the delays in AMF signing agreements for distributions may have been due to AMF’s hesitation about paying for the costs of a distribution other than the purchase price of nets. For the distributions that AMF has signed this year, AMF has agreed to pay for some non-net costs, particularly the costs of PDCUs. The Global Fund to fight AIDS, TB, and Malaria is paying for the other non-net costs of the distribution. AMF’s willingness to fund some of the non-net costs may have made it easier for it to sign distribution agreements and put funds to use more quickly. Updates on our other top charities Schistosomiasis Control Initiative (full report) Background SCI (www3.imperial.ac.uk/schisto) works with governments in sub-Saharan Africa to create or scale up deworming programs (treating children for schistosomiasis and other intestinal parasites). SCI’s role has primarily been to identify recipient countries, provide funding to governments for government-implemented programs, provide advisory support, and conduct research on the process and outcomes of the programs. In past years, we’ve written that we had significant concerns about SCI’s financial reporting and financial management that meant we lacked high-quality, basic information about how SCI was spending funding and how much funding it had available to allocate to programs. We decided to focus our work in the first half of 2016 on this issue. We felt that seeing significant improvements in the quality of SCI’s finances was necessary for us to continue recommending SCI. We believe that deworming is a program backed by relatively strong evidence. We have reservations about the evidence, but we think the potential benefits are great enough, and costs low enough, to outweigh these reservations. SCI has conducted studies in about half of the countries it works in (including the countries with the largest programs) to determine whether its programs have reached a large proportion of children targeted. These studies have generally found moderately positive results, but have major methodological limitations. We have not asked SCI for monitoring results since last year. Updates from the last six months We published a separate blog post on our work on SCI so far this year. Our main takeaways: • SCI has begun producing higher-quality financial documents that allow us to learn some basic financial information about SCI. • We learned of two substantial errors in SCI’s financial management and reporting. 1) a July 2015 grant from GiveWell for about$333,000 was misallocated within Imperial College, which houses SCI, until we noticed it was missing from SCI’s revenue in March 2016; and (2) in 2015, SCI underreported how much funding it would have from other sources in 2016, leading us to overestimate its room for more funding by $1.5 million. • The clarity of our communication with SCI about its finances has improved, but there is still substantial room for further improvement. We feel that SCI has improved, but we would still rank our other top charities ahead of it in terms of our ability to communicate and understand their work. Given this situation, we continue to recommend SCI now and think that SCI is reasonably likely to retain its top charity status at the end of 2016. We plan, in the second half of 2016, to expand the scope of our research on SCI. We have not asked SCI for an update on its room for more funding (due to our focus on financial documents in the first half of the year). It’s our understanding that funds that SCI receives in the next six months will be allocated to work in 2017 and beyond. Because of this, we don’t believe that SCI has a pressing need for additional funds, though our guess is that it will have room for more funding when we next update our recommendations in November and that funds given before then will help fund gaps for the next budget year. GiveDirectly (full report) Background GiveDirectly (www.givedirectly.org) transfers cash to households in developing countries via mobile phone-linked payment services. It targets extremely low-income households. The proportion of total expenses that GiveDirectly has delivered directly to recipients is approximately 83% overall. We believe that this approach faces an unusually low burden of proof, and that the available evidence supports the idea that unconditional cash transfers significantly help people. We believe GiveDirectly to be an exceptionally strong and effective organization, even more so than our other top charities. It has invested heavily in self-evaluation from the start, scaled up quickly, and communicated with us clearly. It appears that GiveDirectly has been effective at delivering cash to low-income households. GiveDirectly has one major randomized controlled trial (RCT) of its impact and took the unusual step of making the details of this study public before data was collected (more). It continues to experiment heavily. Updates from the last six months • GiveDirectly announced an initiative to test a “basic income guarantee” to provide long-term, ongoing cash transfers sufficient for basic needs. The cost-effectiveness of providing this form of cash transfers may be different from the one-time transfers GiveDirectly has made in the past. • GiveDirectly continues to have more room for more funding than we expect GiveWell-influenced donors to fill in the next six months. Its top priority is funding the basic income guarantee project. • In late 2015 and early 2016, when GiveDirectly began enrolling participants in Homa Bay county, Kenya, it experienced a high rate of people refusing to be enrolled in the program. The reason for this is not fully clear, though GiveDirectly believes in some cases local leaders advised people to not trust the program. While GiveDirectly has temporarily dealt with this setback by moving its operations to a different location in Homa Bay county, it is possible that similar future challenges could reduce GiveDirectly’s ability to commit as much as it currently projects. • GiveDirectly has reached an agreement with a major funder which provides a mechanism through which multiple benchmarking projects (projects comparing cash transfers to other types of aid programs) can be launched. The major funder may fund up to$15 million for four different benchmarking projects with GiveDirectly. GiveDirectly plans to make available up to $15 million of the grant it received from Good Ventures in 2015 to match funds committed by the major funder. GiveDirectly and its partner have not yet determined which aid programs will be evaluated or how the evaluations will be carried out. • We are reasonably confident that GiveDirectly could effectively use significantly more funding than we expect it to receive, including an additional$30 million for additional cash transfers in 2016, though scaling up to this size would require a major acceleration in the second half of the year. We have not asked GiveDirectly how funding above this amount would affect its activities and plans (because we think it is very unlikely that GiveDirectly will receive more than $30 million from GiveWell-influenced supporters before our next update in November). Deworm the World (full report) Background Deworm the World (www.evidenceaction.org/deworming), led by Evidence Action, advocates for, supports, and evaluates government-run school-based deworming programs (treating children for intestinal parasites). We believe that deworming is a program backed by relatively strong evidence. We have reservations about the evidence, but we think the potential benefits are great enough, and costs low enough, to outweigh these reservations. Deworm the World retains monitors whose reports indicate that the deworming programs it supports successfully deworm children. Updates from the last six months • We asked Deworm the World whether additional funding in the next six months would change its activities or plans. It told us that it does not expect funding to be the bottleneck to any work in that time. We’d guess that there is a very small chance that it will encounter an unexpected opportunity and be bottlenecked by funding before our next update in November. • Deworm the World appears to be making progress expanding to new countries. It has made a multi-year commitment to provide technical assistance and resources to Cross River state, Nigeria for its school-based deworming program (the first deworming is scheduled for the end of this month), and are undertaking a nationwide prevalence survey in Pakistan. • In the past, we have focused our review of Deworm the World on its work in India. We are in the process of learning more about its work in other locations, particularly Kenya. The monitoring we have seen from Kenya appears to be high quality. Summary of key considerations for top charities The table below summarizes the key considerations for our four top charities. With the exception of modest changes to room for more funding, our high-level view of our top charities, as summarized in the table below, is the same as at our last update in November 2015. Consideration AMF Deworm the World GiveDirectly SCI Program estimated cost-effectiveness (relative to cash transfers) ~10x ~10x Baseline ~5x Directness and robustness of the case for impact Strong Moderate Strongest Moderate Transparency and communication Strong Strong Strongest Weakest Ongoing monitoring and likelihood of detecting future problems Strong Strong Strongest Weakest Organizational track record of rolling out program Moderate Moderate Strong Strong Room for more funding High Limited High Likely moderate (not investigated) Reasoning behind our current recommendation to donors Our recommendation for donors seeking to directly follow our advice is to give to AMF, which we believe has the most valuable current funding gap. We believe AMF will likely have opportunities to fund distributions this year which it will not be able to fund without additional funding. Due to the excellent cost-effectiveness of AMF’s work, we consider this a highly valuable funding gap to fill. Our current estimate is that on average AMF saves a life for about every$3,500 that it spends; this is an increase from our November 2015 estimate and reflects changes to our cost-effectiveness model as well as some of our inputs into bed nets’ cost-effectiveness. As always, we advise against taking cost-effectiveness estimates literally and view them as highly uncertain.
The below table lays out our ranking of funding gaps for June to November 2016. The first million dollars to a charity can have a very different impact from, e.g., the 20th million dollars. Accordingly, our ranking of individual funding gaps accounts for both (a) the quality of the charity and the good accomplished by its program, per dollar, and (b) whether a given level of funding is highly or only marginally likely to be needed in the next six months.
We consider funding that allows a charity to implement more of its core program (without substantial benefits beyond the direct good accomplished by this program) to be “execution funding.” We’ve separated this funding into three levels:
• Level 1: the amount we expect a charity to need in the coming year. If a charity has less funding than this level, we think it is more likely than not that it will be bottlenecked (or unable to carry out its core program to the fullest extent) by funding in the coming year. For this mid-year update, we have focused on funds that are needed before our next update in November, with the exception of SCI where we believe funds will not affect its work until next year.
• Level 2: if a charity has this amount, we think there is an ~80% chance that it will not be bottlenecked by funding.
• Level 3: if a charity has this amount, we think there is a ~95% chance that it will not be bottlenecked by funding.
(Our rankings can also take into account whether a gap is “capacity-relevant” or providing an incentive to engage in our process. We do not currently believe that our top charities have capacity-relevant gaps and are not planning to make mid-year incentive grants, so we haven’t gone into detail on that here. More details on how we think about capacity-relevant and execution gaps in this post.)
Priority Charity Amount (millions) Type Description Comment 1 AMF $11.3 Execution level 1 Fund distributions in two countries that AMF is in discussions with but does not have sufficient funding for AMF is strongest overall 2 AMF$7.3 Execution level 2 Fund the next largest gap on the list of remaining 2016-17 gaps in African countries – 3 SCI $10.1 Execution level 1 Very rough because we haven’t discussed this with SCI; further gaps not estimated Not as strong as AMF in isolation, so ranked below for same type of gap 4 AMF$10.5 Execution level 3 Fund the final two AMF-relevant gaps on the list of remaining 2016-17 gaps in African countries – 5 GiveDirectly $22.2 Execution level 1 Basic income guarantee program and additional standard transfers Not as cost-effective as bednets or deworming, so lower priority 6 Deworm the World$6.0 Execution level 3 A rough guess at the funding needed to cover a 3-year deworming program in a new country Strong cost-effectiveness, but unlikely to need funds in the short-term 6 GiveDirectly $7.8 Execution level 2 Funding for additional structured projects; further gaps not estimated – We are not recommending that Good Ventures make grants to our top charities for this mid-year refresh. In November 2015, we recommended that Good Ventures fund 50% of our top charities’ highest-value funding gaps for the year and Good Ventures gave$44.4 million to our top four charities. We felt this approach resulted in Good Ventures funding its “fair share” while avoiding creating incentives for other donors to avoid the causes we’re interested in, which could lead to less overall funding for these causes in the long run. (More on this reasoning available here.)
The post Mid-year update to top charity recommendations appeared first on The GiveWell Blog.
### Our updated top charities for giving season 2015
Wed, 11/18/2015 - 14:06
We have refreshed our top charity rankings and recommendations. Our set of top charities and standouts is the same as last year’s, but we have introduced rankings and changed our recommended funding allocation, due to a variety of updates – particularly to our top charities’ room for more funding. In particular, we are recommending that Good Ventures, a foundation with which we work closely, support our top charities at a higher level than in previous years. This post includes our recommendations to Good Ventures, and gives our recommendations to individual donors after accounting for these grants.
Overall, we think the case for our top charities is stronger than in previous years, and room for more funding is greater.
Our top charities and recommendations for donors, in brief
Top charities
1. Against Malaria Foundation (AMF)
2. Schistosomiasis Control Initiative (SCI)
3. Deworm the World Initiative, led by Evidence Action
4. GiveDirectly
This year, we are ranking our top charities based on what we see as the value of filling their remaining funding gaps. Unlike in previous years, we do not feel a particular need for individuals to divide their allocation between the charities, since we are recommending that Good Ventures provide significant support to each. For those seeking our recommended allocation, we simply recommend giving to the top-ranked charity on the list, which is AMF.
Our recommendation takes the grants we are recommending to Good Ventures into account, as well as accounting for charities’ existing cash on hand and expected non-GiveWell-related fundraising, and recommends charities according to how much good additional donations (beyond these sources of funds) can do. (Otherwise, as explained below, Deworm the World would be ranked higher.) Thus, AMF’s #1 ranking is not based on its overall value as an organization, but based on the value of its remaining funding gap.
Standout charities
As with last year, we also provide a list of charities that we believe are strong standouts, though not at the same level (in terms of likely good accomplished per dollar) as our top charities. They are not ranked, and are listed in alphabetical order.
Below, we provide:
• An explanation of major changes in the past year that are not specific to any one charity. More
• A summary of our top charities’ relative strengths and weaknesses, and how we would rank them if room for more funding were not an issue. More
• A discussion of our refined approach to room for more funding. More
• The recommendations we are making to Good Ventures, and how we rank our top charities after taking these grants (and their impact on room for more funding) into account. More
• Detail on each of our top charities, including major changes over the past year, strengths and weaknesses for each, and our understanding of each organization’s room for more funding. More
• The process we followed that led to these recommendations. More
• A brief update on giving to support GiveWell’s operations vs. giving to our top charities. More
Conference call to discuss recommendations
We are planning to hold a conference call at 5:30pm ET/2:30pm PT on Tuesday, December 1st to discuss our recommendations and answer questions.
If you’d like to join the call, please register using this online form. If you can’t make this date but would be interested in joining another call at a later date, please indicate this on the registration form.
Major changes in the last 12 months
Below, we summarize the major causes of changes to our recommendations (since last year).
Overall, the case for our top charities is stronger than it was in past years. The Deworm the World Initiative shared new monitoring and evaluation materials with us, so we are more confident than we were a year ago that it is a strong organization implementing high-quality programs. In addition, the extra year of work we have seen from AMF and GiveDirectly bolsters our view that they will be able to utilize additional funding effectively.
Our top charities have increased room for more funding. Last year, we expected donors following our recommendations to fully fill the most critical funding gaps of our top charities (excluding GiveDirectly) because they had limited room for more funding: GiveDirectly had a total funding gap of ~$40 million and our other three top charities had a total gap of ~$18 million. This year, all of our top charities have more room for more funding. We believe that GiveDirectly could absorb more than $80 million and other top charities together could collectively utilize more than$100 million. We do not expect donors following our recommendations to fully fill these gaps.
We are recommending that Good Ventures make larger grants to top charities. For reasons we will be detailing in a future post, we are recommending that Good Ventures make substantial grants to our top charities this year, though not enough to close their funding gaps.
Continued refinement of the concept of “room for more funding.” We’ve tried to create a much more systematic and detailed room for more funding analysis, because the stakes of this analysis have become higher due to (a) increased room for more funding across the board and (b) increased interest from Good Ventures in providing major support.
In past years, we’ve discussed charities’ room for more funding as a single figure without distinguishing between (a) the amount the charity would spend in the next 12 months, (b) the amount the charity needs to prevent it from slowing its work due to lack of funds, and (c) funding that would be especially important to the organization’s development and success (a dual benefit) in addition to expanding implementation of its program. This year, we’ve made three changes to our room for more funding analysis:
• We’ve made (a) an assessment of whether additional funds merely allow a charity to implement its program (“execution”) or (b) whether additional funds would be especially important to the charity’s development and success as an organization (“capacity-relevant”). We also explicitly note the role of incentives for meeting GiveWell’s top-charity criteria in our recommendations (we seek to ensure that each top charity receives at least $1 million, to encourage other organizations to seek to meet these criteria). • We are explicitly assessing “execution”-related room for more funding based on our estimate of the probability that lack of funding will lead to a charity slowing its progress. We distinguish between Level 1, Level 2, and Level 3 “execution” funding gaps; a higher number means the money is less likely to be needed. • We are now ranking “funding gaps,” not just ranking charities, because the first million dollars to a charity can have a very different impact from, e.g., the 20th million dollars. For example, if Charity A accomplishes more good per dollar with its programs than Charity B, we would rank Charity A above Charity B for a given type of gap (we would rank Charity A’s “Execution Level 1” gap above Charity B’s), but we might rank Charity B’s “Execution Level 1” gap (the amount of funding it will likely need) above Charity A’s “Execution Level 3” gap (the amount of funding gap it might, but probably will not, need to carry out more of its programs in the coming year). We discuss these ideas in greater depth below. Summary of key considerations for top charities The table below summarizes the key considerations for our four top charities. More detail is provided below as well as in the charity reviews. Consideration AMF Deworm the World GiveDirectly SCI Program estimated cost-effectiveness (relative to cash transfers) ~10x ~10x Baseline ~5x Directness and robustness of the case for impact Strong Moderate Strongest Moderate Transparency and communication Strong Strong Strongest Weakest Ongoing monitoring and likelihood of detecting future problems Strong Strong Strongest Weakest Organizational track record of rolling out program Moderate Moderate Strong Strong Room for more funding, after accounting for grants we are recommending to Good Ventures (more below) Very high Limited Very high High Overall, our ranking of the charities with room for more funding issues set aside (just considering a hypothetical dollar spent by the charity on its programs, without the “capacity-relevant funding” and “incentives” issues discussed below) would be: 1. AMF and Deworm the World 3. SCI 4. GiveDirectly However, when we factor in room for more funding (including the impact of the grants we’re recommending to Good Ventures), the picture changes. More on this below. Room for more funding analysis Capacity-relevant funding and incentives Capacity-relevant funding: additional funding can sometimes be crucial for a charity’s development and success as an organization. For example, it can contribute to a charity’s ability to experiment, expand, and ultimately have greater room for more funding over the long run. It can also be important for a charity’s ability to raise funds from non-GiveWell donors, which can be an important source of long-term leverage and can put the organization in a stronger overall position. We think of this sort of funding gap as particularly important to fill, because it can make a big difference over the long run; in particular, it may substantially affect the long-term quality of our giving recommendations. “Capacity-relevant” funds can include (a) funds that are explicitly targeted at growth (e.g., funds to hire fundraising staff); (b) funds that enable a charity to expand into areas it hasn’t worked in before, which can lead to important learning about whether and how the charity can operate in the new location(s); and (c) funds that would be needed in order to avoid challenging contractions in a charity’s activities which could jeopardize the charity’s long-term growth and funding prospects. Some specific examples: • The grant that Good Ventures made to GiveDirectly earlier this year is capacity-relevant because it will be used for: (a) building a fundraising team that will aim to raise substantial donations from non-GiveWell donors, and (b) developing partnerships with bilateral donors and local governments to deliver cash transfers or to run experiments comparing standard aid programs to cash transfers. • Early funding that GiveDirectly received was capacity-relevant because it enabled GiveDirectly to rapidly grow from a small organization moving a few hundred thousand dollars per year to a much larger organization moving more than$10 million per year. If this funding hadn’t been forthcoming, GiveDirectly might be much smaller today and have much less room for more funding.
• We now think that some additional funding to AMF and Deworm the World will be capacity-relevant because each organization has only operated in a very small number of countries and new funding will enable each to enter new countries. This will allow them to learn how to operate there, and demonstrate that they can do so, increasing our willingness (and likely that of other donors) to recommend more to these organizations in the future.
It’s hard to draw sharp lines around capacity-relevant funding, and all funding likely has some effect on an organization’s development, but we have tried to identify and prioritize the funding gaps that seem especially relevant.
Execution funding allows charities to implement more of their core program but doesn’t appear to have substantial benefits beyond the direct good accomplished by this program. We’ve separated this funding into three levels:
• Level 1: the amount we expect a charity to need in the coming year. If a charity has less funding than this level, we think it is more likely than not that it will be bottlenecked (or unable to carry out its core program to the fullest extent) by funding in the coming year.
• Level 2: if a charity has this amount, we think there is an ~80% chance that it will not be bottlenecked by funding.
• Level 3: if a charity has this amount, we think there is a ~95% chance that it will not be bottlenecked by funding.
Incentives: we think it is important that charities we recommend get a substantial amount of funding due to being a GiveWell top charity, because this ensures that incentives are in place for charities (and potential charity founders) to seek to meet our criteria for top charities and thus increase the number of charities we recommend and the total room for more funding available, even when they don’t end up being ranked #1. We seek to ensure that each top charity gets at least $1 million as a result of our recommendation, and we consider this to be a high-priority goal of our recommendations. The charity-specific sections of this post discuss the reasoning behind the figures we’ve assigned to “capacity-relevant” and “Execution Level 1” gaps, but they do not provide the full details of how we arrived at these figures (and do not explicitly address the “Execution Level 2” and “Execution Level 3” gaps). We expect to add this analysis to our charity reviews in the coming weeks. Funding gaps The total (i.e., Capacity-relevant, Execution Levels 1, 2, and 3, and Incentive) funding gaps (in millions of dollars, rounded to one decimal place) for each of our top charities are: • AMF:$98.2
• Deworm the World: $19.0 • GiveDirectly:$84.0
• SCI: $26.3 However, for reasons described above, the first million dollars to a charity can have a very different impact from, e.g., the 20th million dollars. Accordingly, we have created a ranking of individual funding gaps that accounts for both (a) the quality of the charity and the good accomplished by its program, per dollar (as laid out above), and (b) whether a given level of funding is capacity-relevant and whether it is highly or only marginally likely to be needed in the coming year. The below table lays out our ranking of funding gaps. When gaps have the same “Priority,” this indicates that they are tied. The table below includes the amount we are recommending to Good Ventures. For reasons we will lay out in another post, we are recommending to Good Ventures a total of ~$44.4 million in grants to top charities. Having set that total, we are recommending that Good Ventures start with funding the highest-rated gaps and work its way down, in order to accomplish as much good as possible.
When gaps are tied, we recommend filling them by giving each equal dollar amounts until one is filled, and then following the same procedure with the remaining tied gaps. See footnote for more.*
Priority Charity Amount Type Recommendation to Good Ventures Comments 1 DtWI $7.6 Capacity-relevant$7.6 DtWI and AMF are strongest overall 1 AMF $6.5 Capacity-relevant$6.5 See above 1 GD $1.0 Incentive$1.0 Ensuring each top charity receives at least $1 million 1 SCI$1.0 Incentive $1.0 Ensuring each top charity receives at least$1 million 2 GD $8.8 Capacity-relevant$8.8 Not as cost-effective as bednets or deworming, so lower priority, but above non-capacity-relevant gaps 2 DtWI $3.2 Execution Level 2 / possibly capacity-relevant$3.2 Level 1 gap already filled via “capacity-relevant” gap. See footnote for more** 2 AMF $43.8 Execution Level 1$16.3 Exhausts remaining recommendations to Good Ventures 3 SCI $4.9 Execution Level 1 0 Not as strong as DtWI and AMF in isolation, so ranked below them for same type of gap 3 AMF$24.0 Execution Level 2 0 – 4 DtWI $8.2 Execution Level 3 0 – 4 AMF$24.0 Execution Level 3 0 – 4 SCI $11.6 Execution Level 2 0 – 5 GD$24.8 Execution Level 1 0 – 5 SCI $8.8 Execution Level 3 0 – 6 GD$20.9 Execution Level 2 0 – 7 GD $28.6 Execution Level 3 0 – Our recommendations to Good Ventures and others Summing the figures from the above table, we are recommending that Good Ventures make the following grants (in millions of dollars, rounded to one decimal place): • AMF:$22.8
• Deworm the World: $10.8 • GiveDirectly:$9.8
• SCI: $1 We also recommend that Good Ventures give$250,000 to each of our standout charities. These grants go to the outstanding organizations and create additional incentives for groups to try to obtain a GiveWell recommendation.
After these grants, AMF will require an additional ~$27.5 million to close its Execution Level 1 gap (i.e., to make it more likely than not that it is able to proceed without being bottlenecked due to lack of funding). We rank this gap higher than any of the other remaining funding gaps for our top charities, as laid out in the table above. We estimate that non-Good Ventures donors will give approximately$15 million between now and January 31, 2016. Because we do not expect AMF’s remaining ~$27.5 million Execution Level 1 funding gap to be fully filled, we rank it #1 and recommend that donors give to AMF. We rank the remaining charities for donors who are interested in having the greatest impact per dollar based on how highly their highest-rated remaining gap ranks in the table above. That results in the following rankings for individual donors: 1. AMF 2. SCI 3. Deworm the World Initiative 4. GiveDirectly Details on top charities We present information on our top charities in alphabetical order. Against Malaria Foundation (AMF) Our full review of AMF is here. Background AMF (www.againstmalaria.com) provides funding for long-lasting insecticide-treated net distributions (for protection against malaria) in developing countries. There is strong evidence that distributing nets reduces child mortality and malaria cases. AMF has relatively strong reporting requirements for its distribution partners and provides a level of public disclosure and tracking of distributions that we have not seen from any other net distribution charity. In 2011, AMF received a large amount of funding relative to what it had received historically, so it began to focus primarily on reaching agreements for large-scale net distributions (i.e., distributions on the order of hundreds of thousands of nets rather than tens of thousands of nets). In its early efforts to scale up, AMF struggled to finalize large-scale net distribution agreements. At the end of 2013, we announced that we planned not to recommend additional donations to AMF due to room for more funding-related issues (more detail in this blog post). In 2014, AMF committed most of its funds to several new distributions — some in Malawi, some in the Democratic Republic of the Congo (DRC) — and we recommended it as a top charity again. Important changes in the last 12 months Previously, our confidence in AMF’s ability to scale had been limited by the fact that it had only completed large-scale distributions with one partner (Concern Universal) in one country (Malawi). However, AMF carried out its largest distribution to date (~620,000 nets) with a new partner in the DRC in late 2014. We have not yet seen some key documentation from the large DRC distribution, but early indications suggest that the distribution generally went as planned, despite our concern that the DRC may have been an especially challenging place to work (more details here). We see this as a positive update that AMF will be able to carry out high-quality large-scale distributions in a variety of locations in the future. AMF has continued to collect and share follow-up information on its past large-scale distributions, and this information seems to support the notion that these distributions are high-quality (i.e., that nets are reaching the target population and are being used). We provide a summary of these reports in our review. Funding gap AMF currently holds$18.5 million, and we estimate it will receive an additional $1.6 million before January 31, 2016 (excluding donations influenced by GiveWell) that it could use for future distributions. AMF has told us that it has a pipeline of possible future net distributions that add up to roughly$100 million beyond what it currently holds (details in our review).
We believe that AMF’s progress would be slowed due to lack of funding were it to receive less than $50.3 million in additional funding (this is its total capacity-relevant and “Execution Level 1” gap as presented earlier in the post). In particular, we view the first additional$6.5 million that AMF would receive as capacity-relevant (and thus particularly valuable) because it would enable AMF to fund a distribution in a 5th country with a 5th partner, generating additional information about its ability to expand beyond the contexts in which it has worked to date. (Note that AMF already has funds on hand to enter its 3rd and 4th countries.)
We arrived at the capacity-relevant and Execution Level 1 figure by noting that AMF has $70.4 million worth of deals it is actively negotiating (5 deals in 4 countries) that it can only continue with if it holds the funds to do so. Subtracting the$20.1 million we expect to be available (the $18.5 million it currently holds plus the$1.6 million we expect it to receive in the coming months) leaves a $50.3 million funding gap. AMF failed to reach new distribution agreements in 2015; there is still significant uncertainty regarding AMF’s ability to finalize agreements with new partners and countries. Nevertheless, we see providing a large amount of additional funds to AMF as a reasonable bet, and see AMF as a very strong giving opportunity. We think it is possible that in November 2016 (when we next expect to complete a full refresh of our recommendations), we will recommend significantly less funding to AMF. We consider the funding we’re recommending to AMF now to be a good bet, but a risky one, because AMF currently has a relatively limited track record: it has worked with only two partners in two countries. Because of the lag between the time we provide funding and the time net distributions take place (often 2 years) and the additional lag caused by the time it takes to monitor distributions, we may not have additional information about whether or not AMF’s additional distributions were successful for 2-3 years. Next year, it is possible that we will choose to recommend significantly less funding to AMF while we wait for additional data to become available. There still appears to be a large global funding gap for bednets; a global bednet coordination group estimated that about 245 million additional nets would be needed in 2015-2017 (details in our review). Key considerations: • Program impact and cost-effectiveness. We estimate that bednets are ~10x as cost-effective as cash transfers. Our estimates are subject to substantial uncertainty. All of our cost-effectiveness analyses are available here. Our 2015 cost-effectiveness file is available here (.xlsx). • Directness and robustness of the case for impact. We believe that the connection between AMF receiving funds and those funds helping very poor individuals is less direct than GiveDirectly’s and more direct than SCI’s or Deworm the World’s. The uncertainty of our estimates is driven by a combination of AMF’s challenges historically disbursing the funds it receives and a general recognition that aid programs, even those as straightforward as bednets, carry significant risks of failure via ineffective use of nets, insecticide resistance, or other risks we don’t yet recognize relative to GiveDirectly’s program. AMF conducts extensive monitoring of its program; these results have generally indicated that people use the nets they receive. • Transparency and communication. AMF has been extremely communicative and open with us. We feel we have a better understanding of AMF than of SCI, and a similar level of knowledge about AMF as we have for Deworm the World, though our understanding is not as strong as our understanding of GiveDirectly. In particular, were something to go wrong in one of AMF’s distributions, we believe we would eventually find out (something we are not sure of in the case of SCI), but we believe our understanding would be less quick and complete than it would be for problems associated with GiveDirectly’s program (which has more of a track record of consistent intensive follow-up). • Risks: • We are not highly confident that AMF will be able to finalize additional distributions and do so quickly. AMF could struggle again to agree to distribution deals, leading to long delays before it spends funds. We view this as a relatively minor risk because the likely worst-case scenario is that AMF spends the funds slowly (or returns funds to donors). • We remain concerned about the possibility of resistance to the insecticides used in bednets. There don’t appear to be major updates on this front since our 2012 investigation into the matter; we take the lack of major news as a minor positive update. Our full review of AMF is here. Deworm the World Initiative, led by Evidence Action Our full review of Deworm the World is here. Background Deworm the World (www.evidenceaction.org/deworming), led by Evidence Action, advocates for, supports, and evaluates government-run school-based deworming programs (treating children for intestinal parasites). We believe that deworming is a program backed by relatively strong evidence. We have reservations about the evidence, but we think the potential benefits are great enough, and costs low enough, to outweigh these reservations. Deworm the World retains monitors whose reports indicate that the deworming programs it supports successfully deworm children. Important changes in the last 12 months In 2015, Deworm the World continued to support the scale-up and monitoring of deworming programs in India and Kenya. One of its notable activities this year was providing technical assistance to the Indian national government in support of India’s first national deworming day: a program in which the government provided assistance to Indian states to implement school-based deworming on a single day to encourage more states to implement the program. The first national deworming day took place in February 2015, and 12 states participated in the program (more details here). The quality of the monitoring that we saw from Deworm the World improved in 2015. Deworm the World continued to hire and train third-party monitors to directly observe deworming activities, and it slightly improved its estimates of how many children were treated. This information strongly suggests that the programs are generally operating as intended. More details in our review. Last year, Deworm the World stated to us that it could not use significant additional funding to scale up deworming programs. Deworm the World now believes that it has identified countries where it could use additional funds to support the scale-up of deworming programs, beginning with a potential program in Punjab province, Pakistan (more). (Deworm the World also plans to use funds it already holds or expects to receive to expand into Ethiopia and Nigeria.) Future donations to Deworm the World will likely be used outside of India, and in those cases governments may have less funding to support deworming. This may cause Deworm the World to pay a higher fraction of the overall cost of the program, making the potential for leverage of future donations more limited. Overall program costs may also be higher outside of India. More details in our review. A significant organizational update is that Alix Zwane stepped down as Executive Director of Evidence Action in August; she left to join the Global Innovation Fund as CEO. Evidence Action has since hired Jeff Brown (formerly Interim CEO of the Global Innovation Fund) as Executive Director. Grace Hollister remains Director of the Deworm the World Initiative. Overall, our impression is that Dr. Zwane has been a highly effective leader of Evidence Action and her departure risks disruptions that could lead to us changing our view of the organization, though we would guess that this will not be the case. In July, researchers published two new analyses of a key study regarding deworming (the most important piece of evidence we rely on), and the Cochrane Collaboration published an updated review of the evidence for mass deworming programs. The new papers did not change our overall assessment of the evidence on deworming. More in our blog post. Funding gap We believe that Deworm the World has significant opportunities to use additional funding to expand its program. We believe it may have opportunities to enter at least two more countries (in addition to Nigeria and Ethiopia, which it will be able to enter with funds it already has or expects to receive). We estimate its funding need using the two countries it is most likely to enter — Pakistan and Nepal — though note that in both cases, we see these as representative of the types of opportunities it may have, rather than the specific opportunities we expect it to take. Altogether, Deworm the World estimates that it would need$11.25 million to commit to fully funding three years of deworming programs in both countries. Because it holds (or expects to receive shortly) funding that will total $3.6 million, we estimate its funding gap for this work at$7.6 million.
Funding this gap is capacity-relevant, and is therefore a high priority, because we would like to see Deworm the World try to work in additional countries beyond India and Kenya, where it has worked historically. Next year, Deworm the World will also enter Nigeria and Ethiopia (with funding already available), so it will likely end the year having had some experience in five or more countries. This could substantially increase Deworm the World’s long-term room for more funding.
A complicating factor in thinking about Deworm the World’s funding gap is that Deworm the World is part of a larger organization, Evidence Action. Funding for Deworm the World may be fungible with funding for Evidence Action’s other activities, such as its Dispensers for Safe Water initiative (which we believe to be substantially less cost-effective than deworming). Because of this, it is difficult to determine Deworm the World’s true funding gap, and it is possible that some additional funds given to support Deworm the World could effectively lead to additional funds for a non-Deworm the World project. We understand that Evidence Action has received approximately $2.4 million in unrestricted funding over the past year. Fully funding Deworm the World could potentially cause Evidence Action to redirect some or all of these funds to its other programs. More details on all of the above are in our review. Key considerations: • Program impact and cost-effectiveness. We estimate that Deworm the World-associated deworming programs are ~10x as cost-effective as cash transfers. Our estimates are subject to substantial uncertainty. It’s important to note that we view deworming as high expected value, but this is due to a relatively low probability of very high impact. Most GiveWell staff members would agree that deworming programs are more likely than not to have very little or no impact, but there is some possibility that they have a very large impact. (Our cost-effectiveness model implies that most staff members believe there is at most a 1-2% chance that deworming programs conducted today have similar impacts to those directly implied by the randomized controlled trials on which we rely most heavily, which differed from modern-day deworming programs in a number of important ways.) Our 2015 cost-effectiveness file is available here (.xlsx). • Directness and robustness of the case for impact. Deworm the World doesn’t carry out deworming programs itself; it advocates for and provides technical assistance to governments implementing deworming programs, making direct assessments of its impact challenging. We have seen evidence that strongly suggests that Deworm the World-supported programs successfully deworm children. While we believe Deworm the World is impactful, our evidence is limited, and in addition, there is always a risk that future expansions will prove more difficult than past ones. • Transparency and communication. Deworm the World has been communicative and open with us. We believe that were something major to go wrong with Deworm the World’s work, we would be able to learn about it and report on it. • Risks: • Deworm the World is part of a larger organization, Evidence Action. It is possible that some additional funds given to support Deworm the World could effectively lead to additional funds for a non-Deworm the World project due to fungibility. Also, changes that affect Evidence Action (and its other programs) could indirectly impact Deworm the World. For example, if a major event occurs (either positive or negative) for Evidence Action, it is likely that it would reduce the time some staff could devote to Deworm the World. • Deworm the World is now largely raising funds to support programs that will be carried out under a different model in new countries, which makes it harder for us to predict future success based on historical results and may make it harder to understand and quantify Deworm the World’s impact even after the program is completed. Our full review of Deworm the World is here. GiveDirectly Our full review of GiveDirectly is here. Background GiveDirectly (www.givedirectly.org) transfers cash to households in developing countries via mobile phone-linked payment services. It targets extremely low-income households. The proportion of total expenses that GiveDirectly has delivered directly to recipients is approximately 85% overall. We believe that this approach faces an unusually low burden of proof, and that the available evidence supports the idea that unconditional cash transfers significantly help people. We believe GiveDirectly to be an exceptionally strong and effective organization, even more so than our other top charities. It has invested heavily in self-evaluation from the start, scaled up quickly, and communicated with us clearly. It appears that GiveDirectly has been effective at delivering cash to low-income households. GiveDirectly has one major randomized controlled trial (RCT) of its impact and took the unusual step of making the details of this study public before data was collected (more). It continues to experiment heavily, to the point where every recipient is enrolled in a study or a campaign variation. Important changes in the last 12 months GiveDirectly continued to scale up significantly, utilizing most of the funding it received at the end of last year. It continued to share informative and detailed monitoring information with us. Overall, it grew its operations while maintaining the high quality of its program. In August, Good Ventures granted$25 million to GiveDirectly to support potentially high-upside opportunities, such as (a) building a fundraising team that will aim to raise substantial donations from non-GiveWell donors, and (b) developing partnerships with bilateral donors and local governments to deliver cash transfers or to run experiments comparing standard aid programs to cash transfers.
GiveDirectly’s increased efforts to network with potential government and donor partners have led to some results in 2015. For example, GiveDirectly will be implementing cash transfers in a randomized controlled trial in Rwanda that will be funded by a bilateral aid donor and Google. The study will test cash transfers against another still-to-be-chosen aid program. GiveDirectly is currently in several preliminary conversations with partners for similarly large projects in the future.
Funding gap
GiveDirectly believes it could move a total of ~$94 million to poor households in the year following March 1, 2016, for which it expects to have ~$12.6 million available by March 1. We have classified ~$34.5 million of this as the total “Execution Level 1,” capacity-relevant, and incentive funding gap (more on what this means above). We arrived at this figure by assuming that GiveDirectly could double its operations in Kenya (from ~$16.5 million/year to ~$33 million/year) and scale up to ~$12.1 million/year in Uganda. This would cost a total of ~$45.1 million, of which GiveDirectly already has ~$10.6 million on hand (ignoring $2 million that we exclude due to donor coordination issues), which results in a ~$34.5 million gap.
We’ve classified some of this as a “capacity-relevant” funding gap for our purposes (making it higher priority). First, we view the ~$12.1 million it would hope to spend in Uganda as capacity-relevant, in the sense that providing it could make a major difference to GiveDirectly’s long-term development. GiveDirectly told us that operating in Uganda is more challenging than in Kenya and that it expects to learn a significant amount as it grows. It is therefore planning to grow more slowly in Uganda than it did in Kenya. GiveDirectly made two arguments for Uganda being important for its long-term trajectory: 1. If GiveDirectly lost the ability to operate in Kenya, this would significantly diminish its ability to move funds out the door. Operating in Uganda is an important hedge against this risk. 2. Kenya is a particularly easy environment in which to operate because of the existence of M-PESA, a powerful and ubiquitous provider that enables GiveDirectly to transfer funds to recipients via mobile phones. The mobile payments network is significantly less developed outside of Kenya. As such, Uganda offers an important test case for operating in a more standard environment, which could be particularly valuable to GiveDirectly as it encourages aid agencies and country governments to expand direct cash assistance. It’s harder to estimate how much of the Kenya funding needs are properly classified as “capacity-relevant” (an important distinction for our purposes, as discussed above). We guess that were GiveDirectly to be operating at a level 50% its current size (such that it only spent ~$8.25 million/year in Kenya), it would be able to build capacity from that level to its current level (and beyond) as quickly as it did in its recent past. We therefore classify ~$8.25 million of the ~$16.5 million it hopes to spend in Kenya as “capacity-relevant” and ~$8.25 million as “execution.” We note that we are highly uncertain about these estimates and that were GiveDirectly to receive no additional funding, this would cause it to contract in Kenya and lay off some of its middle management, an action that would cause it to incur reasonably high costs; we think much more contraction than that would be significantly more challenging for GiveDirectly as an organization. Based on the above, and based on GiveDirectly’s existing available funds (with some adjustments for coordination issues, along the lines of this discussion from last year) we estimate that GiveDirectly has ~$9.8 million worth of unfunded opportunities that we ought to classify as capacity-relevant or incentive funding. (We arrive at this estimate based on: ~$20.35 million (total amount we classify as capacity-relevant from Kenya and Uganda) – ~$10.6 million (funds on hand, excluding donations we ignore due to coordination issues) = ~$9.75 million.) Longer-term, we expect to continue to view funding ~$8.25 million in Kenya as capacity-relevant support and would expect to consider future expansion in Uganda (up to the current level of Kenya, i.e., ~$16.5 million/year) capacity-relevant, as well. Once GiveDirectly reaches ~$16.5 million in Uganda and proves that it can operate at that level, we only expect to view ~$8.25 million as capacity-relevant and hope that it can raise funds from other sources to support its work. More details in our review. Key considerations: • Program impact and cost-effectiveness. Our best guess is that deworming or distributing bednets achieves ~10x times more humanitarian benefit per dollar donated than cash transfers. Our estimates are subject to substantial uncertainty. All of our cost-effectiveness analyses are available here. Our 2015 cost-effectiveness file is available here (.xlsx). • Directness and robustness of the case for impact. GiveDirectly collects and shares a significant amount of relevant information about its activities. The data it collects show that it successfully directs cash to very poor people, that recipients generally spend funds productively (sometimes on food, clothing, or school fees, other times on investments in a business or home infrastructure), and that it leads to very low levels of interpersonal conflict and tension. We are more confident in the impact of GiveDirectly’s work than in that of any of the other charities discussed in this post; we believe that cash transfers face a lower burden of proof than other interventions. • Transparency and communication. GiveDirectly has always communicated clearly and openly with us. It has tended to raise problems to us before we ask about them, and we generally believe that we have a very clear view of its operations. We feel more confident about our ability to keep track of future challenges than with any of the other charities discussed in this post. • Risks: • GiveDirectly has scaled (and hopes to continue to scale) quickly. Thus far, it has significantly increased the amount of money it can move with limited issues as a result. The case of staff fraud that GiveDirectly detected is one example of an issue possibly caused by its pace of scaling, but its response demonstrated the transparency and rigor we expect. Our full review of GiveDirectly is here. Schistosomiasis Control Initiative (SCI) Our full review of SCI is here. Background SCI (www3.imperial.ac.uk/schisto) works with governments in sub-Saharan Africa to create or scale up deworming programs (treating children for schistosomiasis and other intestinal parasites). SCI’s role has primarily been to identify recipient countries, provide funding to governments for government-implemented programs, provide advisory support, and conduct research on the process and outcomes of the programs. Despite SCI sharing a number of spending reports with us, we do not feel we have a detailed and fully accurate picture of how SCI and the governments it supports have spent funds in the past. We don’t feel that SCI has ever purposefully been indirect with us, but we have often struggled to communicate effectively with SCI representatives. We still lack important and in some cases basic information about SCI’s finances, and we find this problematic. We believe that deworming is a program backed by relatively strong evidence. We have reservations about the evidence, but we think the potential benefits are great enough, and costs low enough, to outweigh these reservations. SCI has conducted studies in about half of the countries it works in (including the countries with the largest programs) to determine whether its programs have reached a large proportion of children targeted. These studies have generally found moderately positive results, but have some methodological limitations. Important changes in the last 12 months SCI reports that it has continued to scale up its deworming programs and that it has supported some programs in new countries, though we have limited monitoring information from these programs (e.g., we have not seen monitoring from its programs in Ethiopia, Sudan, Madagascar, and the DRC). This year, SCI has shared a few more coverage surveys that found reasonably high coverage of its programs. We have continued to have communication challenges with SCI. In particular: • We have a limited understanding of SCI’s work because we still lack important and basic information about how SCI spends money. SCI recognizes that its financial management system is disorganized, and some spending reports that SCI has sent us have contained errors. • We have struggled to gain a confident understanding of how SCI will use additional funds, and we cannot check how its funds were used after the fact because we lack information about its spending. In some cases, SCI has not spent additional funds as expected and it is unclear what caused the shift (more detail on one example in our August 2015 update). In July, researchers published two new analyses of a key study regarding deworming (the most important piece of evidence we rely on), and the Cochrane Collaboration published an updated review of the evidence for mass deworming programs. The new papers did not change our overall assessment of the evidence on deworming. More in our blog post. Funding gap SCI estimates that it would use the following amounts of unrestricted funding in each of the next three years (in millions of US dollars): • April 2016 – March 2017:$9.5
• April 2017 – March 2018: $13.6 • April 2018 – March 2019:$13.3
Our impression is that GiveWell-influenced donors contribute most of SCI’s unrestricted funds.
Our best guess is that, excluding the funds SCI may receive due to GiveWell’s recommendation, SCI will hold approximately $1.5 million in April 2016 that it could allocate to the above gaps. Also, after SCI set its fundraising targets, a funder committed$6 million over the next three years ($2 million per year) to deworming programs in Ethiopia, with which SCI is involved. Our best guess is that this funding reduces SCI’s “Execution Level 1” and incentive funding gap for the coming year from$9.5 million to $5.9 million. (We arrive at this estimate by subtracting ~$1.5 million and another $2 million from the total Level 1/incentive gap for the coming year). We do not classify any of this as “capacity-relevant” because we have little understanding of how it will be spent, and we do not expect to be able to understand how it was spent after the fact, either. More details on SCI’s funding gap are in our review. Key considerations: • Program impact and cost-effectiveness. Our best guess is that deworming programs implemented by SCI are ~5x as cost-effective as cash transfers. Our estimates are subject to substantial uncertainty. It’s important to note that we view deworming as high expected value, but this is due to a relatively low probability of very high impact. Most GiveWell staff members would agree that deworming programs are more likely than not to have very little or no impact, but there is some possibility that they have a very large impact. (Our cost-effectiveness model implies that most staff members believe there is at most a 1-2% chance that deworming programs conducted today have similar impacts to those directly implied by the randomized controlled trials on which we rely most heavily, which differed from modern-day deworming programs in a number of important ways.) Our 2015 cost-effectiveness file is available here (.xlsx). • Directness and robustness of the case for impact. SCI doesn’t carry out deworming programs itself; it advocates for and provides technical assistance to governments implementing deworming programs, making direct assessments of its impact challenging. We have seen some evidence demonstrating that SCI-supported programs successfully deworm children, though this evidence is relatively thin. Nevertheless, deworming is a relatively straightforward program, and we think it is likely (though far from certain) that SCI-supported deworming programs successfully deworm people. We have had difficulties communicating with SCI, which has reduced our ability to understand it. We have also spent significant time interviewing SCI staff and reviewing documents over the past 6 years and have found minor but not major concerns. • Transparency and communication. We don’t feel that SCI has ever purposefully been indirect with us, but we have often struggled to communicate effectively with SCI representatives. Specifically, (a) we had a major miscommunication with SCI about the meaning of its self-evaluations (more) and (b) although we have spent significant time with SCI, we remain unsure how SCI has spent funds and how much funding it has available (and we believe SCI itself does not have a clear understanding of this). Importantly, if there is a future unanticipated problem with SCI’s programs, we don’t feel confident that we will become aware of it. This contrasts with our other top charities, which we feel we have a strong ability to follow up on. • Risks: There are significantly more unknown risks with SCI than our other top charities due to our limited understanding of its activities. Our full review of SCI is here. Standouts As we did last year, we recommend four organizations as “standouts.” These charities score well on some of our criteria, but we are not confident enough in them to name them top charities. This year, we retain the same four standout organizations: Development Media International (DMI), the Global Alliance for Improved Nutrition’s Universal Salt Iodization program (GAIN-USI), the Iodine Global Network (IGN), and Living Goods. We followed all four of these charities in 2015, but have only published an updated review for DMI. We expect to publish updated reviews for GAIN-USI, IGN, and Living Goods in the near future. We provide brief updates on these charities below: • DMI. DMI produces radio and television programming in developing countries that encourages people to adopt improved health practices. It is a standout because of its commitment to monitoring and the possibility that it is implementing a highly cost-effective program. DMI has recently completed a randomized controlled trial of its program. Last year, we had midline results from this trial, which generally looked promising.In November 2015, DMI privately shared preliminary endline results from the RCT. These results did not find any effect of DMI’s program on child mortality, and found substantially less effect on behavior change than was found in the midline results. We (understandably) cannot publicly discuss the details of the endline results we have seen, because they are not yet finalized and because the finalized results will be embargoed prior to publication. DMI believes that there were serious problems with endline data collection (note that we have not yet tried to independently assess this claim). With the support of the trial’s Independent Scientific Advisory Committee, DMI is planning to conduct another endline survey in late 2016, with results available in 2017.We are impressed by DMI’s openness with us about its results (and its willingness for us to share the high-level summary), and we hope to have discussions with DMI about how it might be able to work toward becoming a top charity in the future. Our full review of DMI is here. • GAIN-USI. GAIN’s Universal Salt Iodization (USI) program supports national salt iodization programs. There is strong evidence that salt iodization programs have a significant, positive effect on children’s cognitive development. GAIN-USI does not work directly to iodize salt; rather, it supports governments and private companies to do so, which could lead to leveraged impact of donations or to low impact, depending on its effectiveness. Last year, we wrote, “We tried but were unable to document a demonstrable track record of impact; we believe it may have had significant impacts, but we are unable to be confident in this with what we know now. More investigation next year could change this picture.” In 2015, we continued our assessment of GAIN, focusing on its work in India and Ethiopia, including a site visit to Ethiopia in July.Overall, we tried but were unable to establish clear evidence of GAIN successfully contributing to the impact of iodization programs. This is primarily due to (a) the difficulty in attributing impact to specific activities that GAIN carried out and (b) challenges we have had communicating with GAIN about its work. We have not yet completed our final report on GAIN but hope to publish it in the near future. We have published notes from some of the conversations that were part of this research and they are available here. Our 2014 review of GAIN is here. • IGN. Like GAIN-USI, IGN supports (via advocacy and technical assistance rather than implementation) salt iodization, and as with GAIN-USI, we tried but were unable to establish clear evidence of IGN successfully contributing to the impact of iodization programs. Unlike GAIN-USI, IGN is small, operating on a budget of approximately$0.5-$1 million per year, and relies heavily on volunteer time. We are planning to post an updated review in the near future. Our 2014 review of IGN is here. • Living Goods recruits, trains, and manages a network of community health promoters who sell health and household goods door-to-door in Uganda and Kenya and provide basic health counseling. They sell products such as treatments for malaria and diarrhea, fortified foods, water filters, bednets, clean cookstoves, and solar lights. Living Goods completed a randomized controlled trial of its program and measured a 27% reduction in child mortality. We estimate that Living Goods saves a life for roughly each$10,000 it spends, approximately 3 times as much as our estimate for the cost per life saved of AMF’s program. We spoke with Living Goods and reviewed documents about their progress in 2015. We do not have major updates to report but are planning to post an updated review in the near future. Our 2014 review of Living Goods is here.
Our research process in 2015
This section describes the new work we did in 2015 to supplement our previous work on defining and identifying top charities. See the process page on our website for our overall process.
This year, we did not put a substantial amount of senior staff time into new top charities research work because (a) we were largely focused on building capacity, and (b) we reallocated a significant amount of capacity to the Open Philanthropy Project (see our post on our plans for 2015 for more details).
We focused the bulk of our research capacity for top charities work on staying up-to-date on our recommended charities. We also did an intensive evaluation of GAIN-USI, including a site visit (more details forthcoming).
We completed investigations of vitamin A supplementation and maternal and neonatal tetanus immunization campaigns. Both programs seem potentially competitive with our other priority programs, but we were not able to identify charities that worked on these programs that were willing to apply for a recommendation. We also made substantial progress on investigating several other programs, such as measles immunization, meningitis A vaccination, folic acid fortification, voluntary medical male circumcision for the prevention of HIV, and “Targeting the Ultra-Poor” (or “Ultra-Poor Graduation”) programs.
We stayed up to date on the research for bednets, cash transfers, and deworming.
We did not conduct an extensive search for new charities this year. We feel that we have a relatively good understanding of the existing charities that could potentially meet our criteria, based on past searches (see the process page on our website for more information). Instead, we solicited applications from organizations that we viewed as contenders for recommendations. A March post laid out which organizations we were hoping to investigate and why.
We did some initial research on several charities that we had not investigated before, but we did not complete the reviews in time for our 2015 recommendations. The organizations that we began investigating were:
We plan to complete these reviews in 2016.
Giving to GiveWell vs. top charities
We have grown significantly over the past few years and continue to raise funds to support our operations. This includes work on GiveWell’s top charities and the Open Philanthropy Project.
We plan to post an update on our funding situation before the end of the year.
The most up-to-date information available on this topic is linked from our June 2015 board meeting. The short story is that we are still seeking additional donations and encourage donors who feel they are sufficiently confident in our impact to give to us.
Footnotes:
* For example, if $30 million were available to fund gaps of$10 million, $5 million, and$100 million, we would recommend allocating the funds so that the $10 million and$5 million gaps were fully filled and the $100 million gap received$15 million.
This rule is material to the three gaps tied at priority level 2. It causes us to recommend that Good Ventures’ last $28.3 million to recommended charities is used to fully fill GiveDirectly’s$8.8 million capacity-relevant gap and Deworm the World’s $3.2 million Execution Level 2 (possible capacity-relevant) gap, but only fill$16.3 million of AMF’s Execution Level 1 gap.
** This gap can’t be cleanly classified because we think the funding is relatively unlikely to be needed, but if it is needed, it is likely to have capacity-relevant effects. Thus, it is technically classified as Execution Level 2, but we think it has similar value to Execution Level 1.
The post Our updated top charities for giving season 2015 appeared first on The GiveWell Blog.
### Journalists report on deworming program supported by Deworm the World Initiative in Kenya
Mon, 09/14/2015 - 14:48
Last year, Jacob Kushner, a journalist living in Kenya, reported on his observations from villages in which GiveDirectly had distributed some of its earliest cash transfers. This year, we funded him to report on the National School-Based Deworming Programme in Kenya, a program supported by Deworm the World.
Mr. Kushner’s article follows. His colleague, Anthony Langat, observed the program and interviewed some stakeholders; his interview notes are posted here.
In addition to Mr. Kushner’s article:
• We summarize our takeaways here.
• Evidence Action, which runs Deworm the World, responds to the article here.
School-based deworming: A parent’s perspective
In Kenya, some parents fear potential, minor side effects of deworming, and a few may even oppose it for religious reasons. Others say they just want to be better informed.
By Jacob Kushner and Anthony Langat
Since 2009, Evidence Action’s Deworm the World Initiative has provided technical assistance to the highly praised government-run National School-Based Deworming Programme, in Kenya. Last year the program de-wormed 6.4 million students in 16,000 schools. The Deworm the World Initiative encourages a multi-faceted strategy toward informing parents about the program, using radio announcements, community meetings, and by training teachers to encourage students to let their parents know about the program in advance of each ‘deworming day.’
But to ensure that each and every parent is made aware of it beforehand is impossible. Many families in rural Kenya lack radios, and communication between schools and certain families can be limited. And it’s safe to assume that children don’t always dutifully relay to parents each and every announcement they hear in class.
In March, Anthony Langat traveled to Kenya’s western Siaya county to observe deworming day at several schools. In Siaya county, over 312,000 students were targeted for deworming on that day. Over the course of a week he interviewed 13 parents, four teachers and one government official as well as three of Evidence Action’s staff members. He was surprised to find that many of these 13 parents said they were poorly informed or entirely uninformed about the March deworming before it occurred and that some were upset about that fact.
Julius, a 65-year-old farmer and father of six, said he was disappointed that he wasn’t informed as to what, precisely, the deworming treatment consisted of, nor of its possible side effects.
Julius said that he eventually came to learn more about the drug’s purpose, and that in the future he’d encourage his children to participate. “I will accept when they come next time. But I need a written form stating what they are going to give my child. Just a form that has information regarding the drug so that we know and stay informed,” he said.
Evidence Action staff said the deworming program is part of Kenya’s overall government school health program for children, which does not require parental permission for its individual school based treatments. Even so, Evidence Action supports training of local government agents such as Community Health Extension Workers (part of Kenya’s Health Department) to reach out to parents and communities to inform them about the deworming. The program’s community sensitization efforts include meetings with regional stakeholders, public county-level launch events, displaying posters, and public service announcements via radio. (Originally, staff tried paying a vehicle to drive around with someone announcing the program over loudspeakers, but that seemed to have less reach than radio. Some parents who were interviewed acknowledged how difficult it would have been to inform them in advance, noting that they themselves did not have working radios).
“It’s important for everybody in the community to have an awareness of what’s going on so that people feel comfortable sending their children to school to receive deworming medication and to increase the awareness of why deworming is a positive thing for communities,” said Grace Hollister, Director of the Deworm the World Initiative.
One of the most effective ways to inform parents seems to be through teacher training.
“You will find that a teacher is very well trusted in the society. Whatever the teacher says or tells the children, it is usually taken as a gospel truth. So when they tell people that their children will be dewormed, they (parents) are very sure of what the teacher has told them,” said Charles Ang’iela, District Education Officer for the sub-county of Rarieda in Western Kenya. “The teachers are playing a very crucial role.”
One parent, a 57-year old mother, seemed to echo that sentiment, suggesting teachers invite parents to discuss the program in advance. “If parents could have been called to a meeting in school and told of the deworming rather than sending the children, the information could have been received better by parents,” she said.
Children, parents, fear sickness brought on by pill
Beyond the concern over lack of prior information about the process, a few parents said their children had become sick from taking the de-worming pills, and teachers shared some anecdotes of students vomiting or becoming dizzy after taking the pill.
Julius, the father who expressed concern at his lack of information about the treatment, said the reason he was hesitant this time is that his daughter became sick from a previous deworming. “We thought it was malaria and we took her to the hospital,” where he was informed that her stomach had likely simply reacted poorly to the de-worming pills, he said. Julius said that this time around, his daughter didn’t attend school the day of the deworming. His 11-year-old son did, and Julius said he too experienced side effects of the pill.
“After he was given the pills at school, I was called (and told) that the boy had fallen sick on the way (home) and could not walk,” Julius said. “When I found him he was dizzy, and looked tired and unable to walk. I took him on my bicycle and brought him home. Upon arriving he started vomiting.”
Julius said that “When I was called that he had fallen sick because he had taken the drugs I was angry.” (The boy’s teacher, who escorted reporter Anthony Langat to the boy’s home later in the day, confirmed that on the day of the deworming the boy was sick to the point that he was unable to walk home by himself).
Jael, a 29-year-old mother of four who owns a few acres of land where she grazes cattle and raises chickens, said two of her children have also become sick from the deworming treatment. This deworming day, one of her sons refused to take the deworming pills at school. The boy’s teacher said he told her that his parents told him not to participate.
“I didn’t tell them not to take the pills. My son just feared the drug,” said Jael. “The other time my daughter felt dizzy and vomited so she feared and even the other siblings feared.”
A teacher at Ramba Primary school said that when he informed his students the day before that the deworming program was to take place, some of them reacted negatively due the side effects they’d experienced during previous dewormings. “‘What, PZQ?’” he recalled some of them saying. (PZQ refers to drug praziquantel which is administered in some parts of Kenya to treat schistosomiasis, a disease caused by a waterborne parasite. Albendazole treats the soilborne parasite helminths, which is more common in Kenya and therefore albendazole is administered to a larger target population than PZQ. Both drugs are approved by the World Health Organization.) “‘Over my dead body, I will not take that drug again. The way it reacted with me? No, no, no, I will not take it.’” A teacher at Gagra Primary School recalled a 2013 deworming in which about six of his students fainted, having taken PZQ.
Hollister said “there is the potential that there could be some side effects because of the medication, which can happen especially to children with very large worm loads, for example stomach pains.”
Kenya’s government recommends a detailed protocol for mitigating and responding to side effects brought on by the pill. Teachers undergo a half day of training during which they’re instructed to keep students in class for two hours after a de-worming so as to observe whether any suffer side effects. They’re given phone numbers for doctors who are on-call nearby and they’re advised to locate and find contact information for the closest hospitals in advance of a deworming.
A small minority of parents may fear deworming for religious reasons
Teachers and Evidence Action staff said on rare occasions some parents may quietly but intentionally not send their children to school on de-worming days because they object to it for religious reasons. No parents interviewed for this article expressed this sentiment, but staff presumed that those parents who do withhold their children from deworming for religious reasons may mistakenly associate the pill with infertility, contraceptives or abortion.
The head teacher at Ramba Primary school said about 10 of the school’s 900 or so students refused to take the pill. “Some were because of the parents and the denomination factor—some kind of cult within their church could not allow them to take the drug,” he said, referring to one local church called Roho Israel. “Some parents also tried to demonize praziquantel (PZQ) unfairly just because of the dizziness and all that. So there is that kind of fear and fright that some people would say that you may collapse and fall down.”
A teacher at Gagra Primary School said five of his female students were afraid that the drug could cause infertility. “I do not know how the parents were connecting it to that,” he said. (A teacher at another local primary school, Lwak, said he hadn’t heard of any cases of parents refusing to let their children participate for religious reasons.)
“You know, this school is sponsored by (the) Catholic Church, and the population around here is mostly Catholic,” said Dickson Akawo, another teacher at Gagra Primary school. He said that for some parents, “what is in their mind is that even this (drug) has some sort of sterilizing effect on them. So the parents can deter a child from coming during the deworming day.”
“What they do not know is that we can become so clever,” said Akawo, describing how he and other teachers stock extra pills on deworming day in order to deworm students on subsequent days because “we know the repercussions of somebody not being dewormed.”
Kenya director for Evidence Action’s Deworm the World Initiative Thomas Kisimbi said that while the program does allow for extra drugs to be administered to students who may have missed it, he said the program doesn’t sanction teachers’ administering the pill to children who they believe missed it the first time due to objections by them or their parents.
Kisimbi estimated that only a few hundred children out of the more than 6 million who are dewormed through the program refuse to take the pills for religious or cultural reasons. (Hollister suggested that even those few may be geographically confined to just certain sub-counties). Toward convincing such parents to agree to the deworming, Kisimbi said “The best we can do is provide information and for them to make an informed decision around that.” Beyond that, Kisimbi said some of those parents are bound to eventually notice that other children undergo health improvements as a result of the program and therefore come around.
Parent’s priorities
Many parents interviewed said that they were far more concerned with malaria than they were with parasites. Kisimbi said this is likely due to the fact that Malaria has the potential to kill, whereas worms only sicken their children. Some parents cited other health concerns such as lack of clean water, and one father said that while schools implement the deworming program, they often fail to address other, important health issues. “I even urge teachers in school to ensure that children wash their hands before they eat,” the parent said.
Ang’iela, the government official, agreed that parasites are one of many health issues that plague children here, but cautioned that parents aren’t always aware of how seriously parasites do affect their children.
“You know, the traditional approach to life here is still strong whereby some people think that water is water so long as it is lake water, fruit is a fruit so long as it is a fruit that nobody has touched,” Ang’iela said. “They only see that their hands are dirty when they see charcoal on it and even that charcoal they will just rub it off and continue eating. So there is a lot of work which still needs to be done from this traditional approach, but that in my opinion has continued exposing them to things like worms.”
For his part, Kisimbi said the most urgent challenge for Kenya’s deworming program isn’t addressing fear of side effects or parent’s concerns about the drug. Rather, it’s how to get students who are not enrolled in school—or whose schools are not legally registered with the government—in for treatment. And in the long run, for the program to continue, Kenya’s government will have to buy in further.
“We want this program to go on 10 years, 20 years beyond the [involvement] of Evidence Action and we want the government to be able to take greater responsibility over the program,” he said.
In most respects, the National School-Based Deworming Program seems to be working quite smoothly. There were no indications that children who attended school on the March deworming day were being missed, that pills were not arriving at the correct locations, or that teachers were insufficiently trained in administering them. For their part, teachers unanimously noted significant improvements in class attendance, presumably a result of the reduction in parasites affecting their students. And nearly all the parents interviewed—even those who wished they had been better informed about the program—said they viewed the program favorably and were glad their children were being dewormed in school. Many parents said their children were sick less often and missed fewer school days as a result of an improvement in their children’s health they attributed to the dewormings.
This reporting offers merely an anecdotal look into the question of whether parents are sufficiently informed about the program. Elsewhere, Evidence Action is already working to determine the extent of parent satisfaction and knowledge and how to improve both. In India they are conducting a human centered design process, interviewing parents, teachers and other actors in the deworming process to determine what precisely each want to know about the process and how to best deliver that information in a cost effective way. Similarly, in Uganda Evidence Action was working with a human-centered design firm to understand consumer behavior toward using chlorine in drinking water so as to better inform families with small children.
Jacob Kushner is an investigative journalist currently based in East/Central Africa and the Caribbean. He reports on development economics and inequality, foreign aid and investment, governance and innovation in developing nations. Anthony Langat is a Kenyan journalist living in Nairobi.
GiveWell’s response
We are excited that we were able to commission Mr. Kushner and Mr. Langat to visit the Deworm the World Initiative on the ground in Kenya. (We previously commissioned Mr. Kushner to report on GiveDirectly’s operations in Kenya.)
To date, the information we have on the Deworm the World Initiative has come from (a) Deworm the World Initiative staff, (b) monitoring and evaluation information provided to us by the Deworm the World Initiative, and (c) our own site visit to the Deworm the World Initiative in India. Mr. Kushner and Mr. Langat’s visit represents a different perspective on this program, and a chance to identify issues we couldn’t have otherwise.
The intensity of our research process gives us confidence that we’ve likely identified and considered key issues relevant to our top charities, but we’re also aware that we may have missed some. Our goal in asking Mr. Kushner and Mr. Langat to visit top charities in the field is to reduce the likelihood that we have missed any major problems. Mr. Kushner and Mr. Langat did not encounter any problems of this sort. Overall, their report is consistent with our expectations, and it bolsters our confidence in the Deworm the World Initiative.
Based on information from randomized controlled trials, we know that the pills used in combination deworming programs can sometimes cause relatively benign side effects (more in our intervention report). We are also not surprised that some parents say that they are less informed than they would like to be about the program.
We believe these issues are worth addressing, but we don’t think these costs come close to outweighing the benefits that the program provides.
Finally, we would not expect it to be hard to find similar examples were someone to interview recipients of other aid programs. It is worth keeping in mind that very few aid agencies have allowed themselves to be subject to this type of analysis, so we are grateful to support an organization like Evidence Action (which runs the Deworm the World Initiative) that is ready to open itself up to outside criticism.
Evidence Action’s response
To GiveWell and Jacob Kushner:
We appreciate the time that Jacob Kushner and Anthony Langat spent with Kenya’s National School-Based Deworming Programme, and with Evidence Action’s Deworm the World Initiative. Evidence Action provides technical assistance to this evidence-based program of the Kenyan government.
The writers’ time is paid for by GiveWell which we hope is clearly disclosed. Anthony Langat, one of the writers, visited Siaya county during the most recent school-based deworming day (Jacob Kushner did not visit the communities). During that day, 312,226 children were targeted for deworming in their schools; more recently, in early June 2015, more than 2 million children received treatment for parasitic worms in counties across Kenya. In the last school year (2013/14), 6.4 million children were dewormed as part of Kenya’s national program, and this year the program is on track to deworm a similar number of children.
We appreciate that the journalists are trying to find out what parents understand about the National School-Based Deworming Programme. We are naturally keenly interested in this aspect of the program as well, as teachers and parents are very important in making school-based deworming a success.
However, there are a few important points to remember:
1. Kenya’s National School-Based Deworming Programme is a government program, jointly implemented by the Ministry of Health and the Ministry of Education, Science, and Technology. The Kenyan government administers a variety of public health interventions through its public schools and has the full and final say on all such programs and how they are run. Such school health programs are promulgated by the government to ensure children have equitable access to quality health services; the National School-Based Deworming Programme is one such program. We provide technical support to the government on implementation of a cost-effective, high coverage program; assist in developing procedures and protocols; and serve as the coordinating body for the program.
2. We do not think that this article evaluates parental knowledge and involvement in a rigorous or scientific manner. Anthony Langat chatted with 13 parents – a highly anecdotal and non-representative sample. A few parents that he spoke to expressed confusion about the reason for why children received deworming treatment. We very much regret that these parents had a less than desirable experience. Incidentally, the side effects of receiving treatment for parasitic worms that one parent noted — nausea and vomiting — are actually associated with very high worm loads in children, making the child’s parasitic worm treatment all the more important.
In a representative survey of parents that we conducted at the beginning of the 2014/15 school year to assess parental awareness about deworming, we found that 21% of parents were informed about Deworming Day by their child’s teacher and 50% of parents by their child. Poster, radio, and community health extension workers (“CHEW” bar in the figure below) were other significant sources of information about Deworming Day (n=308 parents in 53 schools, Q3 2014, multiple answers were possible so total is >100%).
We continuously evaluate the quality of the outreach of the program with representative surveys (rather than anecdotal stories) and suggest to the ministries ways to improve community awareness.
As the authors noted, we are also engaged in a detailed qualitative study in India to better understand how to improve the government’s training cascade that conveys information about deworming all the way to every teacher, classroom, and families. We are nearing the end of this assessment that will yield important lessons for deworming programs in other countries as well.
3. Most importantly, we want to emphasize this point that got lost in the piece by Kushner and Langat: All who are involved in the National School-Based Deworming Programme are keenly concerned about the health and safety of children. The children and their well-being are all of our utmost priority. That is why we engage in school-based deworming in the first place. As a result, children’s safety always comes first.
There are two points to be made on this matter: First, the drug given for the majority of deworming treatment in Kenya, albendazole, has a remarkable safety record. Hundreds of millions of patients have taken the drug all over the world over the last 20 years. There is extensive data on the minimal adverse experiences. Side effects reported in the published literature are extremely low. Gastrointestinal side effects, the main side effect, occur with an overall frequency of less than 1%. A second drug, praziquantel, is administered in communities endemic for schistosomiasis – this is a subset of the overall communities treated by the program. It is recommended that children are provided with food prior to the administration of praziquantel, to minimize nausea and other minor side effects of the medication. Like albendazole, this drug is considered very safe for treating school-age children.
Second, the Kenyan government, with our assistance, has implemented a strict and rigorous ‘adverse event’ protocol. This is standard procedure for all national deworming programs and every last person involved in deworming is extensively trained on procedures. The protocol is here.
The Adverse Event Protocol contains these elements:
• Prescribes the protocol for preventing adverse events such as emphasizing the importance of not deworming sick children;
• Emphasizes importance of relevant training messages (i.e., making sure drinking water is available, that children chew albendazole tablets, and if treating for schistosomiasis, that children eat beforehand);
• Clarifies the difference between mild adverse events and serious adverse events and gives examples of each;
• Prescribes the actions to be taken for managing adverse events on and after deworming day;
• Explains the need to ensure children have eaten prior to praziquantel administration – the program also provides funds for school feeding on treatment days to schools in high-need areas;
• Establishes an emergency response team and phone trees to ensure school personnel connect with appropriate health providers and the affected child’s parents;
• Lists contact information for local medical officers and emergency response team;
• Includes reporting forms and protocol for mild and serious events.
Kids’ safety and health will always come first and that is why the government of Kenya has implemented this national program.
4. The detrimental health effects of parasitic worms are profound. 6.4 million kids have better lives because of being dewormed once a year. Now entering Kenya’s fourth year of the national program, the government of Kenya is closing in to eliminate parasitic worms in Kenya as a public health threat. Worm prevalence rates in Kenya rapidly dropped by more than 20% in the last two years to the point where the country is nearing just a few percentage points prevalence in several counties. The government of Kenya is very forward looking with this health policy and at the vanguard of protecting the lives of its children. Similarly to how the U.S. American South eliminated worms as a public health threat at the turn of the century, Kenya, as a fast-developing country, is protecting its children from the devastating health effects of parasitic worms. We applaud the government of Kenya in implementing what is a best buy in development and good public health policy.
We would urge GiveWell and the journalists to continue to be rigorous in evaluating the programs it recommends.
As more governments in countries with high worm burdens roll out national school-based deworming programs, we are ever so much closer to eliminating the public health threat of parasitic worms in children worldwide. Significant progress is being made — and hundreds of millions of children have a brighter future as a result.
### New deworming reanalyses and Cochrane review
Fri, 07/24/2015 - 13:54
On Wednesday, the International Journal of Epidemiology published two new reanalyses of Miguel and Kremer 2004, the most well-known randomized trial of deworming. Deworming is an intervention conducted by two of our top charities, so we’ve read the reanalyses and the simultaneously updated Cochrane review closely and are responding publicly. We still have a few remaining questions about the reanalyses, and have not had a chance to update much of the content on the rest of our website regarding these issues, but our current view is that these new papers do not change our overall assessment of the evidence on deworming, and we continue to recommend the Schistosomiasis Control Initiative and the Deworm the World Initiative.
Key points:
• We’re very much in support of replicating and stress-testing important studies like this one. We did our own reanalysis of the study in question in 2012, and the replication released recently is more thorough and identifies errors that we did not.
• We don’t think the two replications bear on the most important parts of the case we see for deworming. Both focus on Miguel and Kremer 2004, which examines impacts of deworming on school attendance; in our view, the more important case for deworming comes from a later study that found impacts on earnings many years later. The school attendance finding provides a possible mechanism through which deworming might have improved later-in-life earnings; this is important, because (as stated below) the mechanism is a serious question.
• However, the replications do not directly challenge the existence of an attendance effect either. One primarily challenges the finding of externalities (effects of treatment on untreated students, possibly via reducing e.g. contaminated soil and water) at a particular distance. The other challenges both the statistical significance and the size of the main effect for attendance but we believe is best read as finding significant evidence for a smaller attendance effect. Regardless, the results we see as most important, particularly on income later in life, are not affected.
• The updated Cochrane review seems broadly consistent with the earlier version, which we wrote about in 2012. We agree with its finding that there is little sign of short-term impacts of deworming on health indicators (e.g., weight and anemia) or test scores, and, as we have previously noted, we believe that this does undermine – but does not eliminate – the plausibility of the effect on earnings.
• In our view, the best reasons to be skeptical about the evidence for deworming pertain to external validity, particularly related to the occurrence of El Nino during the period of study, which we have written about elsewhere. These issues are not addressed in the recent releases.
• At the same time, because mass deworming is so cheap, there is a good case for donating to support deworming even when in substantial doubt about the evidence. This has consistently been our position since we first recommend the Schistosomiasis Control Initiative in 2011. Our current cost-effectiveness model (which balances the doubts we have about the evidence with the cost of implementing the program) is here.
• While we think that replicating and challenging studies is a good thing, it looks in this case like there was an aggressive media push – publication of two papers at once coinciding with an update of the Cochrane review and a Buzzfeed piece, all on the same day – that we think has contributed to people exaggerating the significance of the findings.
Details follow. We also recommend the comments on this issue by Chris Blattman (whose post has an interesting comment thread) and Berk Ozler.
The reanalyses of Miguel and Kremer 2004Aiken et al. 2015 and Davey et al. 2015 participated in a replication program hosted by the International Initiative for Impact Evaluation (3ie), in which Miguel and Kremer shared the data from their trials and Aiken, Davey and colleagues reanalysed them. Working paper versions of these reanalyses were published on the 3ie website dated October 2014, and Joan Hamory Hicks, Miguel and Kremer responded to both of them there. The World Bank’s Berk Ozler wrote a blog post in January reviewing the reanalyses and Hicks, Miguel, and Kremer’s replies.
Aiken et al. 2015 straightforwardly attempts to replicate Miguel and Kremer 2004’s results from data and code shared by the authors. They do a much more thorough job than when we attempted something similar in 2012, and find a number of errors.
Amongst a number of smaller issues, Aiken et al. find a coding error in Miguel and Kremer’s estimate of the externality impacts of deworming on students in nearby schools, in which Miguel and Kremer only counted the population of the nearest 12 schools. That coding error substantially changes estimates of the impact of deworming on both the prevalence of worm infections in nearby schools and the attendance of students in nearby schools, particularly estimates of the impact of further out schools, between 3 and 6 km away.
Aiken et al. state: “Having corrected these errors, re-analysis found no statistically significant indirect-between-school effect on the worm infection out- come, according to the analysis methods originally used. However, among variables used to construct this effect, a parameter describing the effect of Group 1 living within 0–3 km did remain significant, albeit at a slightly smaller size (original -0.26, SE 0.09, significant at 95% confidence level; updated -0.21, SE 0.10, significant at 95% confidence). The corresponding parameter for the 3–6- km distances became much smaller and statistically insignificant (original -0.14, SE 0.06, significant at 90% confidence; updated -0.05, SE 0.08, not statistically significant).” Aiken et al.’s supplementary material and Hicks, Miguel, and Kremer’s response to the 3ie replication working paper clarifies this explanation. In short, fixing the coding error does not much affect estimates of the externality within 3 km of treatment schools, but does significantly change estimated externalities between 3 and 6 km out, and following the original Miguel and Kremer 2004 process for synthesizing those estimates into an overall estimate of the cross-school externality on worm prevalence, the resulting figure is not statistically significant. However, if you simply drop the 3-6 km externality estimate, which is now negative and no longer statistically significant, then you continue to see a statistically significant cross-school externality (see the second to last row of Table 1).
The same coding error also affects estimates of the externality effect on school attendance, in a broadly similar way. Aiken et al. write: “Correction of all coding errors in Table IX thus led to the major discrepancies shown in Table 3. The indirect-between-school effect [on attendance] was substantially reduced (from +2.0% to -1.7%) with an increased standard error (from 1.3% to 3.0%) making the result non-significant. The total effect on school attendance was also substantially reduced (from 7.5% to 3.9% absolute improvement), making it only slightly more than one standard error interval away [from] zero, hence also non-significant.” The correction to the coding error significantly increases the standard error of the 3-6km externality estimate, which then increases the standard error of the overall estimate significantly. The increased uncertainty, rather than the change in the point estimate of the externality, is what drives the conclusion that the total effect on school attendance is no longer statistically significant. As in the prevalence externality case, dropping the 3-6km estimate altogether preserves a statistically significant cross-school externality (and total effect).
We are uncertain about what to believe about the externality terms at this point. It seems fairly clear that had Miguel and Kremer caught the coding error prior to publication, their paper would have ignored potential externalities beyond 3km, and the replication done today would have found that the analysis up to 3km was broadly right. The replication penalizes the paper for having initially (incorrectly) found externalities further out. While we continue to be worried about the possibility of specification searching in the externality terms, and we see a case for treating the initial paper as a form of preregistration, we don’t see it as at all obvious that we should penalize the Miguel and Kremer results in the way that Aiken et al. suggest.
The Aiken et al. replication, like the original paper, finds no evidence of an impact on test scores.
Davey et al. 2015 is a more interpretive reanalysis, in which the authors use a more “epidemiological” analytical approach to reanalyze the data. The abstract states:
Results: Quasi-randomization resulted in three similar groups of 25 schools. There was a substantial amount of missing data. In year-stratified cluster-summary analysis, there was no clear evidence for improvement in either school attendance or examination performance. In year-stratified regression models, there was some evidence of improvement in school attendance [adjusted odds ratios (aOR): year 1: 1.48, 95% confidence interval (CI) 0.88–2.52, P = 0.150; year 2: 1.23, 95% CI 1.01–1.51, P = 0.044], but not examination performance (adjusted differences: year 1: −0.135, 95% CI −0.323–0.054, P = 0.161; year 2: −0.017, 95% CI −0.201–0.166, P = 0.854). When both years were combined, there was strong evidence of an effect on attendance (aOR 1.82, 95% CI 1.74–1.91, P < 0.001), but not examination performance (adjusted difference −0.121, 95% CI −0.293–0.052, P = 0.169).
Conclusions: The evidence supporting an improvement in school attendance differed by analysis method. This, and various other important limitations of the data, caution against over-interpretation of the results. We find that the study provides some evidence, but with high risk of bias, that a school-based drug-treatment and health-education intervention improved school attendance and no evidence of effect on examination performance.
Reviewing the key conclusions in order:
• “In year-stratified cluster-summary analysis, there was no clear evidence for improvement in either school attendance or examination performance.” The results of the year-stratified cluster-summary analysis are substantively the same as the results of the year-stratified regression models that Davey et al. use (next bullet), with wider confidence intervals resulting from the reduction in sample size of caused by using unweighted school-level data (N=75). Table 2 reports a 5.5 percentage point impact on attendance in 1998 (corresponding to an odds ratio of 1.78) and a 2.2 percentage point impact for 1999 (corresponding to an odds ratio of 1.21). Davey et al.’s regressions find an odds ratio for 1998 of 1.77 (unadjusted, p=0.097) or 1.48 (adjusted, p=0.150) and for 1999 of 1.23 (unadjusted, p=0.047, or adjusted, p=0.044), i.e. the same point estimates with tighter confidence intervals. We don’t see it as surprising or problematic that collapsing a large cluster-randomized trials’ data to the cluster level results in a loss of statistical significance.
• “In year-stratified regression models, there was some evidence of improvement in school attendance [adjusted odds ratios (aOR): year 1: 1.48, 95% confidence interval (CI) 0.88–2.52, P = 0.150; year 2: 1.23, 95% CI 1.01–1.51, P = 0.044], but not examination performance (adjusted differences: year 1: −0.135, 95% CI −0.323–0.054, P = 0.161; year 2: −0.017, 95% CI −0.201–0.166, P = 0.854).” The lack of a result on exam performance echoes Miguel and Kremer 2004’s results. The “some evidence of improvement” result for school attendance is more striking, since the year 2 results are positive and statistically significant while the year 1 results are more positive but not statistically significant (due to a wider confidence interval). We read this as the test in year 1 being underpowered; treating years 1 and 2 as two independent randomized control trials, a fixed-effects meta-analysis would find a statistically significant overall effect.
• “When both years were combined, there was strong evidence of an effect on attendance (aOR 1.82, 95% CI 1.74–1.91, P < 0.001), but not examination performance (adjusted difference −0.121, 95% CI −0.293–0.052, P = 0.169).” These results accord with the Miguel and Kremer 2004 results.
• “We find that the study provides some evidence, but with high risk of bias, that a school-based drug-treatment and health-education intervention improved school attendance and no evidence of effect on examination performance.” The authors make two main arguments for the high risk of bias. First, they note (in Figure 3) that the correlation across schools between attendance rates and the number of attendance observations appears to differ across the treatment and control groups, with a broad tendency towards positive correlation between observations and attendance rates in the intervention group and a negative correlation in the control group, which would lead to estimates weighted by the number of observations to overestimate the true impact. However, we see three reasons not to regard this evidence as particularly problematic:
• Hicks, Miguel, and Kremer report conducting a test for the claimed change in the correlation and finding a non-statistically significant result (page 9). As far as we know, Davey et al. have not responded to this point, though we think it is possible that Hicks, Miguel, and Kremer’s test is underpowered.
• As noted above, the unweighted (year-stratified cluster-summary) estimates are not lower than the year-stratified regression models (which Davey et al. report do weight by observation–“we used random-effects regression on school attendance observations, an approach which gives greater weight to clusters with higher numbers of observations”), they just have wider confidence intervals. In order for the observed correlation to be biasing the weighted results, the weighted estimates would need to be meaningfully different from the unweighted ones, which is not the case here. Accordingly, we see little reason even in Davey et al.’s framework for preferring the less precise year-stratified cluster-summary results to the year-stratified regressions, which use significantly more information to reach virtually the same point estimates.
• Hicks, Miguel, and Kremer report results weighted by pupil instead of observation (Table 3), and find results strongly consistent with their attendance-weighted results, without the risk of being biased by attendance observations. However, their results imply treatment effects that are larger than the odds ratios reported in Davey et al.’s year-stratified regression models, which Davey et al. report do weight by observation. We’re not sure what to make of this discrepancy, and we haven’t see Davey et al. respond on this point.
Second, and relatedly, Davey et al. note that the estimated attendance effect in the combined years analysis is larger than in either of the underlying years, and they suggest that the change is due to the inclusion of a before-after comparison for Group 2 (which switched from control in year one to treatment in year two) in the purportedly experimental analysis. We see this concern as more plausible, and don’t have a conclusive view on it at this point, but we think it would affect the magnitude of the observed effect rather than its existence (since we read the year-stratified regressions, which are not subject to this potential bias, as supporting an impact on attendance).
To summarize, we see no reason even based on Davey et al.’s own choices to prefer the year-stratified cluster-summary, which discards a significant amount of information, to the year-stratified regression models, which together point to a statistically significant impact on attendance. Hicks, Miguel, and Kremer make a variety of other arguments against decisions made by Davey et al., and they, along with Blattman and Ozler, argue that many of the changes are jointly necessary to yield non-significant results. We haven’t considered this claim fully because we see the Davey et al. results as supporting a statistically significant attendance impact, but if we turn out to be wrong about that, it would be important to more fully weigh the other deviations they make from Miguel and Kremer’s approach in reaching a conclusion.
School attendance data has never played a major role in our view about deworming (more on our views below), but we see little reason based on these re-analyses to doubt the Miguel and Kremer 2004 result that deworming significantly improved attendance in their experiment. We see much more reason to be worried about external validity, particularly related to the occurrence of El Nino during the period of study, which we have written about elsewhere.
The new Cochrane ReviewThe new Cochrane review on deworming reaches largely the same conclusions as the 2012 update, which we have discussed previously.
The new review incorporates the Aiken et al. and Davey et al. replications of Miguel and Kremer 2004 and the results of the large DEVTA trial, but continues to exclude Baird et al. 2011, Croke 2014, and Ozier 2011.
We agree with the general bottom line that there is little evidence for any biological mechanism linking deworming to longer term outcomes, and that that should significantly reduce one’s confidence in any claimed long-term effects of deworming. However, the Cochrane authors make some editorial judgments we don’t agree with.
They state:
• “The replication highlights important coding errors and this resulted in a number of changes to the results: the previously reported effect on anaemia disappeared; the effect on school attendance was similar to the original analysis, although the effect was seen in both children that received the drug and those that did not; and the indirect effects (externalities) of the intervention on adjacent schools disappeared (Aiken 2015).” As described above, in summarizing the results of Aiken et al. 2015, we would have noted that estimated cross-school externalities remain statistically significant in the 0-3km range.
• “The statistical replication suggested some impact of the complex intervention (deworming and health promotion) on school attendance, but this varied depending on the analysis strategy, and there was a high risk of bias. The replication showed no effect on exam performance (Davey 2015).” We think it is misleading to summarize the results as “[impact on school attendance] varied depending on the analysis strategy, and there was a high risk of bias.” Our read is that Davey et al. reported some analyses in which they discarded a significant amount of information and accordingly lost statistical significance, but found attendance impacts that were consistently positive and of the same magnitude (and statistically significant in analyses that preserved information).
• “There have been some recent trials on long-term follow-up, none of which met the quality criteria needed in order to be included in this review (Baird 2011; Croke 2014; Ozier 2011; described in Characteristics of excluded studies). Baird 2011 and Ozier 2011 are follow-up trials of the Miguel 2004 (Cluster) trial. Ozier 2011 studied children in the vicinity of the Miguel 2004 (Cluster) to assess long-term impacts of the externalities (impacts on untreated children). However, in the replication trials (Aiken 2014; Aiken 2015; Davey 2015), these spill-over effects were no longer present, raising questions about the validity of a long-term follow-up.” This last sentence seems problematic from multiple perspectives:
• Davey et al. 2015 does not mention or look for externalities or spill-over effects.
• Aiken et al. 2015 replicates Miguel and Kremer 2004’s finding of a statistically significant externality within 0-3 km, so summarizing it as “these spill-over effects were no longer present” seems to be an over-simplification.
• The lack of geographic externality is a particularly unpersuasive explanation for excluding Ozier 2011, which focuses on spill-over effects to younger siblings of children who were assigned to deworming, especially given that Aiken et al. confirm Miguel and Kremer’s finding of within-school externalities (which seems more similar to the siblings case). More generally, the fact that one study failed to find a result seems like a bad reason to exclude a follow-up study to it that did.
More generally, we agree with many of the conclusions of the Cochrane review, but excluding some of the most important studies on a topic because they eventually treated the control group seems misguided. Doing so structurally excludes virtually all long-term follow-ups, since they are often ethically required to eventually treat their control groups.
Our case for dewormingAs we wrote in 2012, the last time the Cochrane review on deworming was updated, our review of deworming focuses on three kinds of benefits:
• General health impacts, especially on haemoglobin. We currently conclude, partly based on the last edition of the Cochrane review: “Evidence for the impact of deworming on short-term general health is thin, especially for soil-transmitted helminth (STH)-only deworming. Most of the potential effects are relatively small, the evidence is mixed, and different approaches have varied effects. We would guess that deworming populations with schistosomiasis and STH (combination deworming) does have some small impacts on general health, but do not believe it has a large impact on health in most cases. We are uncertain that STH-only deworming affects general health.” This last claim continues to be in line with Cochrane’s updated finding of no impact of STH-only deworming on haemoglobin and most other short-term outcomes.
• Prevention of potentially severe effects, such as intestinal obstruction. These effects are rare and play a relatively small role in our position on deworming.
• Developmental impacts, particularly on income later in life. The new Cochrane review continues to exclude the studies we see as key to this question. Bleakley 2004 is outside of the scope of the Cochrane review because it is not an experimental analysis, and Baird et al. 2011 is excluded because its control group eventually received treatment. However, as before, the Cochrane review does discuss Miguel and Kremer 2004, which underlies the Baird et al. 2011 follow-up; in their assessment of the risk of bias in included studies, Miguel and Kremer 2004 continues to be the worst-graded of the included trials. We also do not think that the Aiken et al. or Davey et al. papers should substantially affect our assessment of the Baird et al. 2011 results. Aiken et al.’s main finding is about the coding error affecting the 3-6km externality terms. I’m not clear on whether the coding error in the construction of the externality variable extends to Baird et al. 2011, but, regardless, the results we see as most important, particularly on income, do not rely on the externality term. Davey et al.’s key argument is against the combined analysis in which Group 2 is considered control in year one and treatment in year two. I remain uncertain about whether this worry is fundamentally correct, but Baird et al. is not subject to it because their estimates treat Group 2 as consistently part of the treatment group.
Nonetheless, we continue to have serious reservations about these studies and would counsel against taking them at face value.
We think it’s a particular mistake to analyze the evidence in this case without respect to the cost of the intervention. Table 4 of Baird et al. 2012 estimates that, not counting externalities, their results imply that deworming generates a net present value of $55.26, against an average cost of$1.07, i.e. that deworming is ~50 times more effective than cash transfers. We do not think it is appropriate to take estimates like these at face value or to expect them to generalize without adjustment, but the strong results leave significant room for cost-effectiveness to regress to the mean and still beat cash. In our cost-effectiveness model, we apply a number of ad-hoc adjustments to penalize for external validity and replicability concerns, and most of us continue to guess that deworming is more cost-effective than cash transfers, though of course these are judgment calls and we could easily be wrong.
The lack of a clear causal mechanism to connect deworming to longer term developmental outcomes is a significant and legitimate source of uncertainty as to whether deworming truly has any effect, and we do not think it would be inappropriate for more risk-averse donors to prefer to support other interventions instead, but we don’t agree with the Cochrane review’s conclusion that it’s the long-term evidence that is obviously mistaken in this case. (We have noted elsewhere that most claims for long-term impact seem to be subject to broadly similar problems.)
The importance of data sharing and replicationWe continue to believe that it is extremely valuable and important for authors to share their data and code, and we appreciate that Miguel and Kremer did so in this case. We’re also glad to see the record corrected regarding the 3-6km externality terms in Miguel and Kremer 2004. But our overall impression is that this is a case in which the replication process has brought more heat than light. We hope that the research community can develop stronger norms supporting data sharing and replication in the future.
The post New deworming reanalyses and Cochrane review appeared first on The GiveWell Blog.
### Change of leadership at Evidence Action
Fri, 07/17/2015 - 14:53
Evidence Action — which runs the Deworm the World Initiative, one of GiveWell’s top charities — announced today that Alix Zwane will be stepping down as Executive Director on August 3. She is leaving to join the Global Innovation Fund as CEO. Laliteswar Kumar, currently Director, Africa Region, will serve as Interim Executive Director. Dr. Zwane expects to remain involved in the organization until August. Evidence Action aims to identify a new Executive Director within a few months.
Dr. Zwane’s departure does not change our recommendation of the Deworm the World Initiative and we would guess that it will not be a significant factor in our view of the Deworm the World Initiative in the future. Our recommendation is largely based on the strength of evidence and cost-effectiveness of its program and its track record of carrying out that program.
If this change has more of an effect on our funding recommendations than we expect, this will likely be due to one or more of the following factors:
• We have limited experience with changes in senior leadership at our top charities. All of our other current top charities are led by the organizations’ founders. It is possible that the new Executive Director will have a different vision for the organization or may be unable to generate similar results.
• Strong communication with each of our top charities is a key part of our research process. We have found Dr. Zwane particularly easy to communicate with. Although we have had substantial communication with other staff, much of our communication with the Deworm the World Initiative, particularly around issues related to room for more funding, has been with her. It is possible that communicating with other staff will not be as smooth and could lead to lower confidence in the Deworm the World Initiative’s work.
• Evidence Action’s new Executive Director may have a different approach to transparency. Evidence Action has been highly transparent to date, a quality which we have found to be relatively rare among charities. Dr. Zwane told us that she does not expect Evidence Action’s approach to transparency to change.
• We would not be surprised if Evidence Action fails to identify a new Executive Director within a few months. This search, particularly if it takes a while, could distract from oversight of current programs and planning for the future.
Overall, our impression is that Dr. Zwane has been a highly effective leader of Evidence Action and her departure risks disruptions that could lead to us changing our view of the organization, though we would guess that this will not be the case.
In addition to recommending the Deworm the World Initiative, we have also recommended that Good Ventures provide funding for Evidence Action Beta, with the goal of supporting the development of new top charities (e.g., a planning grant and a grant for a seasonal income support project).
Dr. Zwane’s departure may have more of an effect on our work with Evidence Action Beta, where all of our communication to date has been with her, where the track record is more limited, and where our positive view of Dr. Zwane’s leadership plays a larger role in our confidence in the program.
Finally, the Global Innovation Fund is an organization that aims to “invest in social innovations that aim to improve the lives and opportunities of millions of people in the developing world” and has significant resources (at least $200 million over the next five years) at its disposal. We are excited about its future under Dr. Zwane’s leadership. The post Change of leadership at Evidence Action appeared first on The GiveWell Blog. ### Top charities’ room for more funding Fri, 04/03/2015 - 11:55 In December, we published targets for how much money we hoped to move to each of our top four charities, with the expectation of revisiting these targets mid-year: In past years, we’ve worked on an annual cycle, refreshing our recommendations each December. This year, because we anticipate closing (or nearly closing) the funding gaps of some of our top charities during giving season and moving a significant amount of money (~$5 million) after giving season before our next scheduled refresh, we plan to update our recommendations based solely on room for more funding in the middle of next year. We’re tentatively planning to do this on April 1st, the earliest we will realistically be able to post an update on charities’ ongoing funding needs that accounts for the funds they will receive over the next few months.
These targets were based on a guess that GiveWell-influenced donors would give $7.5 million to our top four charities in December 2014 to March 2015 (excluding Good Ventures and a$1 million gift to SCI from an individual that we knew about prior to setting the targets). Our actual money moved for this period was about $8.7 million to the top four charities, plus$0.4 million that we can allocate at our discretion and have not yet allocated.
Over the past couple of months, we have spoken with each of our top charities to get updates on how much funding they have received from GiveWell-influenced and other donors and their current room for more funding. In sum, the amounts that our top charities raised as a result of our recommendations were broadly consistent with what we expected and there have not been any significant updates to the charities’ room for more funding. Therefore, we are not revising our recommended allocation (for every $7.5 given,$5 to AMF, $1 to GiveDirectly,$1 to SCI, and $0.5 to Deworm the World) at this time. Summary for December 2014 to March 2015 (all figures in USD millions): Charity Target from individuals (Dec 2014) Max from individuals (Dec 2014) Actual from individuals Summary Against Malaria Foundation 5 5 4.5 Close to target Schistosomiasis Control Initiative 1 1 1.1 On target Deworm the World Initiative 0.5 1 0.7 Reached target but did not exceed max GiveDirectly 1 25 2.4 Reached target but did not exceed max Against Malaria Foundation (AMF) Donations to AMF from GiveWell-influenced donors were short of our target by about$0.5 million. AMF is currently in discussions about funding several large-scale bednet distributions. It is our understanding that the amount of funding AMF has available is a limiting factor on both how many nets it can provide to each distribution it is considering and on how many discussions it can pursue at one time.
We have written before about AMF’s lack of track record at signing agreements for and successfully completing large-scale distributions with partners other than Concern Universal in Malawi. In 2014, AMF signed its first agreement to fund a large-scale distribution with another partner in a different country: IMA World Health in the province of Kasaï Occidental in the Democratic Republic of the Congo (more). The Kasaï Occidental distribution was scheduled to be completed in late 2014. We have not yet seen results from this distribution, and AMF’s track record of completing and reporting on successful large-scale distributions remains limited. AMF expects to be able to share information from this distribution in the next few weeks.
We plan to continue recommending funds to AMF for now and to reassess AMF’s progress later in the year.
GiveDirectly
In December, we noted that GiveDirectly could likely absorb up to $25 million in funding from GiveWell-influenced individuals. We tracked$2.4 million to GiveDirectly from these individuals and it is possible that GiveWell influenced several million dollars more – between February 2014 and January 2015, GiveDirectly received several million dollars from individuals who did not provide information on how they learned about the organization. We continue to believe that GiveDirectly has substantial room for more funding.
Schistosomiasis Control Initiative (SCI)
In December we set a target of SCI receiving $1 million from GiveWell-influenced individual donors and set the max we aimed for SCI to receive from this group at the same amount. We estimate that SCI received about$1.1 million based on GiveWell’s recommendation.
We have fairly limited information on SCI’s room for more funding because (a) SCI recently began working with a new financial director and is in the process of reorganizing its financial system, and so has not yet been able to provide us with a comprehensive financial update; and (b) SCI held a meeting on March 24 to allocate unrestricted funds and sent us a report from that meeting recently, which we have not yet had time to review. We will be following up with SCI to learn more about its plans and funding needs.
We plan to continue recommending funds to SCI because (a) our room for more funding estimates for SCI are rough and we believe there is a reasonable chance that SCI has room for more funding; (b) we expect to learn more about SCI’s room for more funding in the next few months; and (c) we do not expect SCI to receive a large amount of funding due to our recommendation over the next few months (since most donors give in December).
Deworm the World Initiative, which is led by Evidence Action
In December we set a target of $0.5 million from GiveWell-influenced individual donors to Deworm the World and set the max we aimed for Deworm the World to receive from this group at$1 million. We estimate that Deworm the World received about $0.66 million based on GiveWell’s recommendation. It’s our understanding that Deworm the World may have opportunities over the next few years to support up to three deworming programs which could each cost several million dollars. We are in the process of following up with Deworm the World to learn more about how likely these programs are to require unrestricted funding from Deworm the World and when funding might become a bottleneck to moving forward with these programs. We plan to continue recommending funds to Deworm the World. The post Top charities’ room for more funding appeared first on The GiveWell Blog. ### Our updated top charities Mon, 12/01/2014 - 12:35 Our top charities are (in alphabetical order): We have recommended all four of these charities in the past. We have also included four additional organizations on our top charities page as standout charities. They are (in alphabetical order): In the case of ICCIDD, GAIN-USI, and DMI, we expect to learn substantially more in the coming years (both through further investigation and through further progress by the organizations); we see a strong possibility that these will become top-tier recommended charities in the future, and we can see reasons that impact-minded donors could choose to support them today. Ranking our top charities against each other is difficult and laden with judgment calls, particularly since: • Our cost-effectiveness analyses are non-robust, and reasonable people could reach a very wide variety of conclusions regarding which charity accomplishes the most good per dollar. • The charity we estimate as having the weakest cost-effectiveness (GiveDirectly) is also the one that we feel has the strongest organizational performance and the most direct, robust connection between donations and impact. • We do not currently feel highly confident in our cost-effectiveness estimates. We changed a number of inputs to our estimates recently. We did not have time to fully consider and vet them, and we plan to put more work into these estimates over the next few months. We do not expect our estimates to change significantly but given the fact that we have been updating them very recently, we would not be surprised if they do. We plan to publish a post soon detailing the major changes and most debatable assumptions in our current estimates. We consider the lateness of major revisions to this year’s estimates a shortcoming (and will be adding it to our mistakes page when we do our annual review). • This year we expect to influence a significant amount of donations. In some past years, we’ve been able to assume that each dollar of donations to an organization is about equally effective. This year, we could easily see one or more of our top charities reach the point of diminishing returns to additional donations and/or close its funding gap entirely. • We’ve been trying to predict and coordinate donations from Good Ventures, from individual donors, and from major donors who have given us private information about their plans. In so doing, we’ve run into game-theoretic challenges. If two donors are interested in funding the same organization, each has an incentive to downplay his/her interest in the hopes that the other will provide more of the funding. We’ve been trying to avoid reinforcing such incentives. We discuss how these considerations affected our targets below, and we plan to elaborate on this issue in a future post. • In past years, we’ve worked on an annual cycle, refreshing our recommendations each December. This year, because we anticipate closing (or nearly closing) the funding gaps of some of our top charities during giving season and moving a significant amount of money (~$5 million) after giving season before our next scheduled refresh, we plan to update our recommendations based solely on room for more funding in the middle of next year. We’re tentatively planning to do this on April 1st, the earliest we will realistically be able to post an update on charities’ ongoing funding needs that accounts for the funds they will receive over the next few months. This plan also raises questions about donor agency and coordination; we plan to discuss this in a future post.
We’ve tried to balance these considerations against each other and come up with an “ideal allocation” of the ~$7.5 million in estimated “money moved” we expect to influence (not counting grants from Good Ventures) over the next 4 months. Details are below. Based on this allocation, for any donors looking to give as we would, we recommend an allocation of$5 to AMF (67%), $1 to SCI (13%),$1 to GiveDirectly (13%) and $.50 to DtWI (7%) for every$7.50 given.
Good Ventures is planning to make grants of $5 million to each of AMF and GiveDirectly,$3 million to SCI, and $250,000 to DtWI. Good Ventures also plans to make grants of$250,000 to each of the standout organizations. We advised on these grants a few weeks ago, and did so while weighing our funding targets for each charity and forecasts of what other donors are likely to do; parts of our picture have since changed, and these grants do not represent the allocation we would advise donors to use nor do they reflect our views about the relative ranking of these organizations. We made sure to settle on and announce these grants before giving season so that no donor would have to grapple with questions about Good Ventures’s likely actions (more in our upcoming post on donor coordination), and Good Ventures will not be making additional grants to these charities in the near to medium future (6-12 months) unless there are substantive updates on things like evidence bases and capacity for absorbing money (i.e. Good Ventures will not be giving further simply in response to new information about donor behavior over the next 4 months).
Below we provide:
• Additional detail on each of these eight organizations, including (for past recommendations) major changes over the past year, strengths and weaknesses for each, and our understanding of each organization’s room for more funding (which forms the basis for our funding targets and recommended allocation). More
• The thinking behind our funding targets and recommended allocation. More
• The process we followed that led to these top charities. More
• Brief notes on giving now vs. giving later and giving to GiveWell vs. our top charities. More
Conference call to discuss our recommendationsWe are planning to hold a conference call at 5:30pm EST on Wednesday, December 3rd to discuss our recommendations and answer questions. If you’d like to join the call, please register using this online form. If you can’t make this date but would be interested in joining another call at a later date, please indicate this on the registration form.
Top charitiesWe present information on our top charities in alphabetical order.
Against Malaria Foundation (AMF)Our full review of AMF is here.
Important changes in the last 12 months
We named AMF our #1-ranked charity at the end of 2011. Over the next 2 years, AMF received more than $10 million on the basis of our recommendation but struggled to identify opportunities to use the funds it had received. At the end of 2013, we announced that we planned not to recommend additional donations to AMF until it committed the bulk of its current funds. This did not reflect a negative view of AMF; instead it reflected room for more funding related issues. More detail in this blog post. In 2014, AMF finalized several distributions in Malawi and the Democratic Republic of the Congo (DRC) with three different implementing partners (two of which account for the bulk of the nets to be distributed). In 2014, it committed approximately$8.4 million to distributions which will take place before January 1, 2016 (some of which have already begun) and now has $6.8 million available for future distributions.$1.7 million of this is committed to a distribution scheduled for 2017 (and could potentially be allocated to distributions taking place sooner). Excluding the 2017 distribution, AMF has committed approximately $11.2 million to distributions in its history. AMF continued to collect and share follow up information on its programs. We covered these reports in our August 2014 AMF update. Funding gap AMF requires access to funding in order to negotiate deals because it cannot initiate discussions with potential partners unless it is confident that it will have sufficient funding to support its future agreements. The funding it currently holds would enable it to fund approximately 3 distributions at a scale similar to what it has funded recently. AMF has told us that it has a pipeline of possible future net distributions that add up to$36 million (details in our review).
We see some reason for caution in thinking about AMF’s room for more funding. It has made strong progress on being able to negotiate distributions and commit funds. However, as of today there have only been two large-scale distributions that have moved forward far enough for data to be available. Both of these are significantly smaller than distributions AMF has recently or will soon fund, and both are in the same area with the same partner as each other. Some of the recently negotiated distributions could prove more challenging (since they are in DRC).
If AMF received an additional $10 million in total over the next 4 months, it would have about twice as much funding available as the total it committed to large-scale distributions in 2014. (As stated above, it committed$8.4 million to distributions taking place before 2017 and has $6.8 million available for further commitments.) If it received$25 million, it would have about 4 times that total. 2-4 times past distributions seems like a range that would allow AMF to do significantly more than it has in the past, without going so far beyond its past capacity as to raise serious scaling concerns.
We believe that $10 million total (the low end of that range), which means$5 million after the Good Ventures grant, is an appropriate target after which further donations are likely better off going to other charities.
Key considerations:
• Program impact and cost-effectiveness. Our best guess is that distributing bednets is in the same cost-effectiveness range as deworming programs and more cost-effective than cash transfers by a factor of 5-10. Our estimates are subject to substantial uncertainty. (Note: all our cost-effectiveness analyses are available here. Our file for bednets is here (.xls), and the comparison to deworming, cash transfers and iodine is here (.xls).)
• Directness and robustness of the case for impact. We believe that the connection between AMF receiving funds and those funds helping very poor individuals is less direct than GiveDirectly’s and more direct than SCI’s or DtWI’s. The uncertainty of our estimates is driven by a combination of AMF’s challenges historically disbursing the funds it receives and a general recognition that aid programs, even those as straightforward as bednets, carry significant risks of failure via ineffective use of nets, insecticide resistance or other risks we don’t yet recognize relative to GiveDirectly’s program. AMF conducts extensive monitoring of its program; these results have generally indicated that people use the nets they receive.
• Transparency and communication. AMF has been extremely communicative and open with us. We feel we have a better understanding of AMF than SCI and worse than GiveDirectly. In particular, were something to go wrong in one of AMF’s distributions, we believe we would eventually find out (something we are not sure of in the case of SCI), but we believe our understanding would be less quick and complete than it would be for problems associated with GiveDirectly’s program (which has more of a track record of consistent intensive followup).
• Risks:
• Two of AMF’s recent distributions (and much of its future pipeline) will take place in the DRC. Our impression is that the DRC is a particularly difficult place to work, and it is possible that AMF’s distributions there will struggle or fail. We view this as a moderate risk.
• We are not highly confident that AMF will be able to finalize additional distributions and do so quickly. AMF could struggle again to agree to distribution deals, leading to long delays before it spends funds. We view this as a relatively minor risk because the likely worst case scenario is that AMF spends the funds slowly (or returns funds to donors).
• We remain concerned about the possibility of resistance to the insecticides used in bednets. There don’t appear to be major updates on this front since our 2012 investigation into the matter; we take the lack of major news as a minor positive update.
A note on how quickly we expect AMF to spend the funds it receives. AMF works by sourcing, evaluating and negotiating deals for net distributions. This process takes time and requires AMF to have significant access to funding – it cannot approach a country to begin negotiations unless it is confident that it will have sufficient funding to pay for the nets it offers. We would not be surprised if AMF fails to reach additional deals in the next 12 months. We do expect it to commit the majority of its available funds (that it will have as of this coming January) within the next 24 months. If AMF does not make much progress in committing funds in the next 12 months, we will adjust our recommendation for 2015 accordingly, possibly recommending a lower target level of funds or suspending the recommendation entirely (depending on the specifics of the situation).
Our full review of AMF is here.
Deworm the World Initiative, (DtWI), led by Evidence ActionOur full review of DtWI is here.
Important changes in the last 12 months
Dr. Kevin Croke released a new study of a randomized controlled trial of a deworming program showing large, long-term impacts from deworming programs (for more, see this blog post). This study is a significant positive update on the impacts of deworming and increased our confidence that deworming programs have significant long-term impacts.
DtWI spent the funds it received due to GiveWell’s recommendation largely as we anticipated; it now has some (though limited) room for more funding.
In 2014, two events affected DtWI’s projection of the additional funding it would require to scale up in India:
• The Children’s Investment Fund Foundation (CIFF), a major foundation that had supported DtWI’s programs in Kenya, agreed to a 6-year, $17.7 million grant to support DtWI’s expansion to additional states in India and technical assistance to the Government of India for a national deworming program. With these funds, DtWI does not require significant additional funding to support its India expansion. • The new Indian government expressed interest in conducting a single deworming day nationally with increased national attention and resources. Advocating for such a policy and assisting the national government in creating a plan became the major focus of DtWI’s India work in 2014, which both reduced the amount of time it was able to spend generating interest in heavy DtWI involvement in new states and also required little funding since there were few costs of that project aside from staff time. We see this as positive news regarding DtWI’s potential impact; it may simply reduce DtWI’s further need for funds from individual donors. Together, these changes led DtWI to the conclusion that funding is no longer the bottleneck to reaching more people in India. (More detail in this blog post.) Funding gap DtWI told us that it seeks$1.3 million over the next two years. We expect it to allocate approximately 30% of the additional funds it receives for work related to expanding school-based, mass deworming programs (including related operating and impact evaluation expenses) and will allocate other funds to priorities that are less directly connected to expanding and evaluating deworming programs (investigating ways to combine other evidence-based programs with deworming rollouts, supplementing a project supported by another funder).
Good Ventures has announced a $250,000 grant to DtWI, leaving it with$1.05 million in remaining room for more funding over the next two years. We would ideally like DtWI to receive an additional $500,000 (for a total of$750,000) to provide it with more than half of its two-year gap.
Key considerations:
• Program impact and cost-effectiveness. Our current calculations indicate that DtWI-associated deworming, when accounting for DtWI’s potential “leverage” in influencing government funds, has extremely strong cost-effectiveness, better than bednets and 10-20 times better than cash transfers. Our estimates are subject to substantial uncertainty. (Note: all our cost-effectiveness analyses are available here. Our file for deworming, cash transfers and iodine is here (.xls).)
• Directness and robustness of the case for impact. DtWI doesn’t carry out deworming programs itself; it advocates for and provides technical assistance to governments implementing deworming programs, making direct assessments of its impact challenging. There are substantial potential advantages to supporting such an organization, as it may be able to have more impact per dollar by influencing government policy than by simply carrying out programs on its own, but this situation also complicates impact assessment. While we believe DtWI is impactful, our evidence is limited, and in addition, there is always a risk that future expansions will prove more difficult than past ones. In addition, DtWI is now largely raising funds to support research projects that are not directly connected to short-term implementation of deworming programs. We do not have a view about the value of these research projects.
• Transparency and communication. DtWI has been communicative and open with us. We have only recommended DtWI for one year and therefore have less history with it than AMF, GiveDirectly, or SCI, but we believe that were something to go wrong with DtWI’s work, we would be able to learn about it and report on it.
• Risks:
• DtWI is part of a larger organization, Evidence Action, so changes that affect Evidence Action (and its other programs) could indirectly impact DtWI. For example, if a major event occurs (either positive or negative) for Evidence Action, it is likely that it would reduce the time some staff could devote to DtWI.
• Most of DtWI’s funding is in the form of restricted funding from large, institutional funders. We are not sure how DtWI’s plans would change in response to a large funder offering it significant support to undertake a project not directly in line with its current plans.
Our full review of DtWI is here.
GiveDirectlyOur full review of GiveDirectly is here.
Important changes in the last 12 months
GiveDirectly continued to scale up significantly, utilizing most of the funding it received at the end of last year. It continued to share informative and detailed monitoring information with us. Overall, it grew its operations while maintaining high quality.
In June, three of its board members launched Segovia, a for-profit company aimed at improving the efficiency of cash transfer distributions in the developing world (see our blog post on Segovia for more information).
GiveDirectly is working with other researchers to begin a very large study on cash transfers and the impact they have on broader economic factors such as inflation and job growth. This study will include a long-term follow up component as well. GiveDirectly told us that the ideal sample size for this study, which is randomized at the village level, would require $15 million for cash transfers. Baseline data collection for the study began in August 2014. GiveDirectly has preregistered its plans for measurement and analysis (more information in our review). Funding gap GiveDirectly has scaled up significantly over the past year, spending (or committing to spend by enrolling recipients) approximately$13.6 million of the $17.4 million it received last year. (It also allocated an additional$1.8 million to other organizational costs.) It now believes that it could spend up to $40 million in a year. We believe this is a reasonable cap for GiveDirectly and would not hesitate to see it receive this amount. However, due to other charities’ significantly superior estimated cost-effectiveness, we are seeking larger total amounts for them. We hope that GiveDirectly will receive at least$1 million from individual donors (excluding Good Ventures) this giving season as a result of our recommendation.
Key considerations:
• Program impact and cost-effectiveness. Our best guess is that deworming or distributing bednets achieves 5-10 times more humanitarian benefit per dollar donated than cash transfers. Our estimates are subject to substantial uncertainty. (Note: all our cost-effectiveness analyses are available here. Our file for deworming, cash transfers and iodine is here (.xls).)
• Directness and robustness of the case for impact. GiveDirectly collects and shares a significant amount of relevant information about its activities. The data it collects show that it successfully directs cash to very poor people, that recipients generally spend funds productively (sometimes on food, clothing, or school fees, other times on investments in a business or home infrastructure), and that it leads to very low levels of interpersonal conflict and tension. We are more confident in the impact of GiveDirectly’s work than in that of any of the other charities discussed in this post.
• Transparency and communication. GiveDirectly has always communicated clearly and openly with us. It has tended to raise problems to us before we ask about them, and we generally believe that we have a very clear view of its operations. We feel more confident about our ability to keep track of future challenges than with any of the other charities discussed in this post.
• Risks: GiveDirectly has scaled (and hopes to continue to scale) quickly. Thus far, it has significantly increased the amount of money it can move with limited issues as a result. The case of staff fraud that GiveDirectly detected is one example of an issue possibly caused by its pace of scaling, but its response demonstrated the transparency we expect.
Our full review of GiveDirectly is here.
Schistosomiasis Control Initiative (SCI)Our full review of SCI is here.
Important changes in the last 12 months
As discussed above regarding DtWI, Dr. Kevin Croke released a new study of a randomized controlled trial of a deworming program showing large, long-term impacts from deworming programs (for more, see this blog post). This study is a significant positive update on the impacts of deworming and increased our confidence that deworming programs have significant long-term impacts.
We continued our work revisiting SCI’s case for impact (detailed here). There appear to have been major problems with some, though not all, of the studies we had relied on (pre-2013) to assess SCI’s impact. SCI shared some additional monitoring information with us which supported the conclusion that its programs have generally succeeded, though these reports have significant limitations.
We also reviewed the papers of several academics who had previously been critical of SCI’s activities. We found little in this literature to change our views on SCI’s programs.
We spent significantly more time with SCI in 2014 (including a 3-day visit to its headquarters in London) than we had in previous years, aiming to improve our understanding of its operations and spending. The picture that emerged was more detailed though largely consistent with what we believed before. Specifically:
• We are less confident in our understanding of how SCI has spent unrestricted funds. At the end of 2013, we believed we had a relatively strong understanding of SCI’s unrestricted spending, but after spending additional time reviewing reports and discussing with SCI staff, we have more questions today than we did a year ago.
• We have better information about how SCI plans to use additional funds it receives and the constraints, besides funding, that SCI faces in utilizing additional funding (more in our review).
Funding gap
SCI told us that it has approximately $3.8 million worth of opportunities that it would be highly likely to undertake if it had the funding available. (Some of this would be spent in 2015 and some held for the following year to ensure programs can continue once started). It believes it could possibly absorb an additional$4.5 million (up to $8.3 million total) for opportunities that are more speculative. Overall, our best guess is that SCI will use up to approximately$6.3 million and, beyond that, would build up reserves.
Partly for reasons of donor coordination, we have set its target at $6.8 million total (more below). We hope that SCI will receive$1 million from individual donors (excluding Good Ventures) this giving season as a result of our recommendation.
Key considerations:
• Program impact and cost-effectiveness. Our best guess is that deworming is roughly as cost-effective as distributing bednets and more cost-effective than cash transfers by a factor of 5-10. Our estimates are subject to substantial uncertainty. (Note: all our cost-effectiveness analyses are available here. Our file for deworming, cash transfers and iodine is here (.xls).)
• Directness and robustness of the case for impact. We have seen some evidence demonstrating that SCI successfully deworms children, though this evidence is relatively thin. Nevertheless, deworming is a relatively straightforward program, and we think it is likely (though far from certain) that SCI is successfully deworming people. We have had difficulties communicating with SCI (see below), which has reduced our ability to understand it; we have also spent significant time interviewing SCI staff and reviewing documents over the past 5 years and have found minor but not major concerns.
• Transparency and communication. We have had consistent difficulties communicating with SCI. Specifically, (a) we had a major miscommunication with SCI about the meaning of its self-evaluations (more) and (b) although we have spent significant time with SCI, we remain unsure of how SCI has spent funds and how much funding it has available (and we believe SCI itself does not have a clear understanding of this). Importantly, if there is a future unanticipated problem with SCI’s programs, we don’t feel confident that we will become aware of it; this contrasts with AMF and GiveDirectly, both of which we feel we have a strong ability to follow up.
• Risks: There are significantly more unknown risks with SCI than our other top charities due to our limited understanding of its activities. We hope for SCI to have $6.8 million available, which is significantly more unrestricted funding than it has had available in the past. Our full review of SCI is here. SummaryThe table below summarizes the key considerations for our four top charities. Consideration AMF DtWI GiveDirectly SCI Program estimated cost-effectiveness (relative to cash transfers) 5-10x 10-20x 1x 5-10x (and possibly more) Directness and robustness of the case for impact Strong Weakest Strongest Moderate Transparency and communication Strong Strong Strongest Weakest Ongoing monitoring and likelihood of detecting future problems Strong Strong Strongest Weakest Organizational track record of rolling out program Moderate Moderate Strong Strong Room for more funding (more below) High Limited Very high Limited when accounting for all donors Note the absence of two criteria we have put weight on in years past: • Program evidence of effectiveness. With the new evidence about deworming, we think differences on this front are much reduced, though we still think net distribution and cash transfers have more robust cases than deworming. • Potential for innovation/upside. All of these organizations are fairly mature at this point, and we expect each to get significant revenue this giving season. Standouts Much of the work we did this year went into investigating potential new additions to our top charities list. The strongest contenders we found are discussed below. Ultimately, none of these made it into our top tier of recommendations, but that could easily change in the future. We believe that more investigative effort could result in a much better understanding of GAIN-USI (discussed below) and potentially a top-tier recommendation. Meanwhile, ICCIDD and DMI (also discussed below) do not have the track record we’d want to see for our top tier of recommendations, but in both cases we expect major developments in the next year. Specifically, ICCIDD will have a substantially larger working budget (due to GiveWell money moved), and DMI may have new data from its randomized controlled trial that could cause a significant upgrade in its status. These are all strong giving opportunities, and we’ve vetted them all relatively thoroughly. Two work on a program (universal salt iodization) that we believe has excellent cost-effectiveness and a strong evidence base, and the other two have recently released data from randomized evaluations of their own programs (something that is very rare among charities). We have thoroughly vetted each of these organizations, including site visits. And we can see arguments for supporting these organizations in lieu of our top charities this year, though we ultimately recommend our top charities above them. Below are some brief comments on each standout organization. Donors interested in learning more should read our full reviews of each organization. Development Media International (DMI) produces radio and television broadcasts in developing countries that encourage people to adopt improved health practices, such as exclusive breastfeeding of infants and seeking treatment for symptoms associated with fatal diseases. Its programs reach many people for relatively little money, so if its program successfully changes listeners’ behavior, it may be extremely cost-effective. It is in the midst of running a randomized controlled trial of its program; the midline results were released earlier this year, at which point we blogged about them. At midline, the study found moderate increases (relative to the control group) in self-reported health behaviors. Our attempt to estimate the likely mortality impact of these behaviors – when accounting for other concerns about the generalizability of the study – implied cost-effectiveness worse than AMF’s. This isn’t sufficient for a recommendation this year, as DMI has much less of a track record than our top charities. However, if endline results hit DMI’s targeted mortality impact, we would expect to adjust our estimate significantly, and DMI could become a top charity. DMI’s current budget is approximately$2.5 million; it has told us it expects to receive approximately $2.5-$4 million from existing funders in the next year and could absorb an additional $6-$7.5 million, which it would either use to supplement a program already broadcasting in a country or move into a new country, depending on how much it received.
Our cost-effectiveness analysis for DMI is here (.xls).
Our full review of DMI is here.
GAIN-USI. GAIN’s Universal Salt Iodization (USI) program supports salt iodization programs. There is strong evidence that salt iodization programs have a significant, positive effect on children’s cognitive development, and we consider the program to accomplish (very roughly speaking) comparable good per dollar to bednets and deworming (see our intervention report).
GAIN-USI does not work directly to iodize salt; rather, it supports governments and private companies to do so, which could lead to leveraged impact of donations or to diminished impact depending on its effectiveness. We tried but were unable to document a demonstrable track record of impact; we believe it may have had significant impacts, but we are unable to be confident in this with what we know now. More investigation next year could change this picture.
GAIN’s USI program was one of the recipients of a large, multi-year grant from the Bill and Melinda Gates Foundation. The grant ends in 2015 and has yet to be renewed; we are unsure of whether it will be.
Donors whose primary interest is supporting a strong intervention, and who are comfortable supporting a large and reputable organization whose role is to promote and support the intervention (but whose track record we cannot assess at this time), should strongly consider supporting GAIN’s USI program.
GAIN is a large organization running many programs, so donors should consider the possibility that funds restricted to GAIN’s USI program might effectively support its other efforts (more on this general concern here). GAIN told us that it has very little unrestricted funding, so it is unlikely to be able to reallocate funds from other programs to continue to support USI work. It is possible that resources that are shared across programs (such as some staff) could be shifted toward other programs if resources for USI increased, but we would guess that this effect would be small.
Our cost-effectiveness analysis for deworming, cash transfers and iodine is here (.xls).
Our full review of GAIN is here.
International Council for the Control of Iodine Deficiency Disorders Global Network (ICCIDD). Like GAIN-USI, ICCIDD supports (via advocacy and technical assistance rather than implementation) salt iodization, and as with GAIN-USI, we tried but were unable to establish a track record of successfully contributing to iodization programs. Unlike GAIN-USI, ICCIDD is small, operating on a budget of approximately half a million dollars per year, and relies heavily on volunteer time. We believe that additional funding in the range of a few hundred thousand dollars could have a significant positive impact on its operations.
Good Ventures has granted a total of $350,000 to ICCIDD this year ($100,000 as a participation grant and $250,000 with the grants announced today), and we would be happy to see ICCIDD receive a few hundred thousand dollars more, after which point we would be more hesitant as it would be more than doubling its budget. We hope that ICCIDD will use the additional funding to improve its capacity and potentially become a top charity in the future. Our cost-effectiveness analysis for deworming, cash transfers and iodine is here (.xls). Our full review of ICCIDD is here. Living Goods recruits, trains, and manages a network of community health promoters who sell health and household goods door-to-door in Uganda and Kenya and provide basic health counseling. They sell products such as treatments for malaria and diarrhea, fortified foods, water filters, bed nets, clean cook stoves and solar lights. It completed a randomized controlled trial of its program and measured a 27% reduction in child mortality. We estimate that Living Goods saves a life for roughly each$10,000 it spends, approximately 3 times as much as our estimate for the cost per life saved of AMF’s program. Living Goods has been operating on a budget of $3 million per year and aims to scale up to operate on a budget of$10 million per year, of which it expects to receive approximately two-thirds from existing funders.
Our cost-effectiveness analysis for Living Goods is here (.xls).
Our full review of Living Goods is here.
Funding targets by charityIn order to give guidance to donors seeking to give as we would, we’ve come up with funding targets for each charity. These targets are based on “dividing up” $7.5 million in money moved, which is our best guess for how much individual donors will give based on our recommendations over the next 4 months. We are using the following principles in setting targets: • We’d like each top charity to receive a substantial amount of funding. When a charity receives substantial funding at our recommendation, it (a) gives that charity good reason to continue working with us, reporting to us, and helping us learn further about its activities; (b) gives that charity the opportunity to continue building its track record and demonstrating its capabilities, information we will use in future years; and (c) continues to reinforce the idea that GiveWell-recommended charities receive substantial funding – the main incentive charities have to participate in our process. • All else equal, we’d like stronger overall charities – defined as those that accomplish more good per dollar, taking all considerations into account – to receive more funding. • Each charity has a conceptual “maximum” past which we think donations would hit strongly diminishing returns. We aren’t allocating any “money moved” to a charity in excess of the max; beyond that point, we think the money is better spent supporting other top charities. We are also taking the announced Good Ventures grants into account. These grants were recommended using similar considerations, though some of our information has changed. Our targets are as follows. Note the distinction between “total max” (the most we’d be comfortable seeing a charity take in, at which point we would make an announcement), “total target” (the total amount we would like to see this charity take in, including Good Ventures grants and other donations), “target from individuals” (the amount we are seeking specifically from GiveWell-influenced individual over the next four months), and “max from individuals” (the most we’d be comfortable seeing a charity take in, taking into account what we know about other donors’ plans). • Against Malaria Foundation:$5 million target from individuals, $5 million max from individuals. As discussed in the section on AMF, our ideal amount for AMF to take in would be$10 million this giving season, and Good Ventures has already committed $5 million. We therefore target$5 million for AMF.
• Deworm the World Initiative: $0.5 million target from individuals,$1 million max from individuals. We think Deworm the World Initiative is an outstanding giving opportunity with limited room for more funding, as discussed above.
• Schistosomiasis Control Initiative: $1 million target from individuals,$1 million max from individuals. We believe SCI will end the giving season with $3 million from Good Ventures,$1 million from a major donor who discussed his plans with us, $1 million in donations that we expect to come from non-GiveWell-related sources (based on projections from past years rather than on knowledge of specific donors). We also believe it has$1 million in cash available for the $6.3-$8.3 million in opportunities we describe above. In total, then, SCI already can expect to have $6 million available, which would be around the maximum we’d recommend in isolation. However, our discussion with the possible$1 million donor has led us to set a higher overall “total target” than we would have otherwise, settling on a total target of $6.8 million. (We plan to elaborate on our thoughts about donor coordination and donor agency in a future post.) Since we are hoping for SCI to have a total of$6.8 million available for its activities, we are recommending $1 million in donations from GiveWell-influenced individuals this giving season. (We are rounding$0.8 million in estimated remaining gap to $1 million in recommended giving since these figures are not precise, and we see value in round numbers for our targets.) • GiveDirectly:$1 million target from individuals, $25 million max from individuals. We believe GiveDirectly could absorb up to$40 million total ($5 million from the Good Ventures grant,$10 million we expect it to receive from non-GiveWell-related sources already, and $25 million on top of that). However, our revised cost-effectiveness estimates (which we will discuss more in a future post) now classify cash transfers as significantly less cost-effective than bednet distribution and deworming, by a factor of around 5-10. In addition, the$5 million grant from Good Ventures and the funds we expect it to receive from elsewhere means that GiveDirectly will raise nearly as much in its next fiscal year as it did last year. Given that we anticipate moving roughly $7.5 million from individual donors in the next four months, we’d like to direct roughly$1 million of those donations to GiveDirectly. Note that GiveDirectly is, by a substantial amount, the organization we feel has performed best and most consistently in carrying out its intervention and providing quality data on the results, and people who are particularly skeptical of cost-effectiveness estimates are likely to find it the most appealing. We also are very excited about the future of GiveDirectly, in terms of its continuing ability to produce useful information via studies and its potential to grow and raise more from sources unconnected to GiveWell, though at this point we feel GiveDirectly is mature enough that further donations are not crucial in helping it toward this goal.
Summary table (all figures in USD millions):
Charity Total max (including all donations) Total target (including all donations) Donations committed or expected from Good Ventures and non-GiveWell sources Target from individuals Max from individuals Against Malaria Foundation 10 10 5 5 5 Schistosomiasis Control Initiative 6.8 6.8 6 1 1 Deworm the World Initiative 1.3 0.75 0.25 0.5 1 GiveDirectly 40 16 15 1 25
For donations beyond the ~$7.5 million total we’re projecting over the next four months, we think the decision of which charity to support would be particularly difficult. Of our top charities, only GiveDirectly would have clear room for more funding after receiving an amount in line with the above, but the others – and to a lesser extent, some of our standout charities – have significantly superior estimated cost-effectiveness according to our latest analyses. We will be continuing to stress-test and reflect on these analyses as we reflect on the question of how to modify our recommendations once the above targets are hit. Our research process in 2014This section describes the new work we did in 2014 to supplement our previous work on defining and identifying top charities. See the process page on our website for our overall process. This year, we completed an investigation of one new intervention (salt iodization). We made substantial progress on several others (maternal and neonatal tetanus immunization campaigns, mass drug administration for lymphatic filariasis, and vitamin A supplementation) but did not complete them. We also stayed up to date on the research for bednets, cash transfers and deworming and made a substantial update to our view on deworming, based on a new study by Kevin Croke. We did not conduct an extensive search for new charities this year. We feel that we have a relatively good understanding of the existing charities that could potentially meet our criteria, based on past searches (see the process page on our website for more information). Instead, we solicited applications from organizations that we viewed as contenders for recommendations. (Living Goods is an exception; it contacted us with the results from its randomized controlled trial.) A February post laid out which organizations we were hoping to investigate and why. In addition to the 4 standout charities, we also considered Nothing but Nets (a bednets organization that declined to participate in our process), Evidence Action’s Dispensers for Safe Water program (which is forthcoming), the Center for Neglected Tropical Disease and UNICEF’s maternal and neonatal tetanus program. In the case of the latter two, we ran out of time to complete the relevant intervention reports this year (due to prioritizing other work, which seemed more likely to lead to new recommendations) and plan to complete them in 2015. Brief notes on giving now vs. later and supporting GiveWell vs. top charitiesGiving now vs. giving later Last year, some staff members chose to save some of their charitable giving budget for future giving opportunities, and we discussed the considerations about giving now vs. later in this post. This year, we think the situation is a bit different, as AMF has returned to our top charities list, the case for both SCI and GiveDirectly has improved (due to new evidence on deworming and GiveDirectly’s strong performance in disbursing cash transfers), and we have extensively investigated possible other options. With these changes, we feel that (unlike last year) this year is an excellent year to give a substantial amount if you are interested primarily on our top charities work. We think our top charity recommendations are unlikely to improve a great deal (i.e. they’re unlikely to improve enough to make saving worthwhile) in the coming years. A couple considerations that might be relevant in weighing the decision to give now versus later: • Will the giving opportunities available in the future be better than the ones we have identified now? There are competing factors. On one hand, our research capacity has expanded significantly over the past 2 years, and this has given us the ability to research more opportunities both in our traditional, top charities work and the Open Philanthropy Project. On the other, the world is getting better and some of the best opportunities available today (e.g., deworming, bednets, salt iodization) may no longer be available 10 years from now. We now feel that we’ve investigated a large proportion of realistic short-to-medium-term contenders for top charity recommendations. If money moved ends up exceeding the ~$7.5 million we’re projecting over the next four months, a stronger case for waiting may emerge, as many of the strongest charities will be near what we think they can productively absorb in the short term (and our standout charities may become recommended next year, as discussed in the section on standouts).
• How much funding will be available in the future to the opportunities we identify? Our impression is that funding available for the opportunities we identify has and will continue to grow significantly. Good Ventures is a part of this, but we hope that other future, major philanthropists will consider supporting our recommendations as Good Ventures has.
Donors interested in supporting opportunities that come from the Open Philanthropy Project have a stronger case for saving to give later. Note that it could be several years before the Open Philanthropy Project has recommendations suitable for individual donors, and these recommendations will likely reflect a very different process, very different criteria, and a much higher tolerance for high-risk opportunities that are difficult to fully explain and defend in writing (though we will work hard to lay out the basic case).
Giving to GiveWell vs. our top charities
We have grown significantly over the past 2 years and continue to raise funds to support our operations. The funds we have received have enabled us to expand our staff. Without this increased capacity, we would not have been able to consider as many organizations as we did this year.
We plan to post an update soon about our budget situation. The most up to date information available is linked from our August board meeting. The short story is that we are still seeking additional donations. For the first time this year, our checkout form will ask donors to consider allocating 10% of their donation to our operating expenses. This option is not yet live on our website; we hope to implement this change in the next few weeks.
The post Our updated top charities appeared first on The GiveWell Blog.
### Deworm the World Initiative (led by Evidence Action) update
Fri, 11/21/2014 - 12:40
Summary
The Deworm the World Initiative (DtWI), led by Evidence Action, received approximately $2.3 million as a result of GiveWell’s recommendation last year. While there were some deviations, it largely allocated these funds as we expected. DtWI now has limited room for funding; it is currently seeking to raise an additional$1.3 million to support its activities in 2015 and 2016. We expect it to allocate approximately 30% of the additional funds it receives for work related to expanding school-based, mass deworming programs and funding related operating expenses (including impact evaluation related expenses), and will allocate other funds to priorities that are less directly connected to expanding and evaluating deworming programs (investigating ways to combine other evidence-based programs with deworming rollouts, supplementing a project supported by another funder).
We currently expect to release updated recommendations by December 1st. We think it is likely that the Deworm the World Initiative will remain on our top charities list.
How did DtWI spend the money it received due to GiveWell, and how does this compare to our expectations?
GiveWell directed approximately $2.3 million to the Deworm the World Initiative since we added it to our top charities list in December 2013. At the time of our recommendation, we expected DtWI to spend additional funds in the following ways; we did not have precise estimates for how much it would spend in each category: • Some portion to provide reserves for DtWI, both to make the organization more resilient and to allow it to respond to high impact opportunities • Some portion to allow DtWI to offer a lower-intensity level of assistance to regions that didn’t require its standard level of assistance • Some portion to support expansion to new states in India It has allocated these funds as follows (years when we expect funds to be spent in parentheses; 2014 means funds have been spent): •$881,000 – ongoing reserves. Our understanding is that DtWI does not have plans to spend these funds in the near future. Instead, these funds make DtWI more robust as an organization: for example, it is less likely to need to significantly shift priorities in order to fundraise and it is more likely to be able to respond quickly to high-impact opportunities it identifies.
• $509,000 – expansion into new countries (2015 and 2016). This includes preliminary work in Ethiopia, Indonesia, Philippines to support possible future work and$104,000 for prevalence surveys and technical assistance to the government and partner organization in Vietnam.
• $430,000 – ongoing work in India (2014 and early 2015). This will fund a follow-up prevalence survey in Bihar to assess the impact of three rounds of deworming on worm prevalence and intensity, and enable expansion to preschool children there, as well as contribute to the third round of the Rajasthan and Delhi programs. •$207,000 – contribution to elimination research primarily funded by the Children’s Investment Fund Foundation (CIFF) and the Bill and Melinda Gates Foundation (BMGF) (2015-2017). CIFF and BMGF provided approximately $1.6 million in funding to the Deworm the World Initiative and the London School of Hygiene and Tropical Medicine to conduct research on the feasibility and cost effectiveness of breaking transmission of soil-transmitted helminths. Breaking transmission would potentially require a different approach (likely covering more than just school-aged children) than DtWI’s standard school-based deworming model. •$151,000 – DtWI overhead (2014). These funds support DtWI as an organization but are not directly programmed (e.g., a portion of Alix Zwane’s, the Executive Director of Evidence Action salary, Evidence Action financial staff, etc.). Note that DtWI estimated $151,000 based on allocating 15% of programmed GiveWell-sourced funding to DtWI overhead. DtWI said it could more explicitly track these funds but would be time consuming to do so. We agreed that more detailed accounting was not necessary. •$129,000 – additional staff (2014). In 2014, DtWI hired (a) a deputy director to support its programming worldwide and (b) someone to focus on its impact evaluation. The latter hire is likely to be doing work on the breaking transmission work discussed below. We allocate some of this line item to expansion and related operating expenses and some to research.
Overall, DtWI’s funding decisions seem reasonable to us and are broadly consistent with what we anticipated.
• 46% ($1,067,000) supported expanding deworming programs and funding related operating expenses (including impact evaluation related expenses). This includes the deputy director who supports the organization as a whole but is necessary to expanded work in India and other new countries and half of the salary for the impact-evaluation-focused new staffer since he works on programmatic and technical support across DtWI. • 38% ($881,000) supported ongoing reserves.
• 10% ($241,000) supported research that we had not anticipated (including the other half of the new staffer since he is spending a significant part of his time on this research). • 6% (151,000) supported DtWI as a whole. How would DtWI spend additional funds? The Deworm the World Initiative seeks an additional$1.3 million to support its activities in 2015 and 2016. DtWI expects to spend $377,000 of the$1.3 million (29%) it seeks on work related to expanding school-based mass deworming programs and funding related operating expenses (including impact evaluation related expenses). More specifically, these activities would be:
• $230,000: staff to support expansion in India, new countries, and related operating and evaluation expenses. This line item is the salary for the deputy director and part of the salary for the impact evaluation focused staff member described above. •$144,000: DtWI overhead (described above).
• $500,000: evaluation of new evidence-based programs that leverage deworming. We have limited detail about what this would entail. One idea that DtWI has investigated is the possibility of distributing bednets along with deworming pills in schools as an alternative distribution mechanism to national net distributions. Another is including hand-washing educational programming alongside deworming days. This line item includes$50,000 to support DtWI’s evaluation of its hygiene and deworming program funded by Dubai Cares and $50,000 to enable DtWI to hire a senior epidemiologist. •$230,000: staff to support evaluation of DtWI’s work in Kenya. This work is primarily funded by CIFF. DtWI believes that additional resources can improve significantly the quality of the analysis done regarding the cost effectiveness of breaking transmission. This line item includes $100,000 to support the impact evaluation focused staff member described above. •$170,000: implementation support for the integrated deworming, sanitation and hygiene education program in Vietnam, in partnership with Thrive Networks.
Why is DtWI seeking additional funds primarily to support research and evaluation rather than scale up? What changed in the past year?
In 2014, two events affected DtWI’s projection of the additional funding it would require to scale up in India:
1. The Children’s Investment Fund Foundation (CIFF), a major foundation that had supported DtWI’s programs in Kenya, agreed to a 6-year, $17.7 million grant to support DtWI’s expansion to additional states in India and technical assistance to the Government of India for a national deworming program. At the end of 2013, DtWI believed it was reasonably likely that it would not receive this grant and had not anticipated how quickly it would come through. With these funds, DtWI does not require significant additional funding to support its India expansion. 2. The new Indian government expressed interest in conducting a single deworming day nationally with increased national attention and resources. Advocating for such a policy and assisting the national government in creating a plan became the major focus of DtWI’s India work in 2014, which both reduced the amount of time it was able to spend generating interest in heavy DtWI involvement in new states, and also required little funding since there were few costs of that project aside from staff time. DtWI believes that the first national deworming day will likely happen in February 2015. Together, these changes led DtWI to the conclusion that funding is no longer the bottleneck to reaching more people in India. More broadly, we believe that if donors close both the$1.3-million 2-year funding gap of DtWI and the ~5-\$8-million funding gap of the Schistosomiasis Control Initiative (SCI), another deworming organization we recommend, funding will not be the primary bottleneck to deworming programs’ scaling in general. Overall, our impression is that there is currently more funding available for scaling up deworming programs than capacity at organizations to utilize funds for scaleup.
Dr. Zwane believes that DtWI’s research agenda is important for two reasons:
1. She believes it is possible that this research will demonstrate that other approaches to deworming are more cost-effective, such as eliminating worms from areas to avoid the need for mass treatments, or combining deworming with other interventions such as bednet distributions or hygiene education.
2. She would like DtWI to consistently provide useful information to funders and policymakers and undertaking this research will enable it to continue doing so.
Notes on other deworming implementers and funders
It is not unlikely that GiveWell-directed donors will close the funding gaps of both DtWI and Schistosomiasis Control Initiative in the coming few months. Because of this, we also asked Alix Zwane (Executive Director of Evidence Action) about other implementers and funders working on deworming.
Implementers
Dr. Zwane told us DtWI and SCI are the two primary organizations that focus primarily on expanding countrywide deworming programs. Other organizations work on deworming but are not as directly focused on scaleup with government partners to her knowledge. There are other NGOs that work on other neglected tropical diseases (e.g., SightSavers) and school health (e.g., Partnership for Child Development), but Dr. Zwane is less familiar with the reach and scope of the service delivery they support.
Organizations that do a smaller amount of deworming implementation include UNICEF, Micronutrient Initiative and Vitamin Angels, which have begun adding deworming pills to their vitamin A supplementation programs, and WaterAid, which adds deworming to some of its water and sanitation programs.
IMA World Health, Helen Keller International, Sight Savers International, The MENTOR Initiative, and possibly others implement deworming programs supported by the funders discussed below. We have yet to speak with these organizations and have little information about their deworming programs or funding needs.
According to Dr. Zwane, the Global Network for Neglected Tropical Diseases works primarily on advocacy and does not focus on deworming, specifically, while Children without Worms coordinates partners globally and does not work on providing technical assistance for program delivery directly currently to her knowledge.
Funders
Major funders of deworming service delivery include the following: Dubai Cares, The END Fund, CIFF, the UK government’s Department for International Development (DFID), Michael and Susan Dell Foundation, and the US government’s USAID.
According to Dr. Zwane, these funders are interested in supporting scale-up, and she believes that DtWI will be in a strong position to raise funds for scale up from them if and when funding becomes a bottleneck. These funders are less likely to fund the types of activities to which DtWI has allocated GiveWell-directed funding.
A longer list of organizations working on deworming is available in this document, from a recent meeting of groups that are part of the STH Coalition.
The post Deworm the World Initiative (led by Evidence Action) update appeared first on The GiveWell Blog.
|
2017-09-20 04:05:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.195009246468544, "perplexity": 3213.002717117565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686169.5/warc/CC-MAIN-20170920033426-20170920053426-00236.warc.gz"}
|
https://brilliant.org/problems/if-only-i-could-see-in-81-dimensions-this-would-be/
|
# If Only I Could See In 81 Dimensions, Then This Would Be Easy
Algebra Level 5
Given 81 variables that satisfy
$0 \leq a_1 \leq a_2 \leq \ldots \leq a_{81} \leq 1,$
what is the maximum value of
$\left[ 9 \sum_{i=1}^{81} a_i ^2 \right] + \left[ \sum_{1 \leq j < k \leq 81 } ( a_k - a_j + 1)^2 \right] ?$
×
Problem Loading...
Note Loading...
Set Loading...
|
2018-01-18 14:01:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7788374423980713, "perplexity": 5528.927613055066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887414.4/warc/CC-MAIN-20180118131245-20180118151245-00053.warc.gz"}
|
https://cs.stackexchange.com/questions/119076/proving-sets-of-regular-expressions-and-context-free-grammars-are-decidable
|
# Proving sets of regular expressions and context free grammars are decidable [duplicate]
Consider below languages:
1. $$L_1=\{|M$$ is a regular expression which generates at least one string containing an odd number of 1's$$\}$$
2. $$L_2=\{|G$$ is context free grammar which generates at least one string of all 1's$$\}$$
Its given that both above languages are decidable, but no proof is given. I tried guessing. $$L_1$$ is decidable, its a set of regular expressions containing
• odd number of $$1$$'s, or
• even number of $$1$$'s and $$1^+$$ or
• $$1^*$$
So we just have to parse regular expression for these characteristics. Is this right way to prove $$L_1$$ is decidable?
However, can we have some algorithm to check whether given input CFG accepts at least one string of all 1's? I am not able to come up with and hence not able prove how $$L_2$$ is decidable.
• Please ask only one question per post. – D.W. Dec 31 '19 at 20:08
• The problem did not talk about intersection of regular and context free languages, but whether they accept certain kind of words independently. – anir Jan 1 at 3:56
• The first listed dup (cs.stackexchange.com/questions/80713/…) seems to address exactly that, and also explains why intersection is relevant. – D.W. Jan 1 at 10:59
|
2020-04-07 08:05:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5608463883399963, "perplexity": 465.1787766172998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371675859.64/warc/CC-MAIN-20200407054138-20200407084638-00369.warc.gz"}
|
http://learning-laboratory.com/latex-square-roots-radicals/
|
# LaTeX Square Roots
LaTeX square roots are done with the command
\sqrt
The syntax is
\sqrt{x}
which produces the square root of x, like so:
$\sqrt{x}$
# n Roots
You can use the same \sqrt command to make roots other than square. The syntax
\sqrt[n]{x}
produces
$\sqrt[n]{x}$
Be sure to put in square brackets instead of curly. Square brackets are used for optional arguments to LaTeX commands.
# Examples of Use
\Huge\sqrt[3]{27}=3
produces
$\Huge\sqrt[3]{27}=3$
\LARGE\sqrt[\frac{1}{2}]{4}
produces
$\LARGE\sqrt[\frac{1}{2}]{4}$
x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$
You can even square root your square roots.
\ldots\sqrt{\sqrt{\sqrt{\ldots}}}=x
$\ldots\sqrt{\sqrt{\sqrt{\ldots}}}=x$
✔︎Bonus points (meaningless, imaginary points): Is this infinite series of square roots solvable for x, even if you can’t see the term until you reach infinity? I think so.
I believe almost any valid LaTeX string can go in the root argument, even another \sqrt, but not another root:
\Huge\sqrt[\sqrt{x}]{x}
$\Huge\sqrt[\sqrt{x}]{x}$
But this
\Huge\sqrt[\sqrt[n]{x}]{x}
produces an error for me in mathtex:
$\Huge\sqrt[\sqrt[n]{x}]{x}$
### Offsite Resources:
Tagged , , , . Bookmark the permalink.
|
2018-02-25 07:58:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 8, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677153825759888, "perplexity": 3525.9698066404508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816178.71/warc/CC-MAIN-20180225070925-20180225090925-00208.warc.gz"}
|
http://matematika.reseneulohy.cz/2929/using-the-definition
|
## Using the definition
Using the definition of the sum of a series, solve the following exercises.
• #### Variant 1
Show that the harmonic series $$\displaystyle \sum_{n=1}^{\infty}\frac1n$$ diverges.
• #### Variant 2
Show that the series $$\displaystyle \sum_{n=1}^{\infty} \frac1{\sqrt n}$$ diverges.
• #### Variant 3
Investigate the convergence or divergence of the series $$\displaystyle \sum_{n=1}^{\infty} \ln\left(1+\frac1n\right)$$.
• #### Variant 4
Let $$\displaystyle \lim_{n\to \infty} a_n = a \in \mathbb R$$. Determine $$\displaystyle \sum_{n=1}^{\infty} (a_{n+1}-a_n)$$.
|
2022-05-26 14:23:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960780143737793, "perplexity": 2847.4324508217715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00059.warc.gz"}
|
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/FiniteFittingIdeals/html/_quot__Scheme.html
|
# quotScheme -- Calculates the defining equations for Quot schemes of points
## Synopsis
• Usage:
quotScheme(Q,n,L)
• Inputs:
• Outputs:
## Description
The Quot scheme of n points of \mathcal{O}^p on \mathbb{P}^r embeds as a closed subscheme of the Grassmannian of rank n quotients of a push forward of \mathcal{O}(d)^p. This function gives the defining equations of this closed subscheme.
i1 : S=ZZ[x_0,x_1]; i2 : quotScheme(S^2,1,{0}) o2 = ideal(a a - a ) 2 3 4 o2 : Ideal of ZZ[a ..a ] 1 4
|
2023-02-03 13:33:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7142711281776428, "perplexity": 2573.6546763942315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00350.warc.gz"}
|
http://www.ncatlab.org/nlab/show/binary+digit
|
# nLab binary digit
A binary digit, or bit, is either $0$ or $1$.
The set of binary digits is the boolean domain $\mathbb{B}$.
As a unit of information, a bit is the amount of information needed to specify which of the $2$ possibilities a given binary digit is. In natural units, a bit is $ln 2$.
Created on September 13, 2010 19:14:03 by Toby Bartels (64.89.62.209)
|
2015-05-07 01:35:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9221693873405457, "perplexity": 486.51985012095304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430460084453.65/warc/CC-MAIN-20150501060124-00089-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://www.r-bloggers.com/r-tools-for-dynamical-systems-r-pplane-to%C2%A0draw%C2%A0phase%C2%A0planes/
|
# R Tools for Dynamical Systems ~ R pplane to draw phase planes
April 5, 2010
By
(This article was first published on mind of a Markov chain » R, and kindly contributed to R-bloggers)
MATLAB has a nice program called pplane that draws phase planes of differential equations models. pplane on MATLAB is an elaborate program with an interactive GUI where you can just type the model to draw the phase planes. The rest you fidget by clicking (to grab the initial conditions) and it draws the dynamics automatically.
As far as I know, R doesn’t have a program of equal stature. R’s GUI itself is non-interactive (maybe because creating a good GUI require money), and you can’t fiddle around with the axes graphically, for example. The closest I could find was code from Prof. Kaplan from Macalester College in his program, pplane.r.
Below is a slight modification of his program that uses the deSolve package for a more robust approximation of the trajectory, and I made it so you can draw the trajectories by clicking, using the locator() function.
The pplane.r program takes in a 2D differential equation model, initial values and parameter value specifications to draw the dynamics on a plane. It draws arrows at evenly spaced out points at a certain resolution to see the general shape of the dynamics. This is done by using a crude method to create the Jacobian matrix. The next step is to give in initial values to draw the trajectory.
The only changes that made were to change the phasetraj() function, which draws the trajectories after you’ve made the arrow plot. Instead of using a self-made Runge Kutta method, I replaced it with a more robust ode() from the deSolve package. I also made it possible to point click multiple points (initial values) to draw the trajectories from. The code is shown below and it’s a little bit redundant because of different model specifications between pplane.r and deSolve, but it works. Also, the nullclines() function that draws the nullclines seems to not be working for whatever reason.
I could make the code more coherent, but I am lazy. Point is, the code can reproduce the pplane package in MATLAB to the best of my knowledge.
When I run draw.traj(), it asks for the number of initial points you’d like to give (denoted by loc.num). If that number is 5, I click the graph 5 times, and it automatically runs the model. I run a predator-prey model from a previous post with parameters: $alpha = 1; beta = .001; gamma = 1; delta = .001$.
I could specify the color of the trajectory, and its time range. As analyzed by linearization, the predator-prey dynamics of this model is a center (you could see the dynamics go in a circle with the initial value up top). One can imagine running all kinds of 2D models. It’s not as interactive as the MATLAB version, but I think it works well enough as a first step.
Code is follows (please source in pplane.r and deSolve package):
```library(deSolve)
LotVmod <- function (Time, State, Pars) {
with(as.list(c(State, Pars)), {
dx = x*(alpha - beta*y)
dy = -y*(gamma - delta*x)
return(list(c(dx, dy)))
})
}
nullclines(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),40)
phasearrows(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),20);
# modification of phasetraj() in pplane.r
draw.traj <- function(func, Pars, tStart=0, tEnd=1, tCut=10, loc.num=1, color = "red") {
traj <- list()
print(paste("Click", loc.num, "initial values"))
x0 <- locator(loc.num, "p")
for (i in 1:loc.num) {
out <- as.data.frame(ode(func=func, y=c(x=x0\$x[i], y=x0\$y[i]), parms=Pars, times = seq(tStart, tEnd, length = tCut)))
lines(out\$x, out\$y, col = color)
traj[[i]] <- out
}
return(traj)
}
alpha = 1; beta = .001; gamma = 1; delta = .001
nullclines(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),40)
phasearrows(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),20, col = "grey")
draw.traj(func=LotVmod, Pars=c(alpha = alpha, beta = beta, gamma = gamma, delta = delta), tEnd=10, tCut=100, loc.num=5)```
Filed under: deSolve, Food Web, R
R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more...
|
2015-12-01 11:13:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3195493221282959, "perplexity": 1422.9005816549918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466260.18/warc/CC-MAIN-20151124205426-00351-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://plainmath.net/74415/given-a-set-of-n-inequalities-each-of-th
|
# Given a set of n inequalities each of the form ax+by+cz≤d for some a,b,c,d in Q, determine if there
Given a set of n inequalities each of the form ax+by+cz≤d for some a,b,c,d in Q, determine if there exists x, y and z in Q that satisfy all the inequalities.
Here is an O(n4) algorithm for solving this: for each triple of inequalities, intersect their corresponding planes to get a point (x,y,z) iff possible. If no such intersecting point exists continue on to the next triple of inequalities. Test each of these intersection points against all the inequalities. If a particular point satisfies all the inequalities the solution has been found. If none of these points satisfy all the inequalities then there is no point satisfying the system of inequalities. There are O(n3) such intersection points and there are n inequalities thus the algorithm is O(n4).
I would like a faster algorithm for solving this (eg: O(n3), O(n2), O(n*logn), O(n)). If you can provide such an algorithm in an answer that would be great. You may notice this problem is a subset of the more general k-dimensional problem where there are points in k-dimensions instead of 3 dimensions as in this problem or 2 dimensions as in my previous problem mentioned above. The time complexity of my algorithm generalized to k dimensions is O(nk+1). Ideally I would like something that is a polynomial time algorithm however any improvements over my naive algorithm would be great. Thanks
You can still ask an expert for help
## Want to know more about Inequalities systems and graphs?
• Questions are typically answered in as fast as 30 minutes
Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
zwichtsu
You can modify your algorithm a bit to make it $O\left({n}^{3}\right)$.
If we have a line, we can check if there's a point on that line satisfying all inequalities in $O\left(n\right)$ time. If we denote one of the directions on this line "up", each plane gives you an upper or lower bound on the part of the line satisfying the inequalities. So we can compute the intersection points between the line and each plane, and check if the lowest upper bound is above the highest lower bound.
Like in your algorithm, candidate lines can be intersection lines between each pair of planes $O\left({n}^{2}\right)$ of them. So you get a total complexity of $O\left({n}^{3}\right)$.
Note, care must be taken for the special case when all planes are parallel and so there's no intersection lines.
|
2022-08-16 17:17:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 51, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564809560775757, "perplexity": 272.7468206847012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00306.warc.gz"}
|
http://community.boredofstudies.org/482/tutoring-classifieds/346744/4u-mathematics-1st-state-%7C-hsc-maths-tutor.html
|
# Thread: 4U Mathematics 1st in the State | HSC Maths Tutor
1. ## 4U Mathematics 1st in the State | HSC Maths Tutor
I graduated from North Sydney Boys' High School in 2015 and study combined Actuarial Studies with Advanced Mathematics at UNSW. I am offering tutoring for:
Year 7-10 Mathematics
Preliminary General/2U/3U Mathematics
HSC General/2U/3U/4U Mathematics.
Over the years I have developed techniques to approach exams in general, as well as specific questions, especially for the HSC. I have also independently gathered a variety of resources, of which I will share with you. I believe with these techniques and resources, I will be able to help you maximise your potential and reach your goals.
Credentials:
-Over 1500 hours experience between Years 5 and 12.
-Mathematics Ext 2: HSC Mark 100 (1st in State)
-Mathematics Ext 1: HSC Mark 100
-Prize in Australian Mathematics Competition
-Medal in ICAS Mathematics Competition
-High Distinction in the UNSW School Mathematics Competition
-UNSW Award for Best Student in Mathematics Extension 1 and 2.
Location: At my home (in Westmead).
Contact:
Phone 0422 782 551
Email rishi.maran000@gmail.com
2. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Damnnn first in the state... Im jelly :P Congrats
3. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
First in state in 4 Unit is just beast. Nothing else to say!
thank you ^
5. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Hi
May I ask but what was the "secret" to getting a state rank?
7. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Pretty explanatory that he would do actuarial studies with the rank in 4 unit
8. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by eyeseeyou
Hi
May I ask but what was the "secret" to getting a state rank?
Not sure about any secrets, but a good strategy that I used was to keep a pocket notebook. Any questions that I was unable to do, I would write them in there. Then I would study those questions and the relevant theory behind them. Hope that helped.
Haha thanks
10. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by Rishi Marang
Not sure about any secrets, but a good strategy that I used was to keep a pocket notebook. Any questions that I was unable to do, I would write them in there. Then I would study those questions and the relevant theory behind them. Hope that helped.
Questions that you were unable to do? Would you mind showing us an example of such a question which conquered the First in State for 4U himself?
11. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by KX
Questions that you were unable to do? Would you mind showing us an example of such a question which conquered the First in State for 4U himself?
Plot twist: his notebook was always empty
12. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by KX
Questions that you were unable to do? Would you mind showing us an example of such a question which conquered the First in State for 4U himself?
Well this is an example in the book from early Year 12.
$a. Show that \dfrac{d}{dx} (\tan^{-1}x + \tan^{-1}\dfrac{1}{x})= 0$
$b. Hence sketch the graph f(x) = \tan^{-1}x + \tan^{-1}\dfrac{1}{x}$
13. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
IIRC, that's a 3u question from the inverse trig exercise in Cambridge, right?
14. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
there is variation in Chapter 2 i believe, although I'm quite sure I encountered this question from a past paper
15. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by DatAtarLyfe
IIRC, that's a 3u question from the inverse trig exercise in Cambridge, right?
2010 Extension 1 HSC Q5 (b).
16. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Arrgggh, Riiiiiishi. Wise words from the 4U god. "Just do your work and you'll be alright, don't muck around" Reppin Sydney Boys High school XD
17. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by Dark-Knight64
Arrgggh, Riiiiiishi. Wise words from the 4U god. "Just do your work and you'll be alright, don't muck around" Reppin Sydney Boys High school XD
Wasn't he NSB not SBHS?
18. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by eyeseeyou
Wasn't he NSB not SBHS?
Yes, as it says in the original post.
19. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
InteGrand is like the wise father of the forum
20. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
It's supposed to a joke, cuz the Sydney morning herald article published Rishi as attending SBHS instead of NSBHS. I can't find the original SMH article anymore, but the same article "Public beats private in HSC firsts" was published in "The Australian".
21. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by Dark-Knight64
It's supposed to a joke, cuz the Sydney morning herald article published Rishi as attending SBHS instead of NSBHS. I can't find the original SMH article anymore, but the same article "Public beats private in HSC firsts" was published in "The Australian".
It is here: http://www.theaustralian.com.au/news...80e3d46c2097aa .
22. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by InteGrand
The person who wrote the article is a moron who got it wrong
23. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by Dark-Knight64
Arrgggh, Riiiiiishi. Wise words from the 4U god. "Just do your work and you'll be alright, don't muck around" Reppin Sydney Boys High school XD
Alright who is this...
Originally Posted by eyeseeyou
The person who wrote the article is a moron who got it wrong
very true
Reshey!
25. ## Re: 4U Mathematics 1st in the State | 99.15 ATAR | HSC Maths and Physics Tutor
Originally Posted by Rishi Marang
Alright who is this...
very true
He/she needs to go back to high school to learn his/her english again before becoming a newsletter editor and needs to go back to uni to restudy newsletter editing and writing
Page 1 of 3 123 Last
There are currently 1 users browsing this thread. (0 members and 1 guests)
#### Posting Permissions
• You may not post new threads
• You may not post replies
• You may not post attachments
• You may not edit your posts
•
|
2018-03-18 17:27:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6362737417221069, "perplexity": 6452.478187495204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00379.warc.gz"}
|
http://www.ck12.org/chemistry/Gas-Density/lesson/Gas-Density/r5/
|
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Gas Density ( Read ) | Chemistry | CK-12 Foundation
You are viewing an older version of this Concept. Go to the latest version.
# Gas Density
%
Best Score
Practice Gas Density
Best Score
%
Gas Density
0 0 0
Why does carbon dioxide sink in air?
When we run a reaction to produce a gas, we expect it to rise into the air. Many students have done experiments where gases such as hydrogen are formed. The gas can be trapped in a test tube held upside-down over the reaction. Carbon dioxide, on the other hand, sinks when it is released. Carbon dioxide has a density greater that air, so it will not rise like these other gases would.
### Gas Density
As you know, density is defined as the mass per unit volume of a substance. Since gases all occupy the same volume on a per mole basis, the density of a particular gas is dependent on its molar mass. A gas with a small molar mass will have a lower density than a gas with a large molar mass. Gas densities are typically reported in g/L. Gas density can be calculated from molar mass and molar volume.
Balloons filled with helium gas float in air because the density of helium is less than the density of air.
#### Sample Problem One: Gas Density
What is the density of nitrogen gas at STP?
Step 1: List the known quantities and plan the problem.
Known
• N 2 = 28.02 g/mol
• 1 mol = 22.4 L
Unknown
• density = ? g/L
Molar mass divided by molar volume yields the gas density at STP.
Step 2: Calculate.
$\frac{28.02 \ \text{g}}{1 \ \text{mol}} \times \frac{1 \ \text{mol}}{22.4 \ \text{L}}=1.25 \ \text{g} / \text{L}$
When set up with a conversion factor, the mol unit cancels, leaving g/L as the unit in the result.
The molar mass of nitrogen is slightly larger than molar volume, so the density is slightly greater than 1 g/L.
Alternatively, the molar mass of a gas can be determined if the density of the gas at STP is known.
#### Sample Problem Two: Molar Mass from Gas Density
What is the molar mass of a gas whose density is 0.761 g/L at STP?
Step 1: List the known quantities and plan the problem.
Known
• N 2 = 28.02 g/mol
• 1 mol = 22.4 L
Unknown
• molar mass = ? g/L
Molar mass is equal to density multiplied by molar volume.
Step 2: Calculate.
$\frac{0.761 \ \text{g}}{1 \ \text{L}} \times \frac{22.4 \ \text{L}}{1 \ \text{mol}}=17.0 \ \text{g} / \text{mol}$
Because the density of the gas is less than 1 g/L, the molar mass is less than 22.4.
#### Summary
• Calculations are described showing conversions between molar mass and density for gases.
#### Practice
1. Which of the gases has the highest density?
2. Which gas has the lowest density?
3. Would you expect nitrogen to have a higher or lower density that oxygen? Why?
#### Review
1. How is density calculated?
2. How is molar mass calculated?
3. What would be the volume of 3.5 moles of a gas?
|
2014-10-26 01:15:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 2, "texerror": 0, "math_score": 0.6316878795623779, "perplexity": 1886.3233077465593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119653628.45/warc/CC-MAIN-20141024030053-00010-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/506fd11ee4b07f4e7ba62f55
|
## A community for students. Sign up today
Here's the question you clicked on:
## anonymous 3 years ago The base of solid "S" is the region enclosed by the parabola "y=36-25x^(2)" and the x-axis. Cross-sections perpendicular to the y-axis are squares. Find the Volume of the described solid "S".
• This Question is Closed
1. anonymous
I've come up with an answer of $\int\limits_{0}^{36} \pi((36-y)\div(25))$ Any one agree or disagree?
2. RadEn
can u make draw your answer, i cant see it my conection is low :(
3. anonymous
|dw:1349508324532:dw| A little wonky but readable.
4. RadEn
it should be : |dw:1360048939561:dw|
5. RadEn
ops,, sorry you are right.. |dw:1360049305963:dw|
6. RadEn
because that function must be squared first
7. RadEn
except to find area, without squared :)
8. RadEn
so, i agree with u
9. RadEn
what is the volume do u get ?
10. anonymous
the final volume i got was ((648)(pi))/(25). not sure if that's correct
11. RadEn
yea, v=(36^2)/50 (pi) = 648/25 (pi) you are correct
12. anonymous
okay, i'll post if i get this one right or not :)
13. anonymous
The correct answer was 2592/25, not sure how that works.
14. RadEn
thought one.... :P maybe 648/25 convert to decimal's form, it can be = 25.92 or convert to mix fraction : |dw:1349590447689:dw|
#### Ask your own question
Sign Up
Find more explanations on OpenStudy
Privacy Policy
|
2016-07-30 05:38:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6960997581481934, "perplexity": 6711.551339929511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257832939.78/warc/CC-MAIN-20160723071032-00019-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/dd_prog3.htm
|
# Document Distance: Program Version 3
Problem Definition | Data Sets | Programs: v1 - v2 - v3 - v4 - v5 - v6 | Programs Using Dictionaries
Ah - the problem is that concatenating two lists takes time proportional to the sum of the lengths of the two lists, since each list is copied into the output list!
Therefore, building up a list via the following program:
L = []
for i in range(n):
L = L + [i]
takes time Θ(n2) (i.e. quadratic time), since the copying work at the ith step is proportional toi, and
$1 + 2 + \cdots + n = \Theta(n^2)$
On the other hand, building up the list in the following way takes only Θ(n) time (i.e. linear time), since appending doesn't require re-copying the first part of the list, only placing the new element at the end.
L = []
for i in range[n]:
L = L.append(i)
So, let's change our get_words_from_line_list from:
def get_words_from_line_list(L):
"""
Parse the given list L of text lines into words.
Return list of all words found.
"""
word_list = []
for line in L:
words_in_line = get_words_from_string(line)
word_list = word_list + words_in_line
return word_list
to:
def get_words_from_line_list(L):
"""
Parse the given list L of text lines into words.
Return list of all words found.
"""
word_list = []
for line in L:
words_in_line = get_words_from_string(line)
# Using "extend" is much more efficient than concatenation here:
word_list.extend(words_in_line)
return word_list
(Note that extend appends each element in its argument list to the end of the word_list; it thus takes time proportional to the number of elements so appended.)
We call our revised program docdist3.py (PY).
If we run the program again, we obtain:
>docdist3.py t2.bobsey.txt t3.lewis.txt
File t2.bobsey.txt : 6667 lines, 49785 words, 3354 distinct words
File t3.lewis.txt : 15996 lines, 182355 words, 8530 distinct words
The distance between the documents is: 0.574160 (radians)
3838997 function calls in 84.861 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(acos)
1241849 4.710 0.000 4.710 0.000 :0(append)
22663 0.101 0.000 0.101 0.000 :0(extend)
1277585 4.760 0.000 4.760 0.000 :0(isalnum)
232140 0.814 0.000 0.814 0.000 :0(join)
345651 1.254 0.000 1.254 0.000 :0(len)
232140 0.784 0.000 0.784 0.000 :0(lower)
2 0.001 0.000 0.001 0.000 :0(open)
2 0.000 0.000 0.000 0.000 :0(range)
2 0.014 0.007 0.014 0.007 :0(readlines)
1 0.010 0.010 0.010 0.010 :0(setprofile)
1 0.000 0.000 0.000 0.000 :0(sqrt)
1 0.006 0.006 84.851 84.851 <string>:1(<module>)
2 44.533 22.267 44.577 22.289 docdist3.py:108(count_frequency)
2 11.296 5.648 11.296 5.648 docdist3.py:125(insertion_sort)
2 0.000 0.000 84.507 42.254 docdist3.py:147(word_frequencies_for_file)
3 0.184 0.061 0.327 0.109 docdist3.py:165(inner_product)
1 0.000 0.000 0.327 0.327 docdist3.py:191(vector_angle)
1 0.012 0.012 84.846 84.846 docdist3.py:201(main)
2 0.000 0.000 0.015 0.007 docdist3.py:51(read_file)
2 0.196 0.098 28.618 14.309 docdist3.py:67(get_words_from_line_list)
22663 13.150 0.001 28.321 0.001 docdist3.py:80(get_words_from_string)
1 0.000 0.000 84.861 84.861 profile:0(main())
0 0.000 0.000 profile:0(profiler)
232140 1.492 0.000 2.276 0.000 string.py:218(lower)
232140 1.543 0.000 2.357 0.000 string.py:306(join)
Much better! We shaved about two minutes (out of about three) on the running time here, by changing this one routine from having quadratic running time to having linear running time.
There is a major lesson here: Python is a powerful programming language, with powerful primitives like concatenation of lists. You need to understand the cost (running times) of these primitives if you are going to write efficient Python programs. See Python Cost Model for more discussion and details.
Are there more quadratic running times hidden in our routines?
The next offender (in terms of overall running time) is count_frequency, which computes the frequency of each word, given the word list. Here is its code:
def count_frequency(word_list):
"""
Return a list giving pairs of form: (word,frequency)
"""
L = []
for new_word in word_list:
for entry in L:
if new_word == entry[0]:
entry[1] = entry[1] + 1
break
else:
L.append([new_word,1])
return L
This routine takes more than 1/2 of the running time now.
Can you improve it? Is this quadratic?
|
2014-10-25 12:39:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4447883665561676, "perplexity": 3311.671606786925}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648155.19/warc/CC-MAIN-20141024030048-00163-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.alignmentforum.org/users/william_s
|
# William Saunders
PhD student at the University of Toronto, studying machine learning and working on AI safety problems.
Zoom In: An Introduction to Circuits
The worry I'd have about this interpretability direction is that we become very good at telling stories about what 95% of the weights in neural networks do, but the remaning 5% hides some important stuff, which could end up including things like mesa-optimizers or deception. Do you have thoughts on that?
Reinforcement Learning in the Iterated Amplification Framework
I'm talking about an imitation version where the human you're imitating is allowed to do anything they want, including instatiting a search over all possible outputs X and taking that one that maximizes the score of "How good is answer X to Y?" to try to find X*. So I'm more pointing out that this behaviour is available in imitation by default. We could try to rule it out by instructing the human to only do limited searches, but that might be hard to do along with maintaining capabilities of the system, and we need to figure out what "safe limited search" actually looks like.
Reinforcement Learning in the Iterated Amplification Framework
If M2 has adversarial examples or other kinds of robustness or security problems, and we keep doing this training for a long time, wouldn't the training process sooner or later sample an X that exploits M2 (gets a high reward relative to other answers without actually being a good answer), which causes the update step to increase the probability of M1 giving that output, and eventually causes M1 to give that output with high probability?
I agree, and think that this problem occurs both in imitation IA and RL IA
For example is the plan to make sure M2 has no such robustness problems (if so how)?
I believe the answer is yes, and I think this is something that would need to be worked out/demonstrated. I think there is one hope that if M2 can increase the amount computing/evaluation power it uses for each new sample X as we take more samples, then you can keep taking more samples without ever accepting an adversarial one (This assumes something like for any adversarial example, all M2 with at least some finite amount of computing power will reject it). There's maybe another hope that you could make M2 robust if you're allowed to reject many plausibly good X in order to avoid false positives. I think both of these hopes are in the IOU status, and maybe Paul has a different way to put this picture that makes more sense.
Outer alignment and imitative amplification
Overall, I think imitative amplification seems safer, but I maybe don't think the distinction is as clear cut as my impression of this post gives.
if you can instruct them not to do things like instantiate arbitrary Turing machines
I think this and "instruct them not to search over arbitrary text strings for the text string that gives the most approval", and similar things, are the kind of details that would need to be filled out to make the thing you are talking about actually be in a distinct class from approval-based amplification and debate (My post on imitation and RL amplification was intended to argue that without further restrictions, imitation amplification is in the same class as approval-based amplification, which I think we'd agree on). I also think that specifying these restrictions in a way that still lets you build a highly capable system could require significant additional alignment work (as in the Overseer's Manual scenario here)
Conversely, I also think there are ways that you can limit approval-based amplification or debate - you can have automated checks, for example, that discard possible answers that are outside of a certain defined safe class (e.g. debate where each move can only be from either a fixed library of strings that humans produced in advance or single direct quotes from a human-produced text). I'd also hope that you could do something like have a skeptical human judge that quickly discards anything they don't understand + an ML imitation of the human judge that discards anything outside of the training distribution (don't have a detailed model of this, so maybe it would fail in some obvious way)
I think I do believe that for problems where there is a imitative amplification decomposition that solves the problem without doing search, that's more likely to be safe by default than approval-based amplification or debate. So I'd want to use imitative amplification as much as possible, falling back to approval only if needed. On imitative amplification, I'm more worried that there are many problems it can't solve without doing approval-maximizing search, which brings the old problems back in again. (e.g. I'm not sure how to use imitative amplification at the meta-level to produce better decomposition strategies than humans use without using approval-based search)
Concrete experiments in inner alignment
Possible source for optimization-as-a-layer: SATNet (differentiable SAT solver)
https://arxiv.org/abs/1905.12149
2-D Robustness
One way to try to measure capability robustness seperate from alignment robustness off of the training distribution of some system would be to:
• use an inverse reinforcement learning algorithm infer the reward function of the off-distribution behaviour
• train a new system to do as well on the reward function as the original system
• measure the number of training steps needed to reach this point for the new system.
This would let you make comparisons between different systems as to which was more capability robust.
Maybe there's a version that could train the new system using behavioural cloning, but it's less clear how you measure when you're as competent as the original agent (maybe using a discriminator?)
The reason for trying this is having a measure of competence that is less dependent on human judgement/closer to the systems's ontology and capabilities.
Problems with AI debate
For Alaska vs. Bali, alternative answer is "You could be convinced that either Alaska or Bali is a good vacation destination". It's an interesting question whether this could actually win in debate. I think it might have a better chance in Factored Evaluation, because we can spin up two seperate trees to view the most compelling argument for Alaska and the most compelling argument for Bali and verify that these are convincing. In debate, you'd need view either Alaska Argument before Bali Argument, or Bali Argument before Alaska Argument, and you might just be convinced by the first argument you see in which case you wouldn't agree that you could be convinced either way.
HCH is not just Mechanical Turk
I'd say that the claim is not sufficient - it might provide some alignment value, but it needs a larger story about how the whole computation is going to be safe. I do think that the HCH framework could make specifying an aligned GOFAI-like computation easier (but it's hard to come up with a rigorous argument for this without pointing to some kind of specification that we can make claims about, which is something I'd want to produce along the way while proceeding with HCH-like approaches)
Deceptive Alignment
I think a cleaner way of stating condition 3 might be "there exists some policy with a non-zero chance of deception at some point in the future, with expected value for the mesa-optimizer higher than the policy of always cooperating".
This could be met if the threat of modification never goes away, but maybe decreases over time. (at training time, system is monitored for defection every episode, at a deployment time it is monitored for defection 1/million episodes, not worthwhile to defect during training but worthwhile to defect at deployment time)
This could fail to be met if the mesa-optimizer believes that 1) there is a 1% chance that it will be detected by the base optimizer 2) if this happens, the base optimizer will be able to determine $O_mesa$ and give it a large negative reward, 100x more than the possible reward from the best defecting policy. (not sure if there's any version of this that would work, since it seems hard to figure out $O_mesa$ and provide negative reward)
An Increasingly Manipulative Newsfeed
To me, It seems like the point of this story is that we could build an AI that ends up doing very dangerous things without ever asking it "Will you do things I don't like if given more capability?" or some other similar question that requires it to execute the treacherous turn. In contrast, if the developers did something like build a testing world with toy humans in it who could be manipulated in a way detectable to the developers, and placed the AI in the toy testing world, then it seems like this AI would be forced into a position where it either acts in a way according to it's true incentives (manipulate the humans and be detected), or execute the treacherous turn (abstain from manipulating the humans so developers will trust it more). So it seems like this wouldn't happen if the developers are trying to test for treacherous turn behaviour during development.
|
2020-04-03 01:12:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5344992280006409, "perplexity": 1008.9515527274415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00268.warc.gz"}
|
https://tomvanantwerp.com/coding-questions/leetcode-053-maximum-subarray/
|
# 53. Maximum Subarry
## The Problem
Link to original problem on Leetcode.
Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.
Examples
Example 1:
Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: [4,-1,2,1] has the largest sum = 6.
Example 2:
Input: nums = [1]
Output: 1
Example 3:
Input: nums = [0]
Output: 0
Example 4:
Input: nums = [-1]
Output: -1
Example 5:
Input: nums = [-100000]
Output: -100000
Constraints
• 1 <= nums.length <= 3 * 104
• -105 <= nums[i] <= 105
Follow up: If you have figured out the $O(n)$ solution, try coding another solution using the divide and conquer approach, which is more subtle.
## My Solution
### Naïve Approach
The worst thing I can think of would be to compute the sum of every conceivable subarray. This would be $O(n{^3})$ time complexity and $O(1)$ space complexity.
// Bad, don't do thisconst maxSubArray = (nums) => { let sum = nums[0]; for (let i = 0; i < nums.length; i++) { for (let j = i; j < nums.length; j++) { // I'm using slice and reduce to get the subarray // sum instead of writing a third for loop, // because why not? const newSum = nums .slice(i, j + 1) .reduce((acc, curr) => { return acc + curr; }, 0); sum = Math.max(sum, newSum); } } return sum;};
This code passed Leetcode's example test cases, but times out when submitted. No surprises there!
It can be improved to $O(n{^2})$ time by noticing that we don't need to compute each piece of each subarry. For example, with an array [2, 5, -3, 4], I would start with 2, 2+5, then 2+5-3, then 2+5-3+4 for the first loop of i. See how I recompute every value every time? Instead, I could do it as 2, 2+5, 7-3, 4+4. Here's what that would look like:
// Better but still bad, don't do this eitherconst maxSubArray = (nums) => { let sum = nums[0]; for (let i = 0; i < nums.length; i++) { let leftSideSum = 0; for (let j = i; j < nums.length; j++) { leftSideSum += nums[j]; sum = Math.max(sum, leftSideSum); } } return sum;};
I did some research and found Kadane's algorithm for solving this problem in $O(n)$ time. It breaks the problem down into the question: would I get a higher sum by continuing the largest subarry ending at index i - 1, or just starting a new subarry at i? To do this, it keeps track of the best sum we've seen so far and the best subarray sum ending at i. No lie, this took some time to wrap my head around. Here's an implementation.
// Much improved, could do thisconst maxSubArray = (nums) => { let current = -Infinity, best = -Infinity; for (let i = 0; i < nums.length; i++) { current = Math.max(nums[i], current + nums[i]); best = Math.max(best, current); } return best;};
|
2022-05-21 09:08:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4202626645565033, "perplexity": 2873.0043473119777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00109.warc.gz"}
|
https://astronomy.stackexchange.com/questions/19165/does-the-redshifting-of-photons-from-the-universes-expansion-violate-conservati?noredirect=1
|
# Does the redshifting of photons from the Universe's expansion violate conservation of momentum?
The energy-momentum relation,
$$E^2 = m^2c^4 +p^2c^2,$$
lets us derive the momentum of a massless particle:
$$p = \frac{E}{c} = \frac{h\nu}{c}$$
However, the expansion of the Universe redshifts light. This should decrease the momentum of photons. Where would the momentum go, in order for conservation of momentum to hold?
• Is this a different question to astronomy.stackexchange.com/questions/18613/… ? Nov 24 '16 at 16:33
• @RobJeffries Yes, because as far as I know, conservation of energy does not hold in GR. I'm asking about momentum. Nov 24 '16 at 16:35
• Light blue-shifts as it falls into super-clusters, then red-shifts a little less as it climbs out of the ever-expanding cluster's gravity well. Basically expansion causes universe to not act like a closed system energy or momentum-wise; but this is only on extremely large scales. Nov 26 '16 at 16:00
## 1 Answer
In relativity you can think of a single conservation law that unites conservation of energy and momentum -- conservation of four-momentum. Energy and momentum are the zeroth and the first to third components of the four-momentum respectively. Such conservation laws arise from invariance of the Lagrangian with respect to a translation in space-time coordinates.
In General Relativity these conservation laws are local concepts that (most people think) can only be applied in local, inertial (flat) frames of reference. In particular, they cannot be applied in changing space-times and so cannot be applied to situations involving the expansion of the universe.
• The second paragraph is a little muddled and makes it sound as though there is a controversy among experts, when there isn't. The issue isn't whether the spacetime is "changing" or has a timelike Killing vector. That's just the condition for test particles to have a conserved energy. The issue is whether the spacetime is curved. (As a side issue, not relevant here, there is a way to make a globally conserved mass-energy in the case of asymptotically flat spacetimes.)
– user15381
Dec 17 '20 at 16:13
|
2021-10-18 23:55:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.646559476852417, "perplexity": 470.86820953067036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585215.14/warc/CC-MAIN-20211018221501-20211019011501-00199.warc.gz"}
|
https://www.physicsforums.com/threads/koide-neutrinos-and-phenomenology.851043/
|
# A Koide, neutrinos, and phenomenology
1. Jan 6, 2016
### mitchell porter
Compared to the other fermions, I have always had much less interest in Koide mass formulas for neutrinos. There is less data, and neutrino mass works differently anyway (Dirac plus Majorana, whereas the other fermions are just Dirac).
But today that has changed. First, today we have a paper that predicts Dirac and Majorana masses, using a Koide-like ansatz:
http://arxiv.org/abs/1601.00754
Some New Symmetric Relations and the Prediction of Left and Right Handed Neutrino Masses using Koide's Relation
Yong-Chang Huang, Syeda Tehreem Iqbal, Zhen Lei, Wen-Yu Wang
(Submitted on 5 Jan 2016)
Masses of the three generations of charged leptons are known to completely satisfy the Koide's mass relation. But the question remains if such a relation exists for neutrinos? In this paper, by considering SeeSaw mechanism as the mechanism generating tiny neutrino masses, we show how neutrinos satisfy the Koide's mass relation, on the basis of which we systematically give exact values of not only left but also right handed neutrino masses.
And second, we have been discussing various minimal BSM theories, in which the masses of the right-handed neutrinos are constrained by astrophysics. We have discussed NMSM and vMSM, and I'd also like to mention this model which is SM + RH neutrinos + axion.
With a completely predictive Koide-like ansatz and a definite physical framework, we can ask if the predicted numbers are compatible with the framework, yes or no. And that's progress.
So we may be entering a time when there can be a tighter interaction between Koide ansatze, and BSM phenomenology.
2. Jan 6, 2016
### DuckAmuck
I've seen the Kiode mass formula before. It just seems like numerology at present, but its features are uncanny. Like the pi/4 angle from (1,1,1).
Those right handed neutrinos are extremely massive: good dark matter candidates.
Last edited: Jan 6, 2016
3. Jan 8, 2016
### mitchell porter
Actually, in the typical seesaw model, superheavy neutrinos like that cannot be the dark matter, because they decay immediately into lighter particles. In the typical seesaw model, where those particles count is in the very early universe, when it's so hot and dense that no particle lives very long. Under those conditions, the short lifetime isn't a handicap with respect to physical relevance, and their presence in the mix can explain the subsequent matter/antimatter imbalance in the universe (this is the "leptogenesis" theory of where the imbalance comes from).
But if you want the present-day dark matter to be made of right-handed neutrinos, you normally have to suppose that they are much much lighter than that. I am aware of precisely one model in which superheavy neutrinos are the dark matter, and it places them in an extra dimension in order to make them long-lived.
Also, after thinking about this paper, I have decided that I find its way of generalizing the Koide formula to the neutrinos, to be unlikely.
All the other Koide formulas involve Dirac masses. In the type I seesaw, the observed neutrino mass is a derived quantity, the smaller eigenvalue in a matrix whose elements are the Dirac mass and the Majorana mass. Those matrix elements are the more fundamental quantities. So both precedent, and logic, suggest that one should look first for Koide relations among the neutrino Dirac masses, and perhaps among the Majorana masses too.
But in this paper, the hypothesis is that it's the seesaw eigenvalues which are connected in that way. I have found that François Goffinet pointed out the problem with this in his 2008 thesis. But it seems like Alejandro Rivero is the only one so far, to make a concrete proposal of the more "logical" kind.
Last edited: Jan 8, 2016
4. Jan 8, 2016
### DuckAmuck
Is there any validity to the koide formula for leptons besides "this kind of works out nicely"? Also you mention other koide formulas. Are there more besides what is mentioned in this paper?
5. Jan 9, 2016
### arivero
Actually I am a bit annoyed that all this papers keep quoting my old arXiv:hep-ph/0505220 and none of the new notes on the topic. I had hoped at least the people to click in the author names and find arXiv:1111.7232. A more recent note with a lot of references is my talk http://es.slideshare.net/alejandrorivero/koide2014talk on an online seminar (babling video of myself here: cosmovia)
Last attempt to produce more formulae, by Koide itself, is Phys. Rev. D 92, 111301 (2015).
Last edited: Jan 9, 2016
6. Jan 19, 2016
### ohwilleke
Empirically, Koide's formula is so right on the money in an area where the data are quite precise, and equally important the data have grown much more precise today than they were when it was devised (with the fit significantly improving), so I don't really have any serious doubts that Koide's formula for charged leptons is real. If there is an error in Koide's formula, my expectation is that it would be on the order of magnitude of the mass of the neutrinos relative to the mass of the charged leptons (one or two parts per million or so).
Moreover, the extension of the formula to quarks is sufficiently close to suggest that the quark mass hierarchy derives from the same first order source as the charged leptons, but with an additional second order complication of some sort.
A variety of hypotheses to explain this have been advanced and none has secured a consensus.
I am personally partial to the (unorthodox) notion that Koide's formula arises from a balancing of the masses of the various particles that a fundamental fermion can transform into via W boson interactions, which would imply that the Yukawa couplings of the Higgs boson in a deeper theory arise dynamically rather than being fixed constants of Nature, but I don't claim any strong authority for that position other than that it seems amenable to producing a decent fit for the data. In this analysis, the reason that Koide's formula is so perfect for charged leptons, and so imperfect for some of the quarks, is that any given charged lepton can only transform into the other two charged leptons with 100% probability between them, while any given quark can change into three other possible quarks. An extension of Koide's formula for quarks fits very well, for example, to the top-bottom-charm triple where the probability of decays to the next quark down the chain are very high (close to 100%) on both transformations, and the quality degrades the more there are meaningful probabilities of a decay chain other than the triple in question, particularly because the correction between the extended Koide prediction and the experimentally measured value typically is on the same order of magnitude as the probability of the triple not being the decay chain times the mass of the omitted possibility.
Our knowledge of neutrino mass differences is sufficiently precise that we know for a fact that these do not make a charged lepton Koide triple, although Brannen has made a proposal that involves a change of sign in the relation for neutrinos. Testing any proposal experimentally will take a while, however, because while we know the mass differences quite precisely, the precision with which we can determine even the heaviest neutrino mass is only the order of 100% MOE (cosmology provides the tightest limitations and also increasingly favors the normal hierarchy statistically relative to the inverted one). It doesn't help that our understanding of neutrino oscillation is as basically a black box process. We can come up with a formula that is a good fit, but don't really have a consensus story about a mechanism of the oscillation process that fits neatly into the rest of the SM.
FWIW, I am deeply skeptical of the proposition that neutrinos are Majorana particles and of the SeeSaw mechanism with vastly heavier right hand neutrinos as an explanation for their mass. It may be a decade or two before neutrinoless double beta decay experiments are sensitive enough to resolve the Majorana particle question, but there are basically no experimental hints whatsoever of that to date, and given the importance of neutrinos having distinct particles and antiparticles to balance the lepton number of the SM which was the basis of their prediction, I don't think it makes any sense for a neutrino to be its own antiparticle. I also see no compelling reason for right handed neutrinos with masses different than the LH neutrinos to exist.
Heuristically, it makes a lot of sense to associate the electron mass with its electromagnetic field strength, and the neutrino mass scale with the weak force field strength. But, this heuristic argument doesn't explain why different generations of either charged leptons or neutrinos which have the same field strengths have different masses.
7. May 20, 2016
### mitchell porter
The Koide relation has been extended to the quarks in a certain way, and I would like to see it extended analogously to the neutrinos, just to see if it is consistent with all the known constraints; but I have trouble understanding exactly how it works for the quarks. So I am hoping we can figure that out, and then deduce the implications for the neutrinos.
This extension may be found in two Phys Rev D papers by Zenczykowski, arXiv:1210.4125 and arXiv:1301.4143. As explained in the first of these papers, what is being generalized is a trigonometric reformulation of Koide's relation due to Carl Brannen (see Z's equation 5), in which an angle of 2/9 radians appears as a parameter. The proposition is, that you can get Koide-like relations for the up-type and down-type quarks, using "angles" of 2/27 and 4/27 radians, respectively. For the record, I have to point out that all this had already appeared in an unpublished preprint by Marni Sheppeard, "On Neutral Particle Gravity with Nonassociative Braids".
However, the second paper by Zenczykowski does break new ground, by arguing that the relations for quarks are further improved, if one considers not masses, but "pseudo-masses" as defined by François Goffinet (PhD thesis, page 72). How are pseudo-masses obtained? One takes the mixing matrix (CKM for quarks, PMNS for leptons), and factors it into a product of two unitary matrices. You then write the masses of a fermion family (e.g. the up quarks) as a 3-vector, multiply that 3-vector by the relevant matrix, and you now have a 3-vector of pseudo-masses.
For the charged leptons, the unitary matrix employed is simply the identity, and so the formula works for the original masses. But for the quarks, Zenczykowski would have us use pseudo-masses. Following Goffinet, he associates this with working in the weak basis. Whether this association makes sense is something I would like to know; another is just how precise are these formulas for the quark masses, compared to the original Koide relation, which is impressively precise. But what I would also like to do, in the spirit of this thread, is to see what this ansatz predicts for neutrino masses, e.g. if one uses angles of 0 or 8/27 radians for Brannen's parameter.
|
2018-01-22 16:54:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284083008766174, "perplexity": 749.8815837879303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891485.97/warc/CC-MAIN-20180122153557-20180122173557-00333.warc.gz"}
|
http://connection.ebscohost.com/c/articles/4820108/serlios-constructions-ovals
|
TITLE
# On Serlio's Constructions of Ovals
AUTHOR(S)
Rosin, Paul L.
PUB. DATE
January 2001
SOURCE
Mathematical Intelligencer;Winter2001, Vol. 23 Issue 1
SOURCE TYPE
DOC. TYPE
Article
ABSTRACT
Focuses on the architectural importance of Sebastiano Serlio's treatise on constructions of ovals. Classes of building construction or technical drawing of ellipse; Difficulties in treating large ellipses; Serlio's four oval constructions; Alternative approaches to oval constructions.
ACCESSION #
4820108
## Related Articles
• Spacio v manýristické architektuÅ™e. Panochová, Ivana // Umeni / Art;2008, Vol. 56 Issue 4, p282
In the recent debate on the architectural concept of space in the 19th and 20th centuries voices were heard to deny the ability of earlier architects to use space as a term. These opinions refer back to the spatialist dispute between advocates and opponents of spatiality in architecture (Bruno...
• Giuseppe Salviati's allegory of architecture for Daniele Barbaro's 1556 edition of Vitruvius. Cellauro, Louis // Storia dell'Arte;2011, Vol. 129 Issue 29, p5
The article discusses the woodcut "Allegory of Architecture," attributed to 16th-century Mannerist artist Giuseppe Porta (Salviati the Younger), which appeared in the 1556 Italian edition of Vitruvius Pollio's treatise "De architectura" by Daniele Barbaro. The author traces Barbaro's career and...
• ‘Symmetry’ for Bilateral Symmetry. Selzer, Michael I. // Notes & Queries;Sep2011, Vol. 58 Issue 3, p417
The article discusses the history of the word "symmetry." The author suggests that the word referred to well-proportioned shapes in the Classical and Renaissance periods, while it first referred to bilateral symmetry in a text by Pope Pius II, followed by the 1499 book "Hypnerotomachia...
• The Dual Language of Geometry in Gothic Architecture: The Symbolic Message of Euclidian Geometry versus the Visual Dialogue of Fractal Geometry. Ramzy, Nelly Shafik // Peregrinations;Autumn2015, Vol. 5 Issue 2, p135
The article discusses the use of Euclidian geometry and fractal geometry to construct gothic windows, tracery, exteriors, vaults and cathedrals during gothic architecture of medieval period.
• THE LUNELLI-SCE HYPEROVAL IN PG(2,16). Brown, Julia M.N.; Cherowitzo, William E. // Journal of Geometry;Nov2000, Vol. 69 Issue 1/2, p15
Provides a synthetic construction of the irregular hyperoval. Use of the construction to determine the full group of automorphisms of the hyperoval; Computer-free proofs of known properties; Intersection of the hyperoval and conics in a given plane.
• Two-transitive parabolic ovals. Biliotti, Mauro; Jha, Vikram; Johnson, Norman L. // Journal of Geometry;2001, Vol. 70 Issue 1/2, p17
We investigate finite affine planes p of even order possessing a parabolic oval O (|O n l[sub 8]| = 1) and a collineation group G which leaves O invariant and acts 2-transitively on its affine points. The main attention is devoted to translation planes. The odd order case has already been...
• SEMIOVALS WITH LARGE COLLINEAR SUBSETS. Dover, Jeremy M. // Journal of Geometry;Nov2000, Vol. 69 Issue 1/2, p58
Considers semiovals in which some line has a large intersection with S. Description of a semioval in a projective plane II; How no semioval can contain a full line in a finite plane II; Consideration of semiovals which contain all but two points of some line.
• What in the World? // National Geographic Kids;Apr2009, Issue 389, p30
A picture game about oval-shaped objects is presented, including jelly bean, fish and cat's eye.
• Configurations of ovals. Penttila, Tim // Journal of Geometry;2003, Vol. 76 Issue 1/2, p233
We survey the known hyperovals in$\mathrm{PG}(2,q) $$*FORMULAINSPRINGER.We then survey the relationship of the study of configurations of ovalsin \mathrm{PG}(2,q)$$*FORMULAINSPRINGER$ called augmented fans to thatof ovoids...
Share
|
2017-08-18 22:59:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35490432381629944, "perplexity": 8470.576959360915}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105187.53/warc/CC-MAIN-20170818213959-20170818233959-00157.warc.gz"}
|
https://library.achievingthedream.org/austinccphysics2/chapter/21-6-dc-circuits-containing-resistors-and-capacitors/
|
# 35 DC Circuits Containing Resistors and Capacitors
### Learning Objectives
By the end of this section, you will be able to:
• Explain the importance of the time constant, τ, and calculate the time constant for a given resistance and capacitance.
• Explain why batteries in a flashlight gradually lose power and the light dims over time.
• Describe what happens to a graph of the voltage across a capacitor over time as it charges.
• Explain how a timing circuit works and list some applications.
• Calculate the necessary speed of a strobe flash needed to “stop” the movement of an object over a particular length.
When you use a flash camera, it takes a few seconds to charge the capacitor that powers the flash. The light flash discharges the capacitor in a tiny fraction of a second. Why does charging take longer than discharging? This question and a number of other phenomena that involve charging and discharging capacitors are discussed in this module.
## RC Circuits
An RC circuit is one containing a resistor R and a capacitor C. The capacitor is an electrical component that stores electric charge.
Figure 1 shows a simple RC circuit that employs a DC (direct current) voltage source. The capacitor is initially uncharged. As soon as the switch is closed, current flows to and from the initially uncharged capacitor. As charge increases on the capacitor plates, there is increasing opposition to the flow of charge by the repulsion of like charges on each plate.
In terms of voltage, this is because voltage across the capacitor is given by VQ/C, where Q is the amount of charge stored on each plate and C is the capacitance. This voltage opposes the battery, growing from zero to the maximum emf when fully charged. The current thus decreases from its initial value of $I_{o}=\frac{\text{emf}}{R}\\$ to zero as the voltage on the capacitor reaches the same value as the emf. When there is no current, there is no IR drop, and so the voltage on the capacitor must then equal the emf of the voltage source. This can also be explained with Kirchhoff’s second rule (the loop rule), discussed in Kirchhoff’s Rules, which says that the algebraic sum of changes in potential around any closed loop must be zero.
The initial current is $I_{o} =\frac{\text{emf}}{R}\\$, because all of the IR drop is in the resistance. Therefore, the smaller the resistance, the faster a given capacitor will be charged. Note that the internal resistance of the voltage source is included in R, as are the resistances of the capacitor and the connecting wires. In the flash camera scenario above, when the batteries powering the camera begin to wear out, their internal resistance rises, reducing the current and lengthening the time it takes to get ready for the next flash.
Voltage on the capacitor is initially zero and rises rapidly at first, since the initial current is a maximum. Figure 1(b) shows a graph of capacitor voltage versus time (t) starting when the switch is closed at = 0. The voltage approaches emf asymptotically, since the closer it gets to emf the less current flows. The equation for voltage versus time when charging a capacitor C through a resistor R, derived using calculus, is
= emf(1 − et/RC) (charging),
where V is the voltage across the capacitor, emf is equal to the emf of the DC voltage source, and the exponential e = 2.718 … is the base of the natural logarithm. Note that the units of RC are seconds. We define
τ RC
where τ (the Greek letter tau) is called the time constant for an RC circuit. As noted before, a small resistance R allows the capacitor to charge faster. This is reasonable, since a larger current flows through a smaller resistance. It is also reasonable that the smaller the capacitor C, the less time needed to charge it. Both factors are contained in τ RC. More quantitatively, consider what happens when τ RC. Then the voltage on the capacitor is
= emf (1 − e−1) = emf (1 − 0.368) = 0.632 ⋅ emf.
This means that in the time τ RC, the voltage rises to 0.632 of its final value. The voltage will rise 0.632 of the remainder in the next time τ. It is a characteristic of the exponential function that the final value is never reached, but 0.632 of the remainder to that value is achieved in every time, τ. In just a few multiples of the time constant τ, then, the final value is very nearly achieved, as the graph in Figure 1(b) illustrates.
## Discharging a Capacitor
Discharging a capacitor through a resistor proceeds in a similar fashion, as Figure 2 illustrates. Initially, the current is ${I}_{0}=\frac{{V}_{0}}{R}\\$, driven by the initial voltage V0 on the capacitor. As the voltage decreases, the current and hence the rate of discharge decreases, implying another exponential formula for V. Using calculus, the voltage V on a capacitor C being discharged through a resistor R is found to be
V = V0e−t/RC(discharging).
The graph in Figure 2(b) is an example of this exponential decay. Again, the time constant is τ RC. A small resistance R allows the capacitor to discharge in a small time, since the current is larger. Similarly, a small capacitance requires less time to discharge, since less charge is stored. In the first time interval τ RC after the switch is closed, the voltage falls to 0.368 of its initial value, since V⋅ e−1 = 0.368V0.
During each successive time τ, the voltage falls to 0.368 of its preceding value. In a few multiples of τ, the voltage becomes very close to zero, as indicated by the graph in Figure 2(b). Now we can explain why the flash camera in our scenario takes so much longer to charge than discharge; the resistance while charging is significantly greater than while discharging. The internal resistance of the battery accounts for most of the resistance while charging. As the battery ages, the increasing internal resistance makes the charging process even slower. (You may have noticed this.)
The flash discharge is through a low-resistance ionized gas in the flash tube and proceeds very rapidly. Flash photographs, such as in Figure 3, can capture a brief instant of a rapid motion because the flash can be less than a microsecond in duration. Such flashes can be made extremely intense. During World War II, nighttime reconnaissance photographs were made from the air with a single flash illuminating more than a square kilometer of enemy territory. The brevity of the flash eliminated blurring due to the surveillance aircraft’s motion. Today, an important use of intense flash lamps is to pump energy into a laser. The short intense flash can rapidly energize a laser and allow it to reemit the energy in another form.
### Example 1. Integrated Concept Problem: Calculating Capacitor Size—Strobe Lights
High-speed flash photography was pioneered by Doc Edgerton in the 1930s, while he was a professor of electrical engineering at MIT. You might have seen examples of his work in the amazing shots of hummingbirds in motion, a drop of milk splattering on a table, or a bullet penetrating an apple (see Figure 3). To stop the motion and capture these pictures, one needs a high-intensity, very short pulsed flash, as mentioned earlier in this module.
Suppose one wished to capture the picture of a bullet (moving at 5.0 × 10m/s) that was passing through an apple. The duration of the flash is related to the RC time constant, τ. What size capacitor would one need in the RC circuit to succeed, if the resistance of the flash tube was 10.0 Ω? Assume the apple is a sphere with a diameter of 8.0 × 10–2m.
#### Strategy
We begin by identifying the physical principles involved. This example deals with the strobe light, as discussed above. Figure 2 shows the circuit for this probe. The characteristic time τ of the strobe is given as τ RC.
#### Solution
We wish to find C, but we don’t know τ. We want the flash to be on only while the bullet traverses the apple. So we need to use the kinematic equations that describe the relationship between distance x, velocity v, and time t:
x = vt or $t=\frac{x}{v}\\$.
The bullet’s velocity is given as 5.0 × 10m/s, and the distance x is 8.0 × 10–2 m The traverse time, then, is
$t=\frac{x}{v}=\frac{8.0\times {10}^{-2}\text{ m}}{5.0\times {10}^{2}\text{ m/s}}=1.6\times {\text{10}}^{-4}\text{ s}\\$.
We set this value for the crossing time t equal to τ. Therefore,
$C=\frac{t}{R}=\frac{1.6\times \text{10}^{-4}\text{ s}}{10.0\text{ }\Omega }=16\text{ }\mu\text{ F}\\$.
(Note: Capacitance C is typically measured in farads, F, defined as Coulombs per volt. From the equation, we see that C can also be stated in units of seconds per ohm.)
#### Discussion
The flash interval of 160 μs (the traverse time of the bullet) is relatively easy to obtain today. Strobe lights have opened up new worlds from science to entertainment. The information from the picture of the apple and bullet was used in the Warren Commission Report on the assassination of President John F. Kennedy in 1963 to confirm that only one bullet was fired.
## RC Circuits for Timing
RC circuits are commonly used for timing purposes. A mundane example of this is found in the ubiquitous intermittent wiper systems of modern cars. The time between wipes is varied by adjusting the resistance in an RC circuit. Another example of an RC circuit is found in novelty jewelry, Halloween costumes, and various toys that have battery-powered flashing lights. (See Figure 4 for a timing circuit.)
A more crucial use of RC circuits for timing purposes is in the artificial pacemaker, used to control heart rate. The heart rate is normally controlled by electrical signals generated by the sino-atrial (SA) node, which is on the wall of the right atrium chamber. This causes the muscles to contract and pump blood. Sometimes the heart rhythm is abnormal and the heartbeat is too high or too low. The artificial pacemaker is inserted near the heart to provide electrical signals to the heart when needed with the appropriate time constant. Pacemakers have sensors that detect body motion and breathing to increase the heart rate during exercise to meet the body’s increased needs for blood and oxygen.
### Example 2. Calculating Time: RC Circuit in a Heart Defibrillator
A heart defibrillator is used to resuscitate an accident victim by discharging a capacitor through the trunk of her body. A simplified version of the circuit is seen in Figure 2. (a) What is the time constant if an 8.00-μF capacitor is used and the path resistance through her body is 1.00 × 10Ω? (b) If the initial voltage is 10.0 kV, how long does it take to decline to 5.00 × 10V?
#### Strategy
Since the resistance and capacitance are given, it is straightforward to multiply them to give the time constant asked for in part (a). To find the time for the voltage to decline to 5.00 × 10V, we repeatedly multiply the initial voltage by 0.368 until a voltage less than or equal to 5.00 × 10V is obtained. Each multiplication corresponds to a time of τ seconds.
#### Solution for (a)
The time constant τ is given by the equation τ RC. Entering the given values for resistance and capacitance (and remembering that units for a farad can be expressed as s/Ω) gives
τ RC (1.00 × 10Ω(8.00 μF8.00 ms.
#### Solution for (b)
In the first 8.00 ms, the voltage (10.0 kV) declines to 0.368 of its initial value. That is:
V 0.368 V3.680 × 10V at t 8.00 ms.
(Notice that we carry an extra digit for each intermediate calculation.) After another 8.00 ms, we multiply by 0.368 again, and the voltage is
$\begin{array}{lll}V′ & =& 0.368\text{ V}\\ & =& \left(0.368\right)\left(3.680\times {10}^{3}\text{ V}\right)\\ & =& 1.354\times {10}^{3}\text{ V}\text{at }t=16.0\text{ ms}\end{array}\\$
Similarly, after another 8.00 ms, the voltage is
$\begin{array}{lll}V'' & =& 0.368\text{ }V' =\left(\text{0.368}\right)\left(\text{1.354}\times{10}^{3}\text{ V}\right)\\ & =& 498\text{ V at }t=24.0\text{ ms}\end{array}\\$.
#### Discussion
So after only 24.0 ms, the voltage is down to 498 V, or 4.98% of its original value.Such brief times are useful in heart defibrillation, because the brief but intense current causes a brief but effective contraction of the heart. The actual circuit in a heart defibrillator is slightly more complex than the one in Figure 2, to compensate for magnetic and AC effects that will be covered in Magnetism.
### Check Your Understanding
When is the potential difference across a capacitor an emf?
#### Solution
Only when the current being drawn from or put into the capacitor is zero. Capacitors, like batteries, have internal resistance, so their output voltage is not an emf unless current is zero. This is difficult to measure in practice so we refer to a capacitor’s voltage rather than its emf. But the source of potential difference in a capacitor is fundamental and it is an emf.
## PhET Explorations: Circuit Construction Kit (DC only)
An electronics kit in your computer! Build circuits with resistors, light bulbs, batteries, and switches. Take measurements with the realistic ammeter and voltmeter. View the circuit as a schematic diagram, or switch to a life-like view.
## Section Summary
• An RC circuit is one that has both a resistor and a capacitor.
• The time constant τ for an RC circuit is τ RC.
• When an initially uncharged (V= 0 at = 0) capacitor in series with a resistor is charged by a DC voltage source, the voltage rises, asymptotically approaching the emf of the voltage source; as a function of time,
= emf(1 − et/RC) (charging),
• Within the span of each time constant τ, the voltage rises by 0.632 of the remaining value, approaching the final voltage asymptotically.
• If a capacitor with an initial voltage V0 is discharged through a resistor starting at= 0, then its voltage decreases exponentially as given by
V = V0e−t/RC(discharging).
• In each time constant τ, the voltage falls by 0.368 of its remaining initial value, approaching zero asymptotically.
### Conceptual questions
1. Regarding the units involved in the relationship τ RC, verify that the units of resistance times capacitance are time, that is, Ω ⋅ F=s.
2. The RC time constant in heart defibrillation is crucial to limiting the time the current flows. If the capacitance in the defibrillation unit is fixed, how would you manipulate resistance in the circuit to adjust the RC constant τ? Would an adjustment of the applied voltage also be needed to ensure that the current delivered has an appropriate value?
3. When making an ECG measurement, it is important to measure voltage variations over small time intervals. The time is limited by the RC constant of the circuit—it is not possible to measure time variations shorter than RC. How would you manipulate and C in the circuit to allow the necessary measurements?
4. Draw two graphs of charge versus time on a capacitor. Draw one for charging an initially uncharged capacitor in series with a resistor, as in the circuit in Figure 1 (above), starting from t = 0. Draw the other for discharging a capacitor through a resistor, as in the circuit in Figure 2 (above), starting at t = 0, with an initial charge Qo. Show at least two intervals of τ.
5. When charging a capacitor, as discussed in conjunction with Figure 2, how long does it take for the voltage on the capacitor to reach emf? Is this a problem?
6. When discharging a capacitor, as discussed in conjunction with Figure 2, how long does it take for the voltage on the capacitor to reach zero? Is this a problem?
7. Referring to Figure 1, draw a graph of potential difference across the resistor versus time, showing at least two intervals of τ. Also draw a graph of current versus time for this situation.
8. A long, inexpensive extension cord is connected from inside the house to a refrigerator outside. The refrigerator doesn’t run as it should. What might be the problem?
9. In Figure 4 (above), does the graph indicate the time constant is shorter for discharging than for charging? Would you expect ionized gas to have low resistance? How would you adjust R to get a longer time between flashes? Would adjusting R affect the discharge time?
10. An electronic apparatus may have large capacitors at high voltage in the power supply section, presenting a shock hazard even when the apparatus is switched off. A “bleeder resistor” is therefore placed across such a capacitor, as shown schematically in Figure 6, to bleed the charge from it after the apparatus is off. Why must the bleeder resistance be much greater than the effective resistance of the rest of the circuit? How does this affect the time constant for discharging the capacitor?
### Problems & Exercises
1. The timing device in an automobile’s intermittent wiper system is based on an RC time constant and utilizes a 0.500-μF capacitor and a variable resistor. Over what range must R be made to vary to achieve time constants from 2.00 to 15.0 s?
2. A heart pacemaker fires 72 times a minute, each time a 25.0-nF capacitor is charged (by a battery in series with a resistor) to 0.632 of its full voltage. What is the value of the resistance?
3. The duration of a photographic flash is related to an RC time constant, which is 0.100 μs for a certain camera. (a) If the resistance of the flash lamp is 0.0400 Ω during discharge, what is the size of the capacitor supplying its energy? (b) What is the time constant for charging the capacitor, if the charging resistance is 800 ?
4. A 2.00- and a 7.50-μF capacitor can be connected in series or parallel, as can a 25.0- and a 100-kΩ resistor. Calculate the four RC time constants possible from connecting the resulting capacitance and resistance in series.
5. After two time constants, what percentage of the final voltage, emf, is on an initially uncharged capacitor C, charged through a resistance R?
6. A 500-Ω resistor, an uncharged 1.50-μF capacitor, and a 6.16-V emf are connected in series. (a) What is the initial current? (b) What is the RC time constant? (c) What is the current after one time constant? (d) What is the voltage on the capacitor after one time constant?
7. A heart defibrillator being used on a patient has an RC time constant of 10.0 ms due to the resistance of the patient and the capacitance of the defibrillator. (a) If the defibrillator has an 8.00-μF capacitance, what is the resistance of the path through the patient? (You may neglect the capacitance of the patient and the resistance of the defibrillator.) (b) If the initial voltage is 12.0 kV, how long does it take to decline to 6.00 × 10V?
8. An ECG monitor must have an RC time constant less than 1.00 × 102 μs to be able to measure variations in voltage over small time intervals. (a) If the resistance of the circuit (due mostly to that of the patient’s chest) is 1.00 kΩ, what is the maximum capacitance of the circuit? (b) Would it be difficult in practice to limit the capacitance to less than the value found in (a)?
9. Figure 7 shows how a bleeder resistor is used to discharge a capacitor after an electronic device is shut off, allowing a person to work on the electronics with less risk of shock. (a) What is the time constant? (b) How long will it take to reduce the voltage on the capacitor to 0.250% (5% of 5%) of its full value once discharge begins? (c) If the capacitor is charged to a voltage V0 through a 100-Ω resistance, calculate the time it takes to rise to 0.865 V0 (This is about two time constants.)
10. Using the exact exponential treatment, find how much time is required to discharge a 250-μF capacitor through a 500-Ω resistor down to 1.00% of its original voltage.
11. Using the exact exponential treatment, find how much time is required to charge an initially uncharged 100-pF capacitor through a 75.0-MΩ resistor to 90.0% of its final voltage.
12. Integrated Concepts If you wish to take a picture of a bullet traveling at 500 m/s, then a very brief flash of light produced by an RC discharge through a flash tube can limit blurring. Assuming 1.00 mm of motion during one RC constant is acceptable, and given that the flash is driven by a 600-μF capacitor, what is the resistance in the flash tube?
13. Integrated Concepts A flashing lamp in a Christmas earring is based on an RC discharge of a capacitor through its resistance. The effective duration of the flash is 0.250 s, during which it produces an average 0.500 W from an average 3.00 V. (a) What energy does it dissipate? (b) How much charge moves through the lamp? (c) Find the capacitance. (d) What is the resistance of the lamp?
14. Integrated Concepts A 160-μF capacitor charged to 450 V is discharged through a 31.2-kΩ resistor. (a) Find the time constant. (b) Calculate the temperature increase of the resistor, given that its mass is 2.50 g and its specific heat is$1.67\frac{\text{kJ}}{\text{kg}\cdotº\text{C}}\\$, noting that most of the thermal energy is retained in the short time of the discharge. (c) Calculate the new resistance, assuming it is pure carbon. (d) Does this change in resistance seem significant?
15. Unreasonable Results (a) Calculate the capacitance needed to get an RC time constant of 1.00 × 103 with a 0.100-Ω resistor. (b) What is unreasonable about this result? (c) Which assumptions are responsible?
16. Construct Your Own Problem Consider a camera’s flash unit. Construct a problem in which you calculate the size of the capacitor that stores energy for the flash lamp. Among the things to be considered are the voltage applied to the capacitor, the energy needed in the flash and the associated charge needed on the capacitor, the resistance of the flash lamp during discharge, and the desired RC time constant.
17. Construct Your Own Problem Consider a rechargeable lithium cell that is to be used to power a camcorder. Construct a problem in which you calculate the internal resistance of the cell during normal operation. Also, calculate the minimum voltage output of a battery charger to be used to recharge your lithium cell. Among the things to be considered are the emf and useful terminal voltage of a lithium cell and the current it should be able to supply to a camcorder.
## Glossary
RC circuit:
a circuit that contains both a resistor and a capacitor
capacitor:
an electrical component used to store energy by separating electric charge on two opposing plates
capacitance:
the maximum amount of electric potential energy that can be stored (or separated) for a given electric potential
### Selected Solutions to Problems & Exercises
1. range 4.00 to 30.0 MΩ
3. (a) 2.50 μF (b) 2.00 s
5. 86.5%
7. (a) 1.25 kΩ (b) 30.0 ms
9. (a) 20.0 s (b) 120 s (c) 16.0 ms
11. 1.73 × 10s
12. 3.33 × 10Ω
14. (a) 4.99 s (b) 3.87ºC (c) 31.1 kΩ (d) No
## License
Physics II by Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
|
2022-09-29 04:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5114509463310242, "perplexity": 760.4481143086489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00492.warc.gz"}
|
http://real-world-systems.com/docs/cvs.1.html
|
CVS: Concurrent Versions System cvs - Concurrent Versions System
cvs [ cvs_options ] cvs_command [ command_options ] [ command_args ] www.nongnu.org/cvs/ or CVSnt.org
This summary of some of the features of cvs, is auto-generated from an appendix of the CVS manual. For more in-depth documentation, please consult the Cederqvist manual (via the info CVS command or otherwise, as described in the SEE ALSO section of this manpage). Cross-references in this man page refer to nodes in the same.
The overall format of all cvs commands is:
cvs [ cvs_options ] cvs_command [ command_options ] [ command_args ] cvs_options Some options that affect all sub-commands of cvs.
cvs_command One of several different sub-commands or aliases those aliases are noted in the reference manual for that command. cvs -H elicits a list of available commands, and
cvs -v displays version
Usage: cvs [cvs-options] command [command-options-and-arguments] where cvs-options are -q, -n, etc.
add a new file/directory to the repository
admin istration front end for rcs
annotate Show last revision where each line was modified
checkout sources for editing
commit Check files into the repository
diff Show differences between revisions
edit Get ready to edit a watched file
editors See who is editing a watched file
export sources from CVS, similar to checkout
history Show repository access history
import sources into CVS, using vendor branches
init Create a CVS repository if it doesn't exist
log Print out history information for files
logout Removes entry in .cvspass for remote repository
ls files available from CVS
rannotate Show last revision where each line of module was modified
rdiff Create 'patch' format diffs between releases
release Indicate that a Module is no longer in use
remove an entry from the repository
rlog Print out history information for a module
rls List files in a module
rtag Add a symbolic tag to a module
server mode
status on checked out files
tag Add a symbolic tag to checked out version of files
unedit
update Bring work tree in sync with repository
version
watch Set watches
watchers See who is watching a file
(specify --help-options for a list of options) where command is add, admin, etc. (specify --help-commands for a list of commands or --help-synonyms for a list of command synonyms) where command-options-and-arguments depend on the specific command (specify -H followed by a command name for command-specific help)
command_options
Options that are specific for the command.
command_args
Arguments to the commands.
There is some confusion between cvs_options and command_options. When given as a cvs_option, some options only affect some of the commands.
the cvs diff command returns a successful status if it found no differences, or a failure status if there were differences or if there was an error. Because this behavior provides no good way to detect errors
~/.cvsrc Default options and the ~/.cvsrc file add default options to cvs_commands within cvs, instead of relying on aliases or other shell scripts.
~/.cvsrc is searched for a line that begins with the same name as the cvs_command being executed. If a match is found, then the remainder of the line is split up (at whitespace characters) into separate options and added to the command arguments before any options from the command line.
If a command has two names (e.g., checkout and co), the official name will be used to match against the file. Example
log -N diff -uN rdiff -u update -Pd checkout -P release -d
the command cvs checkout foo would have -P added to the arguments, as well as cvs co foo.
With the example file above, the output from cvs diff foobar will be in unidiff format.
cvs diff -c foobar will provide context diffs .
Getting "old" format diffs is more complicated, because diff doesn't have an option to specify use of the "old" format, so you would need cvs -f diff foobar.
global options are permitted in .cvsrc (see node Global options' in the CVS manual). For example the following line in .cvsrc
cvs -z6
use compression level 6.
### Global options
The available cvs_options (that are given to the left of cvs_command) are:
--allow-root=rootdir
May be invoked multiple times to specify one legal cvsroot directory with each invocation. Also causes CVS to preparse the configuration file for each specified root, which can be useful when configuring write proxies, See see node Password authentication server' in the CVS manual & see node Write proxies' in the CVS manual.
-a
Authenticate all communication between the client and the server. Only has an effect on the cvs client. As of this writing, this is only implemented when using a GSSAPI connec- tion (see node GSSAPI authenticated' in the CVS manual). Authentication prevents certain sorts of attacks involving hijacking the active tcp connection. Enabling authentication does not enable encryption.
-b bindir deprecated
-T tempdir
Use tempdir as the directory where temporary files are located.
The cvs client and server store temporary files in a temporary directory. The path to this temporary directory is set via, in order of precedence:
1. The argument to the global -T option.
2. The value set for TmpDir in the config file (server only - see node config' in the CVS manual).
3. The contents of the $TMPDIR environment variable (%TMPDIR% on Windows - see node Envi- ronment variables' in the CVS manual). 4. /tmp Specify temporary directories as an absolute pathname. When running a CVS client, -T affects only the local process; specifying -T for the client has no effect on the server and vice versa. -d cvs_root_directory Use cvs_root_directory as the root directory pathname of the repository. Overrides the setting of the$CVSROOT environment variable. see node Repository' in the CVS manual.
-e editor
Use editor to enter revision log information. Overrides the setting of the $CVSEDITOR and$EDITOR environment variables. For more information, see see node Committing your changes' in the CVS manual.
-f
Do not read the ~/.cvsrc file. This option is most often used because of the non-orthogo- nality of the cvs option set. For example, the cvs log option -N (turn off display of tag names) does not have a corresponding option to turn the display on. So if you have -N in the ~/.cvsrc entry for log, you may need to use -f to show the tag names.
-H --help
Display usage information about the specified cvs_command
-R
Turns on read-only repository mode. This allows one to check out from a read-only reposi- tory, such as within an anoncvs server, or from a cd-rom repository.
Same effect as if the CVSREADONLYFS environment variable is set. Using -R can also consid- erably speed up checkouts over NFS.
-n
Do not change any files. Attempt to execute the cvs_command, but only to issue reports; do not remove, update, or merge any existing files, or create any new files.
Note that cvs will not necessarily produce exactly the same output as without -n. In some cases the output will be the same, but in other cases cvs will skip some of the processing that would have been required to produce the exact same output.
-q
somewhat quiet; informational messages, such as reports of recursion through subdirectories, are suppressed. -Q
really quiet; the command will only generate output for serious problems.
-r
Make new working files read-only. Same effect as if the $CVSREAD environment variable is set (see node Environment variables' in the CVS manual). The default is to make working files writable, unless watches are on (see node Watches' in the CVS manual). -s variable=value Set a user variable (see node Variables' in the CVS manual). -t Trace program execution; display messages showing the steps of cvs activity. Particularly useful with -n to explore the potential impact of an unfamiliar command. -v --version Display version and copyright information for cvs. -w Make new working files read-write. Overrides the setting of the$CVSREAD environment variable. Files are created read-write by default, unless $CVSREAD is set or -r is given. -x Encrypt all communication between the client and the server. Only has an effect on the cvs client. As of this writing, this is only implemented when using a GSSAPI connection (see node GSSAPI authenticated' in the CVS manual) or a Kerberos connection (see node Kerberos authenticated' in the CVS manual). Enabling encryption implies that message traffic is also authenticated. Encryption support is not available by default; it must be enabled using a special configure option, --enable-encryption, when you build cvs. -z level Request compression level for network traffic. cvs interprets level identically to the gzip program. Valid levels are 1 (high speed, low compression) to 9 (low speed, high com- pression), or 0 to disable compression (the default). Data sent to the server will be compressed at the requested level and the client will request the server use the same com- pression level for data returned. The server will use the closest level allowed by the server administrator to compress returned data. This option only has an effect when passed to the cvs client. ### Common options This section describes the command_options that are available across several cvs commands. These options are always given to the right of cvs_command. Not all commands support all of these options; each option is only supported for commands where it makes sense. However, when a command has one of these options you can almost always count on the same behavior of the option as in other commands. (Other command options, which are listed with the individ- ual commands, may have different behavior from one cvs command to the other). Note: the history command supports many options that conflict with these options. -D date_spec Use the most recent revision no later than date_spec. date_spec is a single argument, a date description specifying a date in the past. The specification is sticky when you use it to make a private copy of a source file; that is, when you get a working file using -D, cvs records the date you specified, so that fur- ther updates in the same directory will use the same date (for more information on sticky tags/dates, see node Sticky tags' in the CVS manual). -D is available with the annotate, checkout, diff, export, history, ls, rdiff, rls, rtag, tag, and update commands. (The history command uses this option in a slightly different way; see node history options' in the CVS manual). For a complete description of the date formats accepted by cvs, see node Date input formats' in the CVS manual. Remember to quote the argument to the -D flag so that your shell doesn't interpret spaces as argument separators. A command using the -D flag can look like this:$ cvs diff -D "1 hour ago" cvs.texinfo
-f
When you specify a particular date or tag to cvs commands, they normally ignore files that do not contain the tag (or did not exist prior to the date) that you specified. Use -f if you want files retrieved even when there is no match for the tag or date. (The most recent revision of the file will be used).
Note that even with -f, a tag that you specify must exist (that is, in some file, not nec- essary in every file). This is so that cvs will continue to give an error if you mistype a tag name.
-f is available with these commands: annotate, checkout, export, rdiff, rtag, and update.
WARNING: The commit and remove commands also have a -f option, but it has a different behavior for those commands. See see node commit options' in the CVS manual, and see node Removing files' in the CVS manual.
-k kflag
Override the default processing of RCS keywords other than -kb. see node Keyword substi- tution' in the CVS manual, for the meaning of kflag. Used with the checkout and update commands, your kflag specification is sticky; that is, when you use this option with a checkout or update command, cvs associates your selected kflag with any files it operates on, and continues to use that kflag with future commands on the same files until you spec- ify otherwise.
The -k option is available with the add, checkout, diff, export, import, rdiff, and update commands.
WARNING: Prior to CVS version 1.12.2, the -k flag overrode the -kb indication for a binary file. This could sometimes corrupt binary files. see node Merging and keywords' in the CVS manual, for more.
-l
Local; run only in current working directory, rather than recursing through subdirectories.
Available with the following commands: annotate, checkout, commit, diff, edit, editors, export, log, rdiff, remove, rtag, status, tag, unedit, update, watch, and watchers.
-m message
Available with: add, commit and import.
-n
Do not run any tag program. (A program can be specified to run in the modules database (see node modules' in the CVS manual); this option bypasses it).
not the same as the cvs -n program option, which you can specify to the left of a cvs command!
Available with the checkout, commit, export, and rtag commands.
-P
Prune empty directories. See see node Removing directories' in the CVS manual.
-p
Pipe the files retrieved from the repository to standard output, rather than writing them in the current directory. Available with the checkout and update commands. -R
Process directories recursively. This is the default for all cvs commands, with the exception of ls & rls.
Available with the following commands: annotate, checkout, commit, diff, edit, editors, export, ls, rdiff, remove, rls, rtag, status, tag, unedit, update, watch, and watchers.
-r tag
-r tag[:date]
Use the revision specified by the tag argument (and the date argument for the commands which accept it) instead of the default head revision. As well as arbitrary tags defined with the tag or rtag command, two special tags are always available: HEAD refers to the most recent version available in the repository, and BASE refers to the revision you last checked out into the current working directory.
The tag specification is sticky when you use this with checkout or update to make your own copy of a file: cvs remembers the tag and continues to use it on future update commands, until you specify otherwise (for more information on sticky tags/dates, see node Sticky tags' in the CVS manual).
The tag can be either a symbolic or numeric tag, as described in see node Tags' in the CVS manual, or the name of a branch, as described in see node Branching and merging' in the CVS manual. When tag is the name of a branch, some commands accept the optional date argument to specify the revision as of the given date on the branch. When a command expects a specific revision, the name of a branch is interpreted as the most recent revi- sion on that branch.
Specifying the -q global option along with the -r command option is often useful, to sup- press the warning messages when the rcs file does not contain the specified tag.
not the same as the overall cvs -r option, which you can specify to the left of a cvs command!
-r tag is available with the commit and history commands.
-r tag[:date] available with annotate, checkout, diff, export, rdiff, rtag, and update
-W
Specify file names that should be filtered. You can use this option repeatedly. The spec can be a file name pattern of the same type that you can specify in the .cvswrappers file. Available with the following commands: import, and update.
admin Administration o Requires: repository, working directory. o Changes: repository. o Synonym: rcs
This is the cvs interface to assorted administrative facilities. Some of them have ques- tionable usefulness for cvs but exist for historical purposes. Some of the questionable options are likely to disappear in the future. This command does work recursively, so extreme care should be used.
On unix, if there is a group named cvsadmin, only members of that group can run cvs admin commands, except for those specified using the UserAdminOptions configuration option in the CVSROOT/config file. Options specified using UserAdminOptions can be run by any user. See see node config' in the CVS manual for more on UserAdminOptions.
The cvsadmin group should exist on the server, or any system running the non-client/server cvs. To disallow cvs admin for all users, create a group with no users in it. On NT, the cvsadmin feature does not exist and all users can run cvs admin.
admin options Some of these options have questionable usefulness for cvs but exist for historical purposes. Some even make it impossible to use cvs until you undo the effect!
-Aoldfile
Might not work together with cvs. Append the access list of oldfile to the access list of the rcs file.
Might not work together with cvs. Append the login names appearing in the comma-separated list logins to the access list of the rcs file.
-b[rev]
Set the default branch to rev. In cvs, you normally do not manipulate default branches; sticky tags (see node Sticky tags' in the CVS manual) are a better way to decide which branch you want to work on. There is one reason to run cvs admin -b: to revert to the vendor's version when using vendor branches (see node Reverting local changes' in the CVS manual). There can be no space between -b and its argument.
-cstring
Sets the comment leader to string. The comment leader is not used by current versions of cvs or rcs 5.7. Therefore, you can almost surely not worry about it. see node Keyword substitution' in the CVS manual.
Might not work together with cvs. Erase the login names appearing in the comma-separated list logins from the access list of the RCS file. If logins is omitted, erase the entire access list. There can be no space between -e and its argument.
-I
Run interactively, even if the standard input is not a terminal. This option does not work with the client/server cvs and is likely to disappear in a future release of cvs.
-i
depricated
-ksubst
Set the default keyword substitution to subst. see node Keyword substitution' in the CVS manual. Giving an explicit -k option to cvs update, cvs export, or cvs checkout overrides this default.
-l[rev]
Lock the revision with number rev. If a branch is given, lock the latest revision on that branch. If rev is omitted, lock the latest revision on the default branch. There can be no space between -l and its argument.
This can be used in conjunction with the rcslock.pl script in the contrib directory of the cvs source distribution to provide reserved checkouts (where only one user can be editing a given file at a time). See the comments in that file for details (and see the README file in that directory for disclaimers about the unsupported nature of contrib). Accord- ing to comments in that file, locking must set to strict (which is the default).
-L
Set locking to strict. Strict locking means that the owner of an RCS file is not exempt from locking for checkin. For use with cvs, strict locking must be set; see the discussion under -l .
-mrev:msg
Replace the log message of revision rev with msg.
-Nname[:[rev]]
Act like -n, except override any previous assignment of name. For use with magic branches, see see node Magic branch numbers' in the CVS manual.
-nname[:[rev]]
Associate the symbolic name name with the branch or revision rev. It is normally better to use cvs tag or cvs rtag instead. Delete the symbolic name if both : and rev are omit- ted; otherwise, print an error message if name is already associated with another number. If rev is symbolic, it is expanded before association. A rev consisting of a branch num- ber followed by a . stands for the current latest revision in the branch. A : with an empty rev stands for the current latest revision on the default branch, normally the trunk. For example, cvs admin -nname: associates name with the current latest revision of all the RCS files; this contrasts with cvs admin -nname:$which associates name with the revision numbers extracted from keyword strings in the corresponding working files. -orange Deletes (outdates) the revisions given by range. dangerous unless you know what you are doing (for example see the warnings below about how the rev1:rev2 syntax is confusing). If you are short on disc this option might help you. But think twice before using it-- there is no way short of restoring the latest backup to undo this command! If you delete different revisions than you planned, either due to carelessness or (heaven forbid) a cvs bug, there is no opportunity to correct the error before the revisions are deleted. It probably would be a good idea to experiment on a copy of the repository first. Specify range in one of the following ways: rev1::rev2 Collapse all revisions between rev1 and rev2, so that cvs only stores the differences associated with going from rev1 to rev2, not intermediate steps. For example, after -o 1.3::1.5 one can retrieve revision 1.3, revision 1.5, or the differences to get from 1.3 to 1.5, but not the revision 1.4, or the differences between 1.3 and 1.4. Other exam- ples: -o 1.3::1.4 and -o 1.3::1.3 have no effect, because there are no intermediate revisions to remove. ::rev Collapse revisions between the beginning of the branch containing rev and rev itself. The branchpoint and rev are left intact. For example, -o ::1.3.2.6 deletes revision 1.3.2.1, revision 1.3.2.5, and everything in between, but leaves 1.3 and 1.3.2.6 intact. rev:: Collapse revisions between rev and the end of the branch containing rev. Revision rev is left intact but the head revision is deleted. rev Delete the revision rev. For example, -o 1.3 is equivalent to -o 1.2::1.4. rev1:rev2 Delete the revisions from rev1 to rev2, inclusive, on the same branch. One will not be able to retrieve rev1 or rev2 or any of the revisions in between. For example, the com- mand cvs admin -oR_1_01:R_1_02 . is rarely useful. It means to delete revisions up to, and including, the tag R_1_02. But beware! If there are files that have not changed between R_1_02 and R_1_03 the file will have the same numerical revision number assigned to the tags R_1_02 and R_1_03. So not only will it be impossible to retrieve R_1_02; R_1_03 will also have to be restored from the tapes! In most cases you want to specify rev1::rev2 instead. :rev Delete revisions from the beginning of the branch containing rev up to and including rev. rev: Delete revisions from revision rev, including rev itself, to the end of the branch containing rev. None of the revisions to be deleted may have branches or locks. If any of the revisions to be deleted have symbolic names, and one specifies one of the :: syntaxes, then cvs will give an error and not delete any revisions. If you really want to delete both the symbolic names and the revisions, first delete the symbolic names with cvs tag -d, then run cvs admin -o. If one specifies the non-:: syntaxes, then cvs will delete the revisions but leave the symbolic names pointing to nonexistent revisions. This behavior is preserved for compatibility with previous versions of cvs, but because it isn't very useful, in the future it may change to be like the :: case. Due to the way cvs handles branches rev cannot be specified symbolically if it is a branch. see node Magic branch numbers' in the CVS manual, for an explanation. Make sure that no-one has checked out a copy of the revision you outdate. Strange things will happen if he starts to edit it and tries to check it back in. For this rea- son, this option is not a good way to take back a bogus commit; commit a new revision undoing the bogus change instead (see node Merging two revisions' in the CVS manual). -q Run quietly; do not print diagnostics. -sstate[:rev] Set the state attribute of the revision rev to state. If rev is a branch number, assume the latest revision on that branch. If rev is omitted, assume the latest revision on the default branch. Any identifier is acceptable for state. A useful set of states is Exp (for experimental), Stab (for stable), and Rel (for released). By default, the state of a new revision is set to Exp when it is created. The state is visi- ble in the output from cvs log (see node log' in the CVS manual), and in the$Log$and$State$keywords (see node Keyword substitution' in the CVS manual). Note that cvs uses the dead state for its own purposes (see node Attic' in the CVS manual); to take a file to or from the dead state use commands like cvs remove and cvs add (see node Adding and removing' in the CVS manual), not cvs admin -s. -t[file] Write descriptive text from the contents of the named file into the RCS file, deleting the existing text. The file pathname may not begin with -. The descrip- tive text can be seen in the output from cvs log (see node log' in the CVS manual). There can be no space between -t and its argument. If file is omitted, obtain the text from standard input, terminated by end-of-file or by a line containing . by itself. Prompt for the text if interaction is possible; see -I. -t-string Similar to -tfile. Write descriptive text from the string into the rcs file, deleting the existing text. There can be no space between -t and its argument. -U Set locking to non-strict. Non-strict locking means that the owner of a file need not lock a revision for checkin. For use with cvs, strict locking must be set; see the dis- cussion under the -l option above. -u[rev] See l for a discussion of using this option . Unlock the revision with number rev. If a branch is given, unlock the latest revision on that branch. If rev is omitted, remove the latest lock held by the caller. Normally, only the locker of a revision may unlock it; somebody else unlocking a revision breaks the lock. This causes the original locker to be sent a commit notification (see node Getting Notified' in the CVS manual). There can be no space between -u and its argument. -Vn depricated -xsuffixes deprectated annotate What revision modified each line of a file? 1. Synopsis: annotate [options] files... 2. Requires: repository. 3. Changes: nothing. For each file in files, print the head revision of the trunk, together with information on the last modification for each line. annotate options These standard options are supported by annotate (see node Common options' in the CVS man- ual, for a complete description of them): -l Local directory only, no recursion. -R Process directories recursively. -f Use head revision if tag/date not found. -F Annotate binary files. -r tag[:date] Annotate file as of specified revision/tag or, when date is specified and tag is a branch tag, the version from the branch tag as it existed on date. See see node Common options' in the CVS manual. -D date Annotate file as of specified date. annotate example For example: $ cvs annotate ssfile
Annotations for ssfile
***************
1.1 (mary 27-Mar-96): ssfile line 1
1.2 (joe 28-Mar-96): ssfile line 2
The file ssfile currently contains two lines. The ssfile line 1 line was checked in by mary on March 27. Then, on March 28, joe added a line ssfile line 2, without modifying the ssfile line 1 line. This report doesn't tell you anything about lines which have been deleted or replaced; you need to use cvs diff for that (see node diff' in the CVS manual).
The options to cvs annotate are listed in see node Invoking CVS' in the CVS manual, and can be used to select the files and revisions to annotate. The options are described in more detail there and in see node Common options' in the CVS manual.
checkout Check out sources for editing
1. Synopsis: checkout [options] modules...
2. Requires: repository.
3. Changes: working directory.
4. Synonyms: co, get
Create or update a working directory containing copies of the source files specified by modules. You must execute checkout before using most of the other cvs commands, since most of them operate on your working directory.
The modules are either symbolic names for some collection of source directories and files, or paths to directories or files in the repository. The symbolic names are defined in the modules file. see node modules' in the CVS manual.
Depending on the modules you specify, checkout may recursively create directories and pop- ulate them with the appropriate source files. You can then edit these source files at any time (regardless of whether other software developers are editing their own copies of the sources); update them to include new changes applied by others to the source repository; or commit your work as a permanent change to the source repository.
Note that checkout is used to create directories. The top-level directory created is always added to the directory where checkout is invoked, and usually has the same name as the specified module. In the case of a module alias, the created sub-directory may have a different name, but you can be sure that it will be a sub-directory, and that checkout will show the relative path leading to each file as it is extracted into your private work area (unless you specify the -Q global option).
The files created by checkout are created read-write, unless the -r option to cvs (see node Global options' in the CVS manual) is specified, the CVSREAD environment variable is specified (see node Environment variables' in the CVS manual), or a watch is in effect for that file (see node Watches' in the CVS manual).
Note that running checkout on a directory that was already built by a prior checkout is also permitted. This is similar to specifying the -d option to the update command in the sense that new directories that have been created in the repository will appear in your work area. However, checkout takes a module name whereas update takes a directory name. Also to use checkout this way it must be run from the top level directory (where you orig- inally ran checkout from), so before you run checkout to update an existing directory, don't forget to change your directory to the top level directory.
For the output produced by the checkout command see see node update output' in the CVS manual.
#### checkout options
These standard options are supported by checkout (see node Common options' in the CVS man- ual, for a complete description of them):
-D date
Use the most recent revision no later than date. This option is sticky, and implies -P. See see node Sticky tags' in the CVS manual, for more information on sticky tags/dates.
-f
Only useful with the -D or -r flags. If no matching revision is found, retrieve the most recent revision (instead of ignoring the file).
-k kflag
Process keywords according to kflag. See see node Keyword substitution' in the CVS manual. This option is sticky; future updates of this file in this working directory will use the same kflag. The status command can be viewed to see the sticky options. See see node Invoking CVS' in the CVS manual, for more information on the status command.
-l
Local; run only in current working directory.
-n
Do not run any checkout program (as specified with the -o option in the modules file; see node modules' in the CVS manual).
-P
Prune empty directories. See see node Moving directories' in the CVS manual.
-p
Pipe files to the standard output.
-R
Checkout directories recursively. This option is on by default.
-r tag[:date]
Checkout the revision specified by tag or, when date is specified and tag is a branch tag, the version from the branch tag as it existed on date. This option is sticky, and implies -P. See see node Sticky tags' in the CVS manual, for more information on sticky tags/dates. Also, see see node Common options' in the CVS manual.
In addition to those, you can use these special command options with checkout:
-A
Reset any sticky tags, dates, or -k options. See see node Sticky tags' in the CVS man- ual, for more information on sticky tags/dates.
-c
Copy the module file, sorted, to the standard output, instead of creating or modifying any files or directories in your working directory.
-d dir
Create a directory called dir for the working files, instead of using the module name. In general, using this flag is equivalent to using mkdir dir; cd dir followed by the checkout command without the -d flag.
convenient when checking out a single item to have the output appear in a directory that doesn't contain empty intermediate directories. In this case only, cvs tries to shorten'' pathnames to avoid those empty directories.
For example, given a module foo that contains the file bar.c, the command cvs co -d dir foo will create directory dir and place bar.c inside. Similarly, given a module bar which has subdirectory baz wherein there is a file quux.c, the command cvs co -d dir bar/baz will create directory dir and place quux.c inside.
Using the -N flag will defeat this behavior. Given the same module definitions above, cvs co -N -d dir foo will create directories dir/foo and place bar.c inside, while cvs co -N -d dir bar/baz will create directories dir/bar/baz and place quux.c inside.
-j tag
With two -j options, merge changes from the revision specified with the first -j option to the revision specified with the second j option, into the working directory.
With one -j option, merge changes from the ancestor revision to the revision specified with the -j option, into the working directory. The ancestor revision is the common ancestor of the revision which the working directory is based on, and the revision speci- fied in the -j option.
In addition, each -j option can contain an optional date specification which, when used with branches, can limit the chosen revision to one within a specific date. An optional date is specified by adding a colon (:) to the tag: -jSymbolic_Tag:Date_Specifier.
see node Branching and merging' in the CVS manual.
-N
Only useful together with -d dir. With this option, cvs will not shorten'' module paths in your working directory when you check out a single module. See the -d flag for exam- ples and a discussion.
-s
Like -c, but include the status of all modules, and sort it by the status string. see node modules' in the CVS manual, for info about the -s option that is used inside the modules file to set the module status.
checkout examples
#### Get a copy of the module tc:
$cvs checkout tc Get a copy of the module tc as it looked one day ago:$ cvs checkout -D yesterday tc
commit Check files into the repository
1. Synopsis: commit [-lnRf] [-m 'log_message' | -F file] [-r revision] [files...]
2. Requires: working directory, repository.
3. Changes: repository.
4. Synonym: ci
Use commit when you want to incorporate changes from your working source files into the source repository.
If you don't specify particular files to commit, all of the files in your working current directory are examined. commit is careful to change in the repository only those files that you have really changed. By default (or if you explicitly specify the -R option), files in subdirectories are also examined and committed if they have changed; you can use the -l option to limit commit to the current directory only.
commit verifies that the selected files are up to date with the current revisions in the source repository; it will notify you, and exit without committing, if any of the speci- fied files must be made current first with update (see node update' in the CVS manual). commit does not call the update command for you, but rather leaves that for you to do when the time is right.
When all is well, an editor is invoked to allow you to enter a log message that will be written to one or more logging programs (see node modules' in the CVS manual, and see node loginfo' in the CVS manual) and placed in the rcs file inside the repository. This log message can be retrieved with the log command; see see node log' in the CVS manual. You can specify the log message on the command line with the -m message option, and thus avoid the editor invocation, or use the -F file option to specify that the argument file contains the log message.
At commit, a unique commitid is placed in the rcs file inside the repository. All files committed at once get the same commitid. The commitid can be retrieved with the log and status command; see see node log' in the CVS manual, see node File status' in the CVS manual.
commit options These standard options are supported by commit (see node Common options' in the CVS manual, for a complete description of them):
-l
Local; run only in current working directory.
-R
Commit directories recursively. default.
-r revision
Commit to revision. revision must be either a branch, or a revision on the main trunk that is higher than any existing revision number (see node Assigning revisions' in the CVS manual). You cannot commit to a specific revision on a branch.
commit also supports these options:
-c
Refuse to commit files unless the user has registered a valid edit on the file via cvs edit. This is most useful when commit -c and edit -c have been placed in all .cvsrc files. A commit can be forced anyways by either regestering an edit retroactively via cvs edit (no changes to the file will be lost) or using the -f option to commit. Support for commit -c requires both client and a server versions 1.12.10 or greater.
-F file
-f
Note that this is not the standard behavior of the -f option as defined in see node Com- mon options' in the CVS manual.
Force cvs to commit a new revision even if you haven't made any changes to the file. As of cvs version 1.12.10, it also causes the -c option to be ignored. If the current revi- sion of file is 1.7, then the following two commands are equivalent:
$cvs commit -f file$ cvs commit -r 1.8 file
The -f option disables recursion (i.e., it implies -l). To force cvs to commit a new revision for all files in all subdirectories, you must use -f -R.
-m message
Use message as the log message, instead of invoking an editor.
#### commit examples
Committing to a branch You can commit to a branch revision (one that has an even number of dots) with the -r option. To create a branch revision, use the -b option of the rtag or tag commands (see node Branching and merging' in the CVS manual). Then, either checkout or update can be used to base your sources on the newly created branch. From that point on, all commit changes made within these working sources will be automatically added to a branch revision, thereby not disturbing main-line development in any way. For example, if you had to create a patch to the 1.2 version of the product, even though the 2.0 version is already under development, you might do:
$cvs rtag -b -r FCS1_2 FCS1_2_Patch product_module$ cvs checkout -r FCS1_2_Patch product_module
$cd product_module [[ hack away ]]$ cvs commit
This works automatically since the -r option is sticky.
Creating the branch after editing Say you have been working on some extremely experimental software, based on whatever revi- sion you happened to checkout last week. If others in your group would like to work on this software with you, but without disturbing main-line development, you could commit your change to a new branch. Others can then checkout your experimental stuff and utilize the full benefit of cvs conflict resolution. The scenario might look like:
[[ hacked sources are present ]] $cvs tag -b EXPR1$ cvs update -r EXPR1 $cvs commit The update command will make the -r EXPR1 option sticky on all files. Note that your changes to the files will never be removed by the update command. The commit will automati- cally commit to the correct branch, because the -r is sticky. You could also do like this: [[ hacked sources are present ]]$ cvs tag -b EXPR1 $cvs commit -r EXPR1 but then, only those files that were changed by you will have the -r EXPR1 sticky flag. If you hack away, and commit without specifying the -r EXPR1 flag, some files may accidentally end up on the main trunk. To work with you on the experimental change, others would simply do$ cvs checkout -r EXPR1 whatever_module
#### diff
Show differences between revisions o Synopsis: diff [-lR] [-k kflag] [format_options] [(-r rev1[:date1] | -D date1) [-r rev2[:date2] | -D date2]] [files...]
o Requires: working directory, repository.
o Changes: nothing.
The diff command is used to compare different revisions of files. The default action is to compare your working files with the revisions they were based on, and report any differences that are found.
If any file names are given, only those files are compared. If any directories are given, all files under them will be compared.
The exit status for diff is different than for other cvs commands; for details see node Exit status' in the CVS manual.
diff options These standard options are supported by diff (see node Common options' in the CVS manual, for a complete description of them):
-D date
Use the most recent revision no later than date. See -r for how this affects the comparison.
-k kflag
Process keywords according to kflag. See see node Keyword substitution' in the CVS man- ual.
-l
Local; run only in current working directory.
-R
Examine directories recursively. This option is on by default.
-r tag[:date]
Compare with revision specified by tag or, when date is specified and tag is a branch tag, the version from the branch tag as it existed on date. Zero, one or two -r options can be present. With no -r option, the working file will be compared with the revision it was based on. With one -r, that revision will be compared to your current working file. With two -r options those two revisions will be compared (and your working file will not affect the outcome in any way).
One or both -r options can be replaced by a -D date option, described above.
The following options specify the format of the output. They have the same meaning as in GNU diff. Most options have two equivalent names, one of which is a single letter pre- ceded by -, and the other of which is a long name preceded by --.
-lines
obsolete.
-a
Treat all files as text and compare them line-by-line, even if they do not seem to be text.
-b
Ignore trailing white space and consider all other sequences of one or more white space characters to be equivalent.
-B
Ignore changes that just insert or delete blank lines.
--binary
Read and write data in binary mode.
--brief
Report only whether the files differ, not the details of the differences.
-c
Use the context output format.
-C lines
--context[=lines]
Use the context output format, showing lines (an integer) lines of context, or three if lines is not given. For proper operation, patch typically needs at least two lines of context.
--changed-group-format=format
Use format to output a line group containing differing lines from both files in if-then- else format. see node Line group formats' in the CVS manual.
-d
Change the algorithm to perhaps find a smaller set of changes. This makes diff slower (sometimes much slower).
-e
--ed
Make output that is a valid ed script.
--expand-tabs
Expand tabs to spaces in the output, to preserve the alignment of tabs in the input files.
-f
Make output that looks vaguely like an ed script but has changes in the order they appear in the file.
-F regexp
In context and unified format, for each hunk of differences, show some of the last preced- ing line that matches regexp.
--forward-ed
Make output that looks vaguely like an ed script but has changes in the order they appear in the file.
-H
Use heuristics to speed handling of large files that have numerous scattered small changes.
--horizon-lines=lines
Do not discard the last lines lines of the common prefix and the first lines lines of the common suffix.
-i
Ignore changes in case; consider upper- and lower-case letters equivalent.
-I regexp
Ignore changes that just insert or delete lines that match regexp.
--ifdef=name
Make merged if-then-else output using name.
--ignore-all-space
Ignore white space when comparing lines.
--ignore-blank-lines
Ignore changes that just insert or delete blank lines.
--ignore-case
Ignore changes in case; consider upper- and lower-case to be the same.
--ignore-matching-lines=regexp
Ignore changes that just insert or delete lines that match regexp.
--ignore-space-change
Ignore trailing white space and consider all other sequences of one or more white space characters to be equivalent.
--initial-tab
Output a tab rather than a space before the text of a line in normal or context format. This causes the alignment of tabs in the line to look normal.
-L label
Use label instead of the file name in the context format and unified format headers.
--label=label
Use label instead of the file name in the context format and unified format headers.
--left-column
Print only the left column of two common lines in side by side format.
--line-format=format
Use format to output all input lines in if-then-else format. see node Line formats' in the CVS manual.
--minimal
Change the algorithm to perhaps find a smaller set of changes. This makes diff slower (sometimes much slower).
-n
Output RCS-format diffs; like -f except that each command specifies the number of lines affected.
-N
--new-file
In directory comparison, if a file is found in only one directory, treat it as present but empty in the other directory.
--new-group-format=format
Use format to output a group of lines taken from just the second file in if-then-else for- mat. see node Line group formats' in the CVS manual.
--new-line-format=format
Use format to output a line taken from just the second file in if-then-else format. see node Line formats' in the CVS manual.
--old-group-format=format
Use format to output a group of lines taken from just the first file in if-then-else for- mat. see node Line group formats' in the CVS manual.
--old-line-format=format
Use format to output a line taken from just the first file in if-then-else format. see node Line formats' in the CVS manual.
-p
Show which C function each change is in.
--rcs
Output RCS-format diffs; like -f except that each command specifies the number of lines affected.
--report-identical-files
-s
Report when two files are the same.
--show-c-function
Show which C function each change is in.
--show-function-line=regexp
In context and unified format, for each hunk of differences, show some of the last preced- ing line that matches regexp.
--side-by-side
Use the side by side output format.
--speed-large-files
Use heuristics to speed handling of large files that have numerous scattered small changes.
--suppress-common-lines
Do not print common lines in side by side format.
-t
Expand tabs to spaces in the output, to preserve the alignment of tabs in the input files.
-T
Output a tab rather than a space before the text of a line in normal or context format. This causes the alignment of tabs in the line to look normal.
--text
Treat all files as text and compare them line-by-line, even if they do not appear to be text.
-u
Use the unified output format.
--unchanged-group-format=format
Use format to output a group of common lines taken from both files in if-then-else format. see node Line group formats' in the CVS manual.
--unchanged-line-format=format
Use format to output a line common to both files in if-then-else format. see node Line formats' in the CVS manual.
-U lines
--unified[=lines]
Use the unified output format, showing lines (an integer) lines of context, or three if lines is not given. For proper operation, patch typically needs at least two lines of context.
-w
Ignore white space when comparing lines.
-W columns
--width=columns
Use an output width of columns in side by side format.
-y
Use the side by side output format.
Line group formats Line group formats let you specify formats suitable for many applications that allow if- then-else input, including programming languages and text formatting languages. A line group format specifies the output format for a contiguous group of similar lines.
For example, the following command compares the TeX file myfile with the original version from the repository, and outputs a merged file in which old regions are surrounded by \begin{em}-\end{em} lines, and new regions are surrounded by \begin{bf}-\end{bf} lines.
cvs diff \ --old-group-format='\begin{em} %<\end{em} ' \ --new-group-format='\begin{bf} %>\end{bf} ' \ myfile
The following command is equivalent to the above example, but it is a little more verbose, because it spells out the default line group formats.
cvs diff \ --old-group-format='\begin{em} %<\end{em} ' \ --new-group-format='\begin{bf} %>\end{bf} ' \ --unchanged-group-format='%=' \ --changed-group-format='\begin{em} %<\end{em} \begin{bf} %>\end{bf} ' \ myfile
Here is a more advanced example, which outputs a diff listing with headers containing line numbers in a plain English'' style.
cvs diff \ --unchanged-group-format='' \ --old-group-format='-------- %dn line%(n=1?:s) deleted at %df: %<' \ --new-group-format='-------- %dN line%(N=1?:s) added after %de: %>' \ --changed-group-format='-------- %dn line%(n=1?:s) changed at %df: %<-------- to: %>' \ myfile
To specify a line group format, use one of the options listed below. You can specify up to four line group formats, one for each kind of line group. You should quote format, because it typically contains shell metacharacters.
--old-group-format=format
These line groups are hunks containing only lines from the first file. The default old group format is the same as the changed group format if it is specified; otherwise it is a format that outputs the line group as-is.
--new-group-format=format
These line groups are hunks containing only lines from the second file. The default new group format is same as the changed group format if it is specified; otherwise it is a format that outputs the line group as-is.
--changed-group-format=format
These line groups are hunks containing lines from both files. The default changed group format is the concatenation of the old and new group formats.
--unchanged-group-format=format
These line groups contain lines common to both files. The default unchanged group format is a format that outputs the line group as-is.
In a line group format, ordinary characters represent themselves; conversion specifica- tions start with % and have one of the following forms.
%<
stands for the lines from the first file, including the trailing newline. Each line is formatted according to the old line format (see node Line formats' in the CVS manual).
%>
stands for the lines from the second file, including the trailing newline. Each line is formatted according to the new line format.
%=
stands for the lines common to both files, including the trailing newline. Each line is formatted according to the unchanged line format.
%%
stands for %.
%c'C'
where C is a single character, stands for C. C may not be a backslash or an apostrophe. For example, %c':' stands for a colon, even inside the then-part of an if-then-else for- mat, which a colon would normally terminate.
%c'\O'
where O is a string of 1, 2, or 3 octal digits, stands for the character with octal code O. For example, %c'\0' stands for a null character.
Fn
where F is a printf conversion specification and n is one of the following letters, stands for n's value formatted with F.
e
The line number of the line just before the group in the old file.
f
The line number of the first line in the group in the old file; equals e + 1.
l
The line number of the last line in the group in the old file.
m
The line number of the line just after the group in the old file; equals l + 1.
n
The number of lines in the group in the old file; equals l - f + 1.
E, F, L, M, N
Likewise, for lines in the new file.
The printf conversion specification can be %d, %o, %x, or %X, specifying decimal, octal, lower case hexadecimal, or upper case hexadecimal output respectively. After the % the following options can appear in sequence: a - specifying left-justification; an integer specifying the minimum field width; and a period followed by an optional integer speci- fying the minimum number of digits. For example, %5dN prints the number of new lines in the group in a field of width 5 characters, using the printf format "%5d".
(A=B?T:E)
If A equals B then T else E. A and B are each either a decimal constant or a single let- ter interpreted as above. This format spec is equivalent to T if A's value equals B's; otherwise it is equivalent to E.
For example, %(N=0?no:%dN) line%(N=1?:s) is equivalent to no lines if N (the number of lines in the group in the new file) is 0, to 1 line if N is 1, and to %dN lines otherwise.
##### Line formats
Line formats control how each line taken from an input file is output as part of a line group in if-then-else format.
For example, the following command outputs text with a one-column change indicator to the left of the text. The first column of output is - for deleted lines, | for added lines, and a space for unchanged lines. The formats contain newline characters where newlines are desired on output.
cvs diff \ --old-line-format='-%l ' \ --new-line-format='|%l ' \ --unchanged-line-format=' %l ' \ myfile
To specify a line format, use one of the following options. You should quote format, since it often contains shell metacharacters.
--old-line-format=format
formats lines just from the first file.
--new-line-format=format
formats lines just from the second file.
--unchanged-line-format=format
formats lines common to both files.
--line-format=format
formats all lines; in effect, it sets all three above options simultaneously.
In a line format, ordinary characters represent themselves; conversion specifications start with % and have one of the following forms.
%l
stands for the contents of the line, not counting its trailing newline (if any). This format ignores whether the line is incomplete.
%L
stands for the contents of the line, including its trailing newline (if any). If a line is incomplete, this format preserves its incompleteness.
%%
stands for %.
%c'C'
where C is a single character, stands for C. C may not be a backslash or an apostrophe. For example, %c':' stands for a colon.
%c'\O'
where O is a string of 1, 2, or 3 octal digits, stands for the character with octal code O. For example, %c'\0' stands for a null character.
Fn
where F is a printf conversion specification, stands for the line number formatted with F. For example, %.5dn prints the line number using the printf format "%.5d". see node Line group formats' in the CVS manual, for more about printf conversion specifications.
The default line format is %l followed by a newline character.
If the input contains tab characters and it is important that they line up on output, you should ensure that %l or %L in a line format is just after a tab stop (e.g. by preceding %l or %L with a tab character), or you should use the -t or --expand-tabs option.
Taken together, the line and line group formats let you specify many different formats. For example, the following command uses a format similar to diff's normal format. You can tailor this command to get fine control over diff's output.
cvs diff \ --old-line-format='< %l ' \ --new-line-format='> %l ' \ --old-group-format='%df%(f=l?:,%dl)d%dE %<' \ --new-group-format='%dea%dF%(F=L?:,%dL) %>' \ --changed-group-format='%df%(f=l?:,%dl)c%dF%(F=L?:,%dL) %<-- %>' \ --unchanged-group-format='' \ myfile
diff examples The following line produces a Unidiff (-u flag) between revision 1.14 and 1.19 of backend.c. Due to the -kk flag no keywords are substituted, so differences that only depend on keyword substitution are ignored.
$cvs diff -kk -u -r 1.14 -r 1.19 backend.c Suppose the experimental branch EXPR1 was based on a set of files tagged RELEASE_1_0. To see what has happened on that branch, the following can be used:$ cvs diff -r RELEASE_1_0 -r EXPR1
A command like this can be used to produce a context diff between two releases:
$cvs diff -c -r RELEASE_1_0 -r RELEASE_1_1 > diffs If you are maintaining ChangeLogs, a command like the following just before you commit your changes may help you write the ChangeLog entry. All local modifications that have not yet been committed will be printed.$ cvs diff -u | less
##### export
Export sources from CVS, similar to checkout o Synopsis: export [-flNnR] (-r rev[:date] | -D date) [-k subst] [-d dir] module...
o Requires: repository.
o Changes: current directory.
This command is a variant of checkout; use it when you want a copy of the source for mod- ule without the cvs administrative directories. For example, you might use export to pre- pare source for shipment off-site. This command requires that you specify a date or tag (with -D or -r), so that you can count on reproducing the source you ship to others (and thus it always prunes empty directories).
One often would like to use -kv with cvs export. This causes any keywords to be expanded such that an import done at some other site will not lose the keyword revision informa- tion. But be aware that doesn't handle an export containing binary files correctly. Also be aware that after having used -kv, one can no longer use the ident command (which is part of the rcs suite--see ident(1)) which looks for keyword strings. If you want to be able to use ident you must not use -kv.
export options These standard options are supported by export (see node Common options' in the CVS manual, for a complete description of them):
-D date
Use the most recent revision no later than date.
-f
If no matching revision is found, retrieve the most recent revision (instead of ignoring the file).
-l
Local; run only in current working directory.
-n
Do not run any checkout program.
-R
Export directories recursively. This is on by default.
-r tag[:date]
Export the revision specified by tag or, when date is specified and tag is a branch tag, the version from the branch tag as it existed on date. See see node Common options' in the CVS manual.
In addition, these options (that are common to checkout and export) are also supported:
-d dir
Create a directory called dir for the working files, instead of using the module name. see node checkout options' in the CVS manual, for complete details on how cvs handles this flag.
-k subst
Set keyword expansion mode (see node Substitution modes' in the CVS manual).
-N
Only useful together with -d dir. see node checkout options' in the CVS manual, for com- plete details on how cvs handles this flag.
##### history
Show status of files and users o history [-report] [-flags] [-options args] [files...]
o Requires: the file $CVSROOT/CVSROOT/history cvs can keep a history log that tracks each use of most cvs commands. You can use history to display this information in various formats. To enable logging, the LogHistory config option must be set to some value other than the empty string and the history file specified by the HistoryLogPath option must be writable by all users who may run the cvs executable (see node config' in the CVS manual). To enable the history command, logging must be enabled as above and the HistorySearchPath config option (see node config' in the CVS manual) must be set to specify some number of the history logs created thereby and these files must be readable by each user who might run the history command. Creating a repository via the cvs init command will enable logging of all possible events to a single history log file ($CVSROOT/CVSROOT/history) with read and write permissions for all users (see node Creating a repository' in the CVS manual).
Note: history uses -f, -l, -n, and -p in ways that conflict with the normal use inside cvs (see node Common options' in the CVS manual).
###### history options
Several options (shown above as -report) control what kind of report is generated:
-c
Report on each time commit was used (i.e., each time the repository was modified).
-e
Everything (all record types). Equivalent to specifying -x with all record types. Of course, -e will also include record types which are added in a future version of cvs; if you are writing a script which can only handle certain record types, you'll want to specify -x.
-m module
Report on a particular module. (You can meaningfully use -m more than once on the command line.)
-o
Report on checked-out modules. This is the default report type.
-T
Report on all tags.
-x type
Extract a particular set of record types type from the cvs history. The types are indi- cated by single letters, which you may specify in combination.
Certain commands have a single record type:
F
release
O
checkout
E
export
T
rtag
One of five record types may result from an update:
C
A merge was necessary but collisions were detected (requiring manual merging).
G
A merge was necessary and it succeeded.
U
A working file was copied from the repository.
P
A working file was patched to match the repository.
W
The working copy of a file was deleted during update (because it was gone from the repository).
One of three record types results from commit: A
A file was added for the first time.
M
A file was modified.
R
A file was removed.
The options shown as -flags constrain or expand the report without requiring option arguments:
-a
Show data for all users (the default is to show data only for the user executing history).
-l
Show last modification only.
-w
Show only the records for modifications done from the same working directory where history is executing.
The options shown as -options args constrain the report based on an argument:
-b str
Show data back to a record containing the string str in either the module name, the file name, or the repository path.
-D date
Show data since date. This is slightly different from the normal use of -D date, which selects the newest revision older than date.
-f file
Show data for a particular file (you can specify several -f options on the same command line). This is equivalent to specifying the file on the command line.
-n module
Show data for a particular module (you can specify several -n options on the same command line).
-p repository
Show data for a particular source repository (you can specify several -p options on the same command line).
-r rev
Show records referring to revisions since the revision or tag named rev appears in indi- vidual rcs files. Each rcs file is searched for the revision or tag.
-t tag
Show records since tag tag was last added to the history file. This differs from the -r flag above in that it reads only the history file, not the rcs files, and is much faster.
-u name
Show records for user name.
-z timezone
Show times in the selected records using the specified time zone instead of UTC.
### import<.h3> Import sources into CVS, using vendor branches o Synopsis: import [-options] repository vendortag releasetag... o Requires: Repository, source distribution directory. o Changes: repository. Use import to incorporate an entire source distribution from an outside source (e.g., a source vendor) into your source repository directory. You can use this command both for initial creation of a repository, and for wholesale updates to the module from the outside source. see node Tracking sources' in the CVS manual, for a discussion on this subject. The repository argument gives a directory name (or a path to a directory) under the cvs root directory for repositories; if the directory did not exist, import creates it. When you use import for updates to source that has been modified in your source repository (since a prior import), it will notify you of any files that conflict in the two branches of development; use checkout -j to reconcile the differences, as import instructs you to do. If cvs decides a file should be ignored (see node cvsignore' in the CVS manual), it does not import it and prints I followed by the filename (see node import output' in the CVS manual, for a complete description of the output). If the file $CVSROOT/CVSROOT/cvswrappers exists, any file whose names match the specifica- tions in that file will be treated as packages and the appropriate filtering will be per- formed on the file/directory before being imported. see node Wrappers' in the CVS man- ual. The outside source is saved in a first-level branch, by default 1.1.1. Updates are leaves of this branch; for example, files from the first imported collection of source will be revision 1.1.1.1, then files from the first imported update will be revision 1.1.1.2, and so on. At least three arguments are required. repository is needed to identify the collection of source. vendortag is a tag for the entire branch (e.g., for 1.1.1). You must also spec- ify at least one releasetag to uniquely identify the files at the leaves created each time you execute import. The releasetag should be new, not previously existing in the reposi- tory file, and uniquely identify the imported release, Note that import does not change the directory in which you invoke it. In particular, it does not set up that directory as a cvs working directory; if you want to work with the sources import them first and then check them out into a different directory (see node Getting the source' in the CVS manual). ##### import options This standard option is supported by import (see node Common options' in the CVS manual, for a complete description): -m message Use message as log information, instead of invoking an editor. There are the following additional special options. -b branch See see node Multiple vendor branches' in the CVS manual. -k subst Indicate the keyword expansion mode desired. This setting will apply to all files created during the import, but not to any files that previously existed in the repository. See see node Substitution modes' in the CVS manual, for a list of valid -k settings. -I name Specify file names that should be ignored during import. You can use this option repeat- edly. To avoid ignoring any files at all (even those ignored by default), specify -I !'. name can be a file name pattern of the same type that you can specify in the .cvsignore file. see node cvsignore' in the CVS manual. -W spec Specify file names that should be filtered during import. You can use this option repeat- edly. spec can be a file name pattern of the same type that you can specify in the .cvswrappers file. see node Wrappers' in the CVS manual. -X Modify the algorithm used by cvs when importing new files so that new files do not immedi- ately appear on the main trunk. Specifically, this flag causes cvs to mark new files as if they were deleted on the main trunk, by taking the following steps for each file in addition to those normally taken on import: creating a new revision on the main trunk indicating that the new file is dead, resetting the new file's default branch, and placing the file in the Attic (see node Attic' in the CVS manual) directory. Use of this option can be forced on a repository-wide basis by setting the ImportNew- FilesToVendorBranchOnly option in CVSROOT/config (see node config' in the CVS manual). ##### import output import keeps you informed of its progress by printing a line for each file, preceded by one character indicating the status of the file: U file The file already exists in the repository and has not been locally modified; a new revi- sion has been created (if necessary). N file The file is a new file which has been added to the repository. C file The file already exists in the repository but has been locally modified; you will have to merge the changes. I file The file is being ignored (see node cvsignore' in the CVS manual). L file The file is a symbolic link; cvs import ignores symbolic links. People periodically sug- gest that this behavior should be changed, but if there is a consensus on what it should be changed to, it is not apparent. (Various options in the modules file can be used to recreate symbolic links on checkout, update, etc.; see node modules' in the CVS manual.) ##### import examples See see node Tracking sources' in the CVS manual, and see node From files' in the CVS man- ual. #### log Print out log information for files o Synopsis: log [options] [files...] o Requires: repository, working directory. Display log information for files. log used to call the rcs utility rlog. Although this is no longer true in the current sources, this history determines the format of the output and the options, which are not quite in the style of the other cvs commands. The output includes the location of the rcs file, the head revision (the latest revision on the trunk), all symbolic names (tags) and some other things. For each revision, the revision number, the date, the author, the number of lines added/deleted, the commitid and the log message are printed. All dates are displayed in local time at the client. This is typically specified in the$TZ environment variable, which can be set to govern how log displays dates.
Note: log uses -R in a way that conflicts with the normal use inside cvs (see node Common options' in the CVS manual).
##### log options
By default, log prints all information that is available. All other options restrict the output. Note that the revision selection options (-d, -r, -s, and -w) have no effect, other than possibly causing a search for files in Attic directories, when used in conjunction with the options that restrict the output to only log header fields (-b, -h, -R, and -t) unless the -S option is also specified.
-b
Print information about the revisions on the default branch, normally the highest branch on the trunk.
-d dates
Print information about revisions with a checkin date/time in the range given by the semi- colon-separated list of dates. The date formats accepted are those accepted by the -D option to many other cvs commands (see node Common options' in the CVS manual). Dates can be combined into ranges as follows:
d1 d2>d1
Select the revisions that were deposited between d1 and d2.
d>
Select all revisions dated d or earlier.
d<
>d
Select all revisions dated d or later.
d
Select the single, latest revision dated d or earlier.
The > or < characters may be followed by = to indicate an inclusive range rather than an exclusive one.
Note that the separator is a semicolon (;).
-h
Print only the name of the rcs file, name of the file in the working directory, head, default branch, access list, locks, symbolic names, and suffix.
-l
Local; run only in current working directory. (Default is to run recursively).
-N
Do not print the list of tags for this file. This option can be very useful when your site uses a lot of tags, so rather than "more"'ing over 3 pages of tag information, the log information is presented without tags at all.
-R
Print only the name of the rcs file.
-rrevisions
Print information about revisions given in the comma-separated list revisions of revisions and ranges. The following table explains the available range formats:
rev1:rev2
Revisions rev1 to rev2 (which must be on the same branch).
rev1::rev2
The same, but excluding rev1.
:rev
::rev
Revisions from the beginning of the branch up to and including rev.
rev:
Revisions starting with rev to the end of the branch containing rev.
rev::
Revisions starting just after rev to the end of the branch containing rev.
branch
An argument that is a branch means all revisions on that branch.
branch1:branch2
branch1::branch2
A range of branches means all revisions on the branches in that range.
branch.
The latest revision in branch.
A bare -r with no revisions means the latest revision on the default branch, normally the trunk. There can be no space between the -r option and its argument.
-S
Suppress the header if no revisions are selected.
-s states
Print information about revisions whose state attributes match one of the states given in the comma-separated list states. Individual states may be any text string, though cvs commonly only uses two states, Exp and dead. See see node admin options' in the CVS man- ual for more information.
-t
Print the same as -h, plus the descriptive text.
Print information about revisions checked in by users with login names appearing in the comma-separated list logins. If logins is omitted, the user's login is assumed. There can be no space between the -w option and its argument.
log prints the intersection of the revisions selected with the options -d, -s, and -w, intersected with the union of the revisions selected by -b and -r.
Since log shows dates in local time, you might want to see them in Coordinated Universal Time (UTC) or some other timezone. To do this you can set your $TZ environment variable before invoking cvs:$ TZ=UTC cvs log foo.c $TZ=EST cvs log bar.c (If you are using a csh-style shell, like tcsh, you would need to prefix the examples above with env.) ls & rls o ls [-e | -l] [-RP] [-r tag[:date]] [-D date] [path...] o Requires: repository for rls, repository & working directory for ls. o Changes: nothing. o Synonym: dir & list are synonyms for ls and rdir & rlist are synonyms for rls. The ls and rls commands are used to list files and directories in the repository. By default ls lists the files and directories that belong in your working directory, what would be there after an update. By default rls lists the files and directories on the tip of the trunk in the topmost directory of the repository. Both commands accept an optional list of file and directory names, relative to the working directory for ls and the topmost directory of the repository for rls. Neither is recur- sive by default. ### ls & rls These standard options are supported by ls & rls: -d Show dead revisions (with tag when specified). -e Display in CVS/Entries format. This format is meant to remain easily parsable by automation. -l Display all details. -P Don't list contents of empty directories when recursing. -R List recursively. -r tag[:date] Show files specified by tag or, when date is specified and tag is a branch tag, the ver- sion from the branch tag as it existed on date. See see node Common options' in the CVS manual. -D date Show files from date. ##### rls examples$ cvs rls cvs rls: Listing module: .' CVSROOT first-dir
$cvs rls CVSROOT cvs rls: Listing module: CVSROOT' checkoutlist commitinfo config cvswrappers loginfo modules notify rcsinfo taginfo verifymsg #### rdiff 'patch' format diffs between releases o rdiff [-flags] [-V vn] (-r tag1[:date1] | -D date1) [-r tag2[:date2] | -D date2] mod- ules... o Requires: repository. o Synonym: patch Builds a Larry Wall format patch(1) file between two releases, that can be fed directly into the patch program to bring an old release up-to-date with the new release. (This is one of the few cvs commands that operates directly from the repository, and doesn't require a prior checkout.) The diff output is sent to the standard output device. You can specify (using the standard -r and -D options) any combination of one or two revi- sions or dates. If only one revision or date is specified, the patch file reflects dif- ferences between that revision or date and the current head revisions in the rcs file. Note that if the software release affected is contained in more than one directory, then it may be necessary to specify the -p option to the patch command when patching the old sources, so that patch is able to find the files that are located in other directories. ##### rdiff options These standard options are supported by rdiff (see node Common options' in the CVS manual, for a complete description of them): -D date Use the most recent revision no later than date. -f If no matching revision is found, retrieve the most recent revision (instead of ignoring the file). -k kflag Process keywords according to kflag. See see node Keyword substitution' in the CVS man- ual. -l Local; don't descend subdirectories. -R Examine directories recursively. This option is on by default. -r tag Use the revision specified by tag, or when date is specified and tag is a branch tag, the version from the branch tag as it existed on date. See see node Common options' in the CVS manual. In addition to the above, these options are available: -c Use the context diff format. This is the default format. -s Create a summary change report instead of a patch. The summary includes information about files that were changed or added between the releases. It is sent to the standard output device. This is useful for finding out, for example, which files have changed between two dates or revisions. -t A diff of the top two revisions is sent to the standard output device. This is most use- ful for seeing what the last change to a file was. -u Use the unidiff format for the context diffs. Remember that old versions of the patch program can't handle the unidiff format, so if you plan to post this patch to the net you should probably not use -u. -V vn Expand keywords according to the rules current in rcs version vn (the expansion format changed with rcs version 5). Note that this option is no longer accepted. cvs will always expand keywords the way that rcs version 5 does. ###### rdiff examples Suppose you receive mail from foo@example.net asking for an update from release 1.2 to 1.4 of the tc compiler. You have no such patches on hand, but with cvs that can easily be fixed with a command such as this:$ cvs rdiff -c -r FOO1_2 -r FOO1_4 tc | \ Mail -s 'The patches you asked for' foo@example.net
Suppose you have made release 1.3, and forked a branch called R_1_3fix for bug fixes. R_1_3_1 corresponds to release 1.3.1, which was made some time ago. Now, you want to see how much development has been done on the branch. This command can be used:
### server & pserver
Act as a server for a client on stdin/stdout o pserver [-c path]
server [-c path]
o Requires: repository, client conversation on stdin/stdout
o Changes: Repository or, indirectly, client working directory.
The cvs server and pserver commands are used to provide repository access to remote clients and expect a client conversation on stdin & stdout. Typically these commands are launched from inetd or via ssh (see node Remote repositories' in the CVS manual).
server expects that the client has already been authenticated somehow, typically via ssh, and pserver attempts to authenticate the client itself.
Option serverand pserver commands:
-c path
Load configuration from path rather than the default location \$CVSROOT/CVSROOT/config (see node config' in the CVS manual). path must be /etc/cvs.conf or prefixed by /etc/cvs/. This option is supported beginning with cvs release 1.12.13.
### tag
o Synonym: freeze. Add a symbolic tag to checked out version of RCS file o tag [-lQqR] [-b] [-d] symbolic_tag [files...]
o Requires: repository, working directory.
o Changes: repository.
Use this command to assign symbolic tags to the nearest repository versions to your work- ing sources. The tags are applied immediately to the repository, as with rtag, but the versions are supplied implicitly by the CVS records of your working files' history rather than applied explicitly.
One use for tags is to record a snapshot of the current sources when the software freeze date of a project arrives. As bugs are fixed after the freeze date, only those changed sources that are to be part of the release need be re-tagged.
The symbolic tags are meant to permanently record which revisions of which files were used in creating a software distribution. The checkout and update commands allow you to extract an exact copy of a tagged release at any time in the future, regardless of whether files have been changed, added, or removed since the release was tagged.
This command can also be used to delete a symbolic tag, or to create a branch. See the options section below.
#### tag options
These standard options are supported by tag (see node Common options' in the CVS manual, for a complete description of them):
-l
Local; run only in current working directory.
-R
Commit directories recursively. This is on by default.
-Q
Really quiet.
-q
Somewhat quiet.
Two special options are available:
-b
The -b option makes the tag a branch tag (see node Branches' in the CVS manual), allowing concurrent, isolated development. This is most useful for creating a patch to a previously released software distribution.
-d
Delete a tag.
If you use cvs tag -d symbolic_tag, the symbolic tag you specify is deleted instead of being added. Warning: Be very certain of your ground before you delete a tag; doing this effectively discards some historical information, which may later turn out to have been valuable.
### update
Bring work tree in sync with repository o update [-ACdflPpR] [-I name] [-j rev [-j rev]] [-k kflag] [-r tag[:date] | -D date] [-W spec] files...
o Requires: repository, working directory.
o Changes: working directory.
After you've run checkout to create your private copy of source from the common reposi- tory, other developers will continue changing the central source. From time to time, when it is convenient in your development process, you can use the update command from within your working directory to reconcile your work with any revisions applied to the source repository since your last checkout or update. Without the -C option, update will also merge any differences between the local copy of files and their base revisions into any destination revisions specified with -r, -D, or -A.
#### update output
update and checkout keep you informed of their progress by printing a line for each file, preceded by one character indicating the status of the file:
U file
The file was brought up to date with respect to the repository. This is done for any file that exists in the repository but not in your working directory, and for files that you haven't changed but are not the most recent versions available in the repository.
P file
Like U, but the cvs server sends a patch instead of an entire file. This accomplishes the same thing as U using less bandwidth.
A file
The file has been added to your private copy of the sources, and will be added to the source repository when you run commit on the file. This is a reminder to you that the file needs to be committed.
R file
The file has been removed from your private copy of the sources, and will be removed from the source repository when you run commit on the file. This is a reminder to you that the file needs to be committed.
M file
The file is modified in your working directory.
M can indicate one of two states for a file you're working on: either there were no modi- fications to the same file in the repository, so that your file remains as you last saw it; or there were modifications in the repository as well as in your copy, but they were merged successfully, without conflict, in your working directory.
cvs will print some messages if it merges your work, and a backup copy of your working file (as it looked before you ran update) will be made. The exact name of that file is printed while update runs.
C file
A conflict was detected while trying to merge your changes to file with changes from the source repository. file (the copy in your working directory) is now the result of attempting to merge the two revisions; an unmodified copy of your file is also in your working directory, with the name .#file.revision where revision is the revision that your modified file started from. Resolve the conflict as described in see node Conflicts example' in the CVS manual. (Note that some systems automatically purge files that begin with .# if they have not been accessed for a few days. If you intend to keep a copy of your original file, it is a very good idea to rename it.) Under vms, the file name starts with __ rather than .#.
? file
file is in your working directory, but does not correspond to anything in the source repository, and is not in the list of files for cvs to ignore (see the description of the -I option, and see node `cvsignore' in the CVS manual).
## AUTHORS
Dick Grune Original author of the cvs shell script version posted to comp.sources.unix in the volume6 release of December, 1986. Credited with much of the cvs conflict resolution algorithms.
Brian Berliner Coder and designer of the cvs program itself in April, 1989, based on the original work done by Dick.
Jeff Polk Helped Brian with the design of the cvs module and vendor branch support and author of the checkin(1) shell script (the ancestor of cvs import).
Larry Jones, Derek R. Price, and Mark D. Baushke Have helped maintain cvs for many years.
And many others too numerous to mention here.
SEE ALSO The most comprehensive manual for CVS is Version Management with CVS by Per Cederqvist et al. Depending on your system, you may be able to get it with the info CVS command or it may be available as cvs.pdf (Portable Document Format), cvs.ps (PostScript), cvs.texinfo (Tex- info source), or cvs.html.
For CVS updates, more information on documentation, software related to CVS, development of CVS, and more, see:
http://www.nongnu.org/cvs/
ci(1), co(1), cvs(5), cvsbug(8), diff(1), grep(1), patch(1), rcs(1), rcsdiff(1), rcsmerge(1), rlog(1).
|
2017-01-16 21:48:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5602023601531982, "perplexity": 5295.318864466037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00183-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://www.fightfinance.com/?q=580,502,252,151,526,575,221,443,404,155,446,452,137,499,7,201,216,497,264,289,158,51,50,270,352,31,148,488,16,265,152,727,503,733,730,731,732,734,211,347,364,330,26,374,234,57,239,11,15,33,56,207,213,229,255,460,281,333,120,524,533,280,462,406,68,176,350,359,360,226,492,300,366,512,273,563,565,285,561,564,307,414,483,169,184,186,196,331,619,655,616,620,411,738,739,740,745,748,749,541,752,758,764,765,546,143,361,25,35,573,63,380,52,101,117,706,705,704,697,79,674,116,112,621,119,417,340,93,628,232,92,98,248,626,104,113,369,568,202,721,772,773,775,777,778,622,715,717,
|
# Fight Finance
#### CoursesTagsRandomAllRecentScores
How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to$4) if the price grows by 15% pa?
An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive.
All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of returns (IRR's).
Mutually Exclusive Projects Project Costnow ($) Sale price inone year ($) IRR(% pa) Petrol station 9,000,000 11,000,000 22.22 Car wash 800,000 1,100,000 37.50 Car park 70,000 110,000 57.14
Which project should the investor accept?
You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have$50,000 in the bank after that (t=2).
How much can you consume at each time?
A share was bought for $30 (at t=0) and paid its annual dividend of$6 one year later (at t=1).
Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: $r_\text{total}$ , $r_\text{capital}$ , $r_\text{dividend}$. How can a nominal cash flow be precisely converted into a real cash flow? You expect a nominal payment of$100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct?
You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt.
Which is the safest investment? Which will give the highest returns?
Business people make lots of important decisions. Which of the following is the most important long term decision?
One and a half years ago Frank bought a house for $600,000. Now it's worth only$500,000, based on recent similar sales in the area.
The expected total return on Frank's residential property is 7% pa.
He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is$18,617.27.
The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on. You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back$1,000,000 in 2 years, given as a nominal cash flow. How much should you lend to her right now?
The working capital decision primarily affects which part of a business?
What is the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time?
Let the current share price be $p_0$, the expected future share price be $p_1$, the expected future dividend be $d_1$ and the expected return be $r$. Define the expected return as:
$r=\dfrac{p_1-p_0+d_1}{p_0}$
The answer choices are stated using inequalities. As an example, the first answer choice "(a) $0≤p<∞$ and $0≤r< 1$", states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one.
The following cash flows are expected:
• 10 yearly payments of $60, with the first payment in 3 years from now (first payment at t=3). • 1 payment of$400 in 5 years and 6 months (t=5.5) from now.
What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?
Some countries' interest rates are so low that they're zero.
If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five$10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa?
For a price of $1040, Camille will sell you a share which just paid a dividend of$100, and is expected to pay dividends every year forever, growing at a rate of 5% pa.
So the next dividend will be $100(1+0.05)^1=105.00$, and the year after it will be $100(1+0.05)^2=110.25$ and so on.
The required return of the stock is 15% pa.
Would you like to the share or politely ?
The following is the Dividend Discount Model (DDM) used to price stocks:
$$P_0=\dfrac{C_1}{r-g}$$
If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected:
A stock just paid its annual dividend of $9. The share price is$60. The required return of the stock is 10% pa as an effective annual rate.
What is the implied growth rate of the dividend per year?
A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the$10 one tonight will be $10.50 in one year, then in two years it will be$11.025 and so on. The stock's required return is 10% pa.
What is the stock price today and what do you expect the stock price to be tomorrow, approximately?
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$P_0=\frac{d_1}{r-g}$$
A stock pays dividends annually. It just paid a dividend, but the next dividend ($d_1$) will be paid in one year.
According to the DDM, what is the correct formula for the expected price of the stock in 2.5 years?
In the dividend discount model:
$$P_0 = \dfrac{C_1}{r-g}$$
The return $r$ is supposed to be the:
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}$$
Which expression is NOT equal to the expected capital return?
A stock pays semi-annual dividends. It just paid a dividend of $10. The growth rate in the dividend is 1% every 6 months, given as an effective 6 month rate. You estimate that the stock's required return is 21% pa, as an effective annual rate. Using the dividend discount model, what will be the share price? Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a$0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be$0.572 each, and so on in perpetuity.
Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa.
What is the current price of a BHP share?
You own an apartment which you rent out as an investment property.
What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation?
Assume that:
• You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first$2,000 payment.
• The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year.
So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months. Rental payments will increase again at the start of the 25th month (t=24) to be$2,121.80 (=2,000(1+0.03)2), and then they will be constant for the next 12 months until the next year, and so on.
• The required return of the apartment is 8.732% pa, given as an effective annual rate.
• Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments.
Two years ago Fred bought a house for $300,000. Now it's worth$500,000, based on recent similar sales in the area.
Fred's residential property has an expected total return of 8% pa.
He rents his house out for $2,000 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is$23,173.86.
The future value of 12 months of rental payments one year ahead is $25,027.77. What is the expected annual growth rate of the rental payments? In other words, by what percentage increase will Fred have to raise the monthly rent by each year to sustain the expected annual total return of 8%? What is the NPV of the following series of cash flows when the discount rate is 5% given as an effective annual rate? The first payment of$10 is in 4 years, followed by payments every 6 months forever after that which shrink by 2% every 6 months. That is, the growth rate every 6 months is actually negative 2%, given as an effective 6 month rate. So the payment at $t=4.5$ years will be $10(1-0.02)^1=9.80$, and so on.
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0 = \frac{d_1}{r - g}$$
Which expression is NOT equal to the expected dividend yield?
Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts.
BigDiv pays large dividends and ZeroDiv doesn't pay any dividends.
Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk.
Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV.
All things remaining equal, which of the following statements is NOT correct?
A credit card offers an interest rate of 18% pa, compounding monthly.
Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year.
All answers are given in the same order:
$$r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily}$$
On his 20th birthday, a man makes a resolution. He will deposit $30 into a bank account at the end of every month starting from now, which is the start of the month. So the first payment will be in one month. He will write in his will that when he dies the money in the account should be given to charity. The bank account pays interest at 6% pa compounding monthly, which is not expected to change. If the man lives for another 60 years, how much money will be in the bank account if he dies just after making his last (720th) payment? The following cash flows are expected: • 10 yearly payments of$80, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $600 in 5 years and 6 months (t=5.5) from now. What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate? The Australian Federal Government lends money to domestic students to pay for their university education. This is known as the Higher Education Contribution Scheme (HECS). The nominal interest rate on the HECS loan is set equal to the consumer price index (CPI) inflation rate. The interest is capitalised every year, which means that the interest is added to the principal. The interest and principal does not need to be repaid by students until they finish study and begin working. Which of the following statements about HECS loans is NOT correct? A share currently worth$100 is expected to pay a constant dividend of $4 for the next 5 years with the first dividend in one year (t=1) and the last in 5 years (t=5). The total required return is 10% pa. What do you expected the share price to be in 5 years, just after the dividend at that time has been paid? A share’s current price is$60. It’s expected to pay a dividend of $1.50 in one year. The growth rate of the dividend is 0.5% pa and the stock’s required total return is 3% pa. The stock’s price can be modeled using the dividend discount model (DDM): $P_0=\dfrac{C_1}{r-g}$ Which of the following methods is NOT equal to the stock’s expected price in one year and six months (t=1.5 years)? Note that the symbolic formulas shown in each line below do equal the formulas with numbers. The formula is just repeated with symbols and then numbers in case it helps you to identify the incorrect statement more quickly. A stock’s current price is$1. Its expected total return is 10% pa and its long term expected capital return is 4% pa. It pays an annual dividend and the next one will be paid in one year. All rates are given as effective annual rates. The dividend discount model is thought to be a suitable model for the stock. Ignore taxes. Which of the following statements about the stock is NOT correct?
In the dividend discount model (DDM), share prices fall when dividends are paid. Let the high price before the fall be called the peak, and the low price after the fall be called the trough.
$$P_0=\dfrac{C_1}{r-g}$$
Which of the following statements about the DDM is NOT correct?
An investor bought a bond for $100 (at t=0) and one year later it paid its annual coupon of$1 (at t=1). Just after the coupon was paid, the bond price was $100.50 (at t=1). Inflation over the past year (from t=0 to t=1) was 3% pa, given as an effective annual rate. Which of the following statements is NOT correct? The bond investment produced a: An equities analyst is using the dividend discount model to price a company's shares. The company operates domestically and has no plans to expand overseas. It is part of a mature industry with stable positive growth prospects. The analyst has estimated the real required return (r) of the stock and the value of the dividend that the stock just paid a moment before $(C_\text{0 before})$. What is the highest perpetual real growth rate of dividends (g) that can be justified? Select the most correct statement from the following choices. The highest perpetual real expected growth rate of dividends that can be justified is the country's expected: You're advising your superstar client 40-cent who is weighing up buying a private jet or a luxury yacht. 40-cent is just as happy with either, but he wants to go with the more cost-effective option. These are the cash flows of the two options: • The private jet can be bought for$6m now, which will cost $12,000 per month in fuel, piloting and airport costs, payable at the end of each month. The jet will last for 12 years. • Or the luxury yacht can be bought for$4m now, which will cost $20,000 per month in fuel, crew and berthing costs, payable at the end of each month. The yacht will last for 20 years. What's unusual about 40-cent is that he is so famous that he will actually be able to sell his jet or yacht for the same price as it was bought since the next generation of superstar musicians will buy it from him as a status symbol. Bank interest rates are 10% pa, given as an effective annual rate. You can assume that 40-cent will live for another 60 years and that when the jet or yacht's life is at an end, he will buy a new one with the same details as above. Would you advise 40-cent to buy the or the ? Note that the effective monthly rate is $r_\text{eff monthly}=(1+0.1)^{1/12}-1=0.00797414$ Which of the following investable assets are NOT suitable for valuation using PE multiples techniques? Which firms tend to have high forward-looking price-earnings (PE) ratios? Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct? A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: $$r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily}$$ Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par. An 'interest only' loan can also be called a: You just borrowed$400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change. You actually plan to pay more than the required interest payment. You plan to pay$3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month.
At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage? A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset? For a price of$100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to her bond or politely ? For a price of$95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is 100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to the bond or politely ? Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same. Which bond would have the higher current price? Which of the following statements about risk free government bonds is NOT correct? Hint: Total return can be broken into income and capital returns as follows: \begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned} The capital return is the growth rate of the price. The income return is the periodic cash flow. For a bond this is the coupon payment. For a bond that pays fixed semi-annual coupons, how is the annual coupon rate defined, and how is the bond's annual income yield from time 0 to 1 defined mathematically? Let: $P_0$ be the bond price now, $F_T$ be the bond's face value, $T$ be the bond's maturity in years, $r_\text{total}$ be the bond's total yield, $r_\text{income}$ be the bond's income yield, $r_\text{capital}$ be the bond's capital yield, and $C_t$ be the bond's coupon at time t in years. So $C_{0.5}$ is the coupon in 6 months, $C_1$ is the coupon in 1 year, and so on. The coupon rate of a fixed annual-coupon bond is constant (always the same). What can you say about the income return ($r_\text{income}$) of a fixed annual coupon bond? Remember that: $$r_\text{total} = r_\text{income} + r_\text{capital}$$ $$r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}$$ Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures. Select the most correct statement. From its date of issue until maturity, the income return of a fixed annual coupon: An investor bought two fixed-coupon bonds issued by the same company, a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa.
A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price.
In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero.
A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond? Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive. Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures. You just bought a nice dress which you plan to wear once per month on nights out. You bought it a moment ago for$600 (at t=0). In your experience, dresses used once per month last for 6 years.
Your younger sister is a student with no money and wants to borrow your dress once a month when she hits the town. With the increased use, your dress will only last for another 3 years rather than 6.
What is the present value of the cost of letting your sister use your current dress for the next 3 years?
Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new dress when your current one wears out; your sister will only use the current dress, not the next one that you will buy; and the price of a new dress never changes.
When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever.
Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently$1,000 billion (t=0).
In approximately how many years will the company's total dividends be as large as the country's GDP?
A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital).
On which securities must it pay interest or dividend payments in this terrible financial year?
Which of the following statements is NOT correct?
You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume twice as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. How much can you consume at time zero and one? The answer choices are given in the same order. You own a nice suit which you wear once per week on nights out. You bought it one year ago for$600. In your experience, suits used once per week last for 6 years. So you expect yours to last for another 5 years.
Your younger brother said that retro is back in style so he wants to wants to borrow your suit once a week when he goes out. With the increased use, your suit will only last for another 4 years rather than 5.
What is the present value of the cost of letting your brother use your current suit for the next 4 years?
Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new suit when your current one wears out and your brother will not use the new one; your brother will only use your current suit so he will only use it for the next four years; and the price of a new suit never changes.
You own some nice shoes which you use once per week on date nights. You bought them 2 years ago for $500. In your experience, shoes used once per week last for 6 years. So you expect yours to last for another 4 years. Your younger sister said that she wants to borrow your shoes once per week. With the increased use, your shoes will only last for another 2 years rather than 4. What is the present value of the cost of letting your sister use your current shoes for the next 2 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new pair of shoes when your current pair wears out and your sister will not use the new ones; your sister will only use your current shoes so she will only use it for the next 2 years; and the price of new shoes never changes. One year ago you bought$100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other$30,000 was your own wealth or 'equity' in the share assets.
The interest rate on the margin loan was 7.84% pa.
Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa.
What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates.
Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E).
A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by?
Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to.
Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula?
$$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$
Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Sidebar Corp Income Statement for year ending 30th June 2013 $m Sales 405 COGS 100 Depreciation 34 Rent expense 22 Interest expense 39 Taxable Income 210 Taxes at 30% 63 Net income 147 Sidebar Corp Balance Sheet as at 30th June 2013 2012$m $m Inventory 70 50 Trade debtors 11 16 Rent paid in advance 4 3 PPE 700 680 Total assets 785 749 Trade creditors 11 19 Bond liabilities 400 390 Contributed equity 220 220 Retained profits 154 120 Total L and OE 785 749 Note: All figures are given in millions of dollars ($m).
The cash flow from assets was:
Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant?
Remember:
$$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - ΔNWC+IntExp$$
Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Ching-A-Lings Corp Income Statement for year ending 30th June 2013 $m Sales 100 COGS 20 Depreciation 20 Rent expense 11 Interest expense 19 Taxable Income 30 Taxes at 30% 9 Net income 21 Ching-A-Lings Corp Balance Sheet as at 30th June 2013 2012$m $m Inventory 49 38 Trade debtors 14 2 Rent paid in advance 5 5 PPE 400 400 Total assets 468 445 Trade creditors 4 10 Bond liabilities 200 190 Contributed equity 145 145 Retained profits 119 100 Total L and OE 468 445 Note: All figures are given in millions of dollars ($m).
The cash flow from assets was:
Find World Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
World Bar Income Statement for year ending 30th June 2013 $m Sales 300 COGS 150 Operating expense 50 Depreciation 40 Interest expense 10 Taxable income 50 Tax at 30% 15 Net income 35 World Bar Balance Sheet as at 30th June 2013 2012$m $m Assets Current assets 200 230 PPE Cost 400 400 Accumul. depr. 75 35 Carrying amount 325 365 Total assets 525 595 Liabilities Current liabilities 150 205 Non-current liabilities 235 250 Owners' equity Retained earnings 100 100 Contributed equity 40 40 Total L and OE 525 595 Note: all figures above and below are given in millions of dollars ($m).
A man has taken a day off from his casual painting job to relax.
It's the end of the day and he's thinking about the hours that he could have spent working (in the past) which are now:
What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed.
Assume the following:
• The degree takes 3 years to complete and all students pass all subjects.
• There are 2 semesters per year and 4 subjects per semester.
• University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years. • There are 52 weeks per year. • The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19). • The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38). • The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on. • Working full time at the grocery store instead of studying full-time pays$20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
• Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week. • The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual. The NPV of costs from undertaking the university degree is: Your friend is trying to find the net present value of a project. The project is expected to last for just one year with: • a negative cash flow of -$1 million initially (t=0), and
• a positive cash flow of $1.1 million in one year (t=1). The project has a total required return of 10% pa due to its moderate level of undiversifiable risk. Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project. He knows that the opportunity cost of investing the$1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m $(=1m \times 10\%)$ which occurs in one year (t=1). He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year. Your friend has listed a few different ways to find the NPV which are written down below. (I) $-1m + \dfrac{1.1m}{(1+0.1)^1}$ (II) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1$ (III) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$ (IV) $-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$ (V) $-1m + 1.1m - 1.1m \times 0.1$ Which of the above calculations give the correct NPV? Select the most correct answer. Find the cash flow from assets (CFFA) of the following project. Project Data Project life 2 years Initial investment in equipment$6m Depreciation of equipment per year for tax purposes $1m Unit sales per year 4m Sale price per unit$8 Variable cost per unit $3 Fixed costs per year, paid at the end of each year$1.5m Tax rate 30%
Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch$0.9 million when it is sold at t=2.
Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another$0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities.
Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). Value the following business project to manufacture a new product. Project Data Project life 2 yrs Initial investment in equipment$6m Depreciation of equipment per year $3m Expected sale price of equipment at end of project$0.6m Unit sales per year 4m Sale price per unit $8 Variable cost per unit$5 Fixed costs per year, paid at the end of each year $1m Interest expense per year 0 Tax rate 30% Weighted average cost of capital after tax per annum 10% Notes 1. The firm's current assets and current liabilities are$3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by$2m initially (at t = 0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by$0.1m at the end of the first year (t=1).
At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research which was incurred one year ago. Assumptions • All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year. • All rates and cash flows are real. The inflation rate is 3% pa. • All rates are given as effective annual rates. • The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office. What is the expected net present value (NPV) of the project? What is the correlation of a variable X with itself? The corr(X, X) or $\rho_{X,X}$ equals: What is the correlation of a variable X with a constant C? The corr(X, C) or $\rho_{X,C}$ equals: Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%. If the variance of stock A increases but the: • Prices and expected returns of each stock stays the same, • Variance of stock B's returns stays the same, • Correlation of returns between the stocks stays the same. Which of the following statements is NOT correct? The covariance and correlation of two stocks X and Y's annual returns are calculated over a number of years. The units of the returns are in percent per annum $(\% pa)$. What are the units of the covariance $(\sigma_{X,Y})$ and correlation $(\rho_{X,Y})$ of returns respectively? Hint: Visit Wikipedia to understand the difference between percentage points $(\text{pp})$ and percent $(\%)$. What is the covariance of a variable X with a constant C? The cov(X, C) or $\sigma_{X,C}$ equals: Let the variance of returns for a share per month be $\sigma_\text{monthly}^2$. What is the formula for the variance of the share's returns per year $(\sigma_\text{yearly}^2)$? Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average. A mature firm has constant expected future earnings and dividends. Both amounts are equal. So earnings and dividends are expected to be equal and unchanging. Which of the following statements is NOT correct? The below screenshot of Microsoft's (MSFT) details were taken from the Google Finance website on 28 Nov 2014. Some information has been deliberately blanked out. What was MSFT's backwards-looking price-earnings ratio? A stock is expected to pay the following dividends: Cash Flows of a Stock Time (yrs) 0 1 2 3 4 ... Dividend ($) 8 8 8 20 8 ...
After year 4, the dividend will grow in perpetuity at 4% pa. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates.
What is the current price of the stock?
A stock is expected to pay the following dividends:
Cash Flows of a Stock Time (yrs) 0 1 2 3 4 ... Dividend ($) 2 2 2 10 3 ... After year 4, the dividend will grow in perpetuity at 4% pa. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock? The following is the Dividend Discount Model used to price stocks: $$p_0=\frac{d_1}{r-g}$$ All rates are effective annual rates and the cash flows ($d_1$) are received every year. Note that the r and g terms in the above DDM could also be labelled as below: $$r = r_{\text{total, 0}\rightarrow\text{1yr, eff 1yr}}$$ $$g = r_{\text{capital, 0}\rightarrow\text{1yr, eff 1yr}}$$ Which of the following statements is NOT correct? A share pays annual dividends. It just paid a dividend of$2. The growth rate in the dividend is 3% pa. You estimate that the stock's required return is 8% pa. Both the discount rate and growth rate are given as effective annual rates.
Using the dividend discount model, what is the share price?
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0= \frac{c_1}{r-g}$$
Which expression is equal to the expected dividend return?
To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position.
The 'time value of money' is most closely related to which of the following concepts?
"Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices.
Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to:
Let the 'income return' of a bond be the coupon at the end of the period divided by the market price now at the start of the period $(C_1/P_0)$. The expected income return of a premium fixed coupon bond is:
A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress.
Which of the following statements is NOT correct, all things remaining equal?
Where can a private firm's market value of equity be found? It can be sourced from the company's:
There are a number of different formulas involving real and nominal returns and cash flows. Which one of the following formulas is NOT correct? All returns are effective annual rates. Note that the symbol $\approx$ means 'approximately equal to'.
Taking inflation into account when using the DDM can be hard. Which of the following formulas will NOT give a company's current stock price $(P_0)$? Assume that the annual dividend was just paid $(C_0)$, and the next dividend will be paid in one year $(C_1)$.
If the nominal gold price is expected to increase at the same rate as inflation which is 3% pa, which of the following statements is NOT correct?
A stock will pay you a dividend of $2 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 3% pa, so the next dividend after the$2 one tonight will be $2.06 in one year, then in two years it will be$2.1218 and so on. The stock's required return is 8% pa.
What is the stock price today and what do you expect the stock price to be tomorrow, approximately?
A real estate agent says that the price of a house in Sydney Australia is approximately equal to the gross weekly rent times 1000.
What type of valuation method is the real estate agent using?
Which of the following statements is NOT correct? Bond investors:
All other things remaining equal, a project is worse if its:
Two years ago you entered into a fully amortising home loan with a principal of $1,000,000, an interest rate of 6% pa compounding monthly with a term of 25 years. Then interest rates suddenly fall to 4.5% pa (t=0), but you continue to pay the same monthly home loan payments as you did before. How long will it now take to pay off your home loan? Measure the time taken to pay off the home loan from the current time which is 2 years after the home loan was first entered into. Assume that the lower interest rate was given to you immediately after the loan repayment at the end of year 2, which was the 24th payment since the loan was granted. Also assume that rates were and are expected to remain constant. A 4.5% fixed coupon Australian Government bond was issued at par in mid-April 2009. Coupons are paid semi-annually in arrears in mid-April and mid-October each year. The face value is$1,000. The bond will mature in mid-April 2020, so the bond had an original tenor of 11 years.
Today is mid-September 2015 and similar bonds now yield 1.9% pa.
What is the bond's new price? Note: there are 10 semi-annual coupon payments remaining from now (mid-September 2015) until maturity (mid-April 2020); both yields are given as APR's compounding semi-annually; assume that the yield curve was flat before the change in yields, and remained flat afterwards as well.
An investor bought a 5 year government bond with a 2% pa coupon rate at par. Coupons are paid semi-annually. The face value is $100. Calculate the bond's new price 8 months later after yields have increased to 3% pa. Note that both yields are given as APR's compounding semi-annually. Assume that the yield curve was flat before the change in yields, and remained flat afterwards as well. Which of the following statements about the capital and income returns of an interest-only loan is correct? Assume that the yield curve (which shows total returns over different maturities) is flat and is not expected to change. An interest-only loan's expected: An Australian company just issued two bonds: • A 6-month zero coupon bond at a yield of 6% pa, and • A 12 month zero coupon bond at a yield of 7% pa. What is the company's forward rate from 6 to 12 months? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. Over the next year, the management of an unlevered company plans to: • Make$5m in sales, $1.9m in net income and$2m in equity free cash flow (EFCF).
• Pay dividends of $1m. • Complete a$1.3m share buy-back.
Assume that:
• All amounts are received and paid at the end of the year so you can ignore the time value of money.
• The firm has sufficient retained profits to legally pay the dividend and complete the buy back.
• The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year.
How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued?
A European company just issued two bonds, a
• 2 year zero coupon bond at a yield of 8% pa, and a
• 3 year zero coupon bond at a yield of 10% pa.
What is the company's forward rate over the third year (from t=2 to t=3)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.
A European company just issued two bonds, a
• 1 year zero coupon bond at a yield of 8% pa, and a
• 2 year zero coupon bond at a yield of 10% pa.
What is the company's forward rate over the second year (from t=1 to t=2)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.
In the below term structure of interest rates equation, all rates are effective annual yields and the numbers in subscript represent the years that the yields are measured over:
$$(1+r_{0-3})^3 = (1+r_{0-1})(1+r_{1-2})(1+r_{2-3})$$
Which of the following statements is NOT correct?
The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero.
Considering this, which of the following statements is NOT correct?
The "interest expense" on a company's annual income statement is equal to the cash interest payments (but not principal payments) made to debt holders during the year. or ?
A three year project's NPV is negative. The cash flows of the project include a negative cash flow at the very start and positive cash flows over its short life. The required return of the project is 10% pa. Select the most correct statement.
An established mining firm announces that it expects large losses over the following year due to flooding which has temporarily stalled production at its mines. Which statement(s) are correct?
(i) If the firm adheres to a full dividend payout policy it will not pay any dividends over the following year.
(ii) If the firm wants to signal that the loss is temporary it will maintain the same level of dividends. It can do this so long as it has enough retained profits.
(iii) By law, the firm will be unable to pay a dividend over the following year because it cannot pay a dividend when it makes a loss.
Select the most correct response:
A firm can issue 5 year annual coupon bonds at a yield of 8% pa and a coupon rate of 12% pa.
The beta of its levered equity is 1. Five year government bonds yield 5% pa with a coupon rate of 6% pa. The market's expected dividend return is 4% pa and its expected capital return is 6% pa.
The firm's debt-to-equity ratio is 2:1. The corporate tax rate is 30%.
What is the firm's after-tax WACC? Assume a classical tax system.
Mr Blue, Miss Red and Mrs Green are people with different utility functions.
Note that a fair gamble is a bet that has an expected value of zero, such as paying $0.50 to win$1 in a coin flip with heads or nothing if it lands tails. Fairly priced insurance is when the expected present value of the insurance premiums is equal to the expected loss from the disaster that the insurance protects against, such as the cost of rebuilding a home after a catastrophic fire.
Which of the following statements is NOT correct?
Mr Blue, Miss Red and Mrs Green are people with different utility functions.
Which of the following statements is NOT correct?
Mr Blue, Miss Red and Mrs Green are people with different utility functions.
Each person has $256 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose$256. Each player can flip a coin and if they flip heads, they receive $256. If they flip tails then they will lose$256. Which of the following statements is NOT correct?
Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?
Which statement is the most correct?
A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates.
Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged.
What do you think was the stock's historical return over the last year, given as an effective annual rate?
A firm changes its capital structure by issuing a large amount of equity and using the funds to repay debt. Its assets are unchanged. Ignore interest tax shields.
According to the Capital Asset Pricing Model (CAPM), which statement is correct?
According to the theory of the Capital Asset Pricing Model (CAPM), total risk can be broken into two components, systematic risk and idiosyncratic risk. Which of the following events would be considered a systematic, undiversifiable event according to the theory of the CAPM?
Your friend claims that by reading 'The Economist' magazine's economic news articles, she can identify shares that will have positive abnormal expected returns over the next 2 years. Assuming that her claim is true, which statement(s) are correct?
(i) Weak form market efficiency is broken.
(ii) Semi-strong form market efficiency is broken.
(iii) Strong form market efficiency is broken.
(iv) The asset pricing model used to measure the abnormal returns (such as the CAPM) is either wrong (mis-specification error) or is measured using the wrong inputs (data errors) so the returns may not be abnormal but rather fair for the level of risk.
Select the most correct response:
A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year.
This fee is charged regardless of whether the fund makes gains or losses on your money.
The fund offers to invest your money in shares which have an expected return of 10% pa before fees.
You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that: • The fund has no private information. • Markets are weak and semi-strong form efficient. • The fund's transaction costs are negligible. • The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible. • The fund invests its fees in the same companies as it invests your funds in, but with no fees. The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years. A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing$100,000 in the fund and keeping it there for 40 years when you plan to retire.
What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that:
• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.
A stock's correlation with the market portfolio increases while its total risk is unchanged. What will happen to the stock's expected return and systematic risk?
Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct?
A stock has a beta of 0.5. Its next dividend is expected to be 3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates. What is the price of the stock now? Which statement(s) are correct? (i) All stocks that plot on the Security Market Line (SML) are fairly priced. (ii) All stocks that plot above the Security Market Line (SML) are overpriced. (iii) All fairly priced stocks that plot on the Capital Market Line (CML) have zero idiosyncratic risk. Select the most correct response: A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct? The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model): $$p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}$$ Which, since $c_1/p_0$ is the income return ($r_\text{income}$), can be expressed as: $$r_\text{total}=r_\text{income}+r_\text{capital}$$ So the total return of an asset is the income component plus the capital or price growth component. Another way to break up total return is to use the Capital Asset Pricing Model: $$r_\text{total}=r_\text{f}+β(r_\text{m}- r_\text{f})$$ $$r_\text{total}=r_\text{time value}+r_\text{risk premium}$$ So the risk free rate is the time value of money and the term $β(r_\text{m}- r_\text{f})$ is the compensation for taking on systematic risk. Using the above theory and your general knowledge, which of the below equations, if any, are correct? (I) $r_\text{income}=r_\text{time value}$ (II) $r_\text{income}=r_\text{risk premium}$ (III) $r_\text{capital}=r_\text{time value}$ (IV) $r_\text{capital}=r_\text{risk premium}$ (V) $r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}$ Which of the equations are correct? The Australian cash rate is expected to be 2% pa over the next one year, while the Japanese cash rate is expected to be 0% pa, both given as nominal effective annual rates. The current exchange rate is 100 JPY per AUD. What is the implied 1 year forward foreign exchange rate? Assume that there exists a perfect world with no transaction costs, no asymmetric information, no taxes, no agency costs, equal borrowing rates for corporations and individual investors, the ability to short the risk free asset, semi-strong form efficient markets, the CAPM holds, investors are rational and risk-averse and there are no other market frictions. For a firm operating in this perfect world, which statement(s) are correct? (i) When a firm changes its capital structure and/or payout policy, share holders' wealth is unaffected. (ii) When the idiosyncratic risk of a firm's assets increases, share holders do not expect higher returns. (iii) When the systematic risk of a firm's assets increases, share holders do not expect higher returns. Select the most correct response: The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Assume the following: • Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola. • Motorola had a 20% after-tax WACC before it merged with Google. • Google and Motorola have the same level of gearing. • Both companies operate in a classical tax system. You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's: One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT). \begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned} \\ Does this annual FFCF or the annual interest tax shield? A company conducts a 1 for 5 rights issue at a subscription price of7 when the pre-announcement stock price was $10. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. Ignore all taxes, transaction costs and signalling effects. Currently, a mining company has a share price of$6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of$0.30 in 1 year.
If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged. Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only $(P_\text{0 one-off})$ , and the second assumes that the increase is permanent $(P_\text{0 permanent})$:
Note: When a firm makes excess profits they sometimes pay them out as special dividends. Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist.
Fred owns some Commonwealth Bank (CBA) shares. He has calculated CBA’s monthly returns for each month in the past 20 years using this formula:
$$r_\text{t monthly}=\ln \left( \dfrac{P_t}{P_{t-1}} \right)$$
He then took the arithmetic average and found it to be 1% per month using this formula:
$$\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.01=1\% \text{ per month}$$
He also found the standard deviation of these monthly returns which was 5% per month:
$$\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.05=5\%\text{ per month}$$
Which of the below statements about Fred’s CBA shares is NOT correct? Assume that the past historical average return is the true population average of future expected returns.
A firm issues debt and uses the funds to buy back equity. Assume that there are no costs of financial distress or transactions costs. Which of the following statements about interest tax shields is NOT correct?
Use the below information to value a levered company with constant annual perpetual cash flows from assets. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flow from assets including and excluding interest tax shields are constant (but not equal to each other).
Data on a Levered Firm with Perpetual Cash Flows Item abbreviation Value Item full name $\text{CFFA}_\text{U}$ $48.5m Cash flow from assets excluding interest tax shields (unlevered) $\text{CFFA}_\text{L}$$50m Cash flow from assets including interest tax shields (levered) $g$ 0% pa Growth rate of cash flow from assets, levered and unlevered $\text{WACC}_\text{BeforeTax}$ 10% pa Weighted average cost of capital before tax $\text{WACC}_\text{AfterTax}$ 9.7% pa Weighted average cost of capital after tax $r_\text{D}$ 5% pa Cost of debt $r_\text{EL}$ 11.25% pa Cost of levered equity $D/V_L$ 20% pa Debt to assets ratio, where the asset value includes tax shields $t_c$ 30% Corporate tax rate
What is the value of the levered firm including interest tax shields?
Below is a graph of 3 peoples’ utility functions, Mr Blue (U=W^(1/2) ), Miss Red (U=W/10) and Mrs Green (U=W^2/1000). Assume that each of them currently have $50 of wealth. Which of the following statements about them is NOT correct? (a) Mr Blue would prefer to invest his wealth in a well diversified portfolio of stocks rather than a single stock, assuming that all stocks had the same total risk and return. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. A stock has a beta of 0.5. In the last 5 minutes, the federal government unexpectedly raised taxes. Over this time the share market fell by 3%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate? The capital market line (CML) is shown in the graph below. The total standard deviation is denoted by σ and the expected return is μ. Assume that markets are efficient so all assets are fairly priced. Which of the below statements is NOT correct? An economy has only two investable assets: stocks and cash. Stocks had a historical nominal average total return of negative two percent per annum (-2% pa) over the last 20 years. Stocks are liquid and actively traded. Stock returns are variable, they have risk. Cash is riskless and has a nominal constant return of zero percent per annum (0% pa), which it had in the past and will have in the future. Cash can be kept safely at zero cost. Cash can be converted into shares and vice versa at zero cost. The nominal total return of the shares over the next year is expected to be: If a variable, say X, is normally distributed with mean $\mu$ and variance $\sigma^2$ then mathematicians write $X \sim \mathcal{N}(\mu, \sigma^2)$. If a variable, say Y, is log-normally distributed and the underlying normal distribution has mean $\mu$ and variance $\sigma^2$ then mathematicians write $Y \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)$. The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Select the most correct statement: The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Let $P_1$ be the unknown price of a stock in one year. $P_1$ is a random variable. Let $P_0 = 1$, so the share price now is$1. This one dollar is a constant, it is not a variable.
Which of the below statements is NOT correct? Financial practitioners commonly assume that the shape of the PDF represented in the colour:
|
2019-09-19 08:12:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3479754328727722, "perplexity": 1937.1724994655456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573465.18/warc/CC-MAIN-20190919081032-20190919103032-00376.warc.gz"}
|
https://math.libretexts.org/Courses/Mission_College/Math_C_Intermediate_Algebra_(Carr)_Spring_2020/04%3A_Quadratic_Functions_and_Equations/4.07%3A_Solve_Applications_of_Quadratic_Equations
|
Section 4.7: Solve Applications of Quadratic Equations
Learning Objectives
By the end of this section, you will be able to:
• Solve applications modeled by quadratic equations
Before you get started, take this readiness quiz.
1. The sum of two consecutive odd numbers is $$−100$$. Find the numbers.
If you missed this problem, review Example 2.18.
2. Solve: $$\frac{2}{x+1}+\frac{1}{x-1}=\frac{1}{x^{2}-1}$$.
If you missed this problem, review Example 7.35.
3. Find the length of the hypotenuse of a right triangle with legs $$5$$ inches and $$12$$ inches.
If you missed this problem, review Example 2.34.
Solve Applications Modeled by Quadratic Equations
We solved some applications that are modeled by quadratic equations earlier, when the only method we had to solve them was factoring. Now that we have more methods to solve quadratic equations, we will take another look at applications.
Let’s first summarize the methods we now have to solve quadratic equations.
Methods to Solve Quadratic Equations
1. Factoring
2. Square Root Property
3. Completing the Square
As you solve each equation, choose the method that is most convenient for you to work the problem. As a reminder, we will copy our usual Problem-Solving Strategy here so we can follow the steps.
Use a Problem-Solving Strategy
1. Read the problem. Make sure all the words and ideas are understood.
2. Identify what we are looking for.
3. Name what we are looking for. Choose a variable to represent that quantity.
4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Then, translate the English sentence into an algebraic equation.
5. Solve the equation using algebra techniques.
6. Check the answer in the problem and make sure it makes sense.
7. Answer the question with a complete sentence.
We have solved number applications that involved consecutive even and odd integers, by modeling the situation with linear equations. Remember, we noticed each even integer is $$2$$ more than the number preceding it. If we call the first one $$n$$, then the next one is $$n+2$$. The next one would be $$n+2+2$$ or $$n+4$$. This is also true when we use odd integers. One set of even integers and one set of odd integers are shown below.
$$\begin{array}{cl}{}&{\text{Consecutive even integers}}\\{}& {64,66,68}\\ {n} & {1^{\text { st }} \text { even integer }} \\ {n+2} & {2^{\text { nd }} \text { consecutive even integer }} \\ {n+4} & {3^{\text { rd }} \text { consecutive even integer }}\end{array}$$
$$\begin{array}{cl}{}&{\text{Consecutive odd integers}}\\{}& {77,79,81}\\ {n} & {1^{\text { st }} \text { odd integer }} \\ {n+2} & {2^{\text { nd }} \text { consecutive odd integer }} \\ {n+4} & {3^{\text { rd }} \text { consecutive odd integer }}\end{array}$$
Some applications of odd or even consecutive integers are modeled by quadratic equations. The notation above will be helpful as you name the variables.
Example $$\PageIndex{1}$$
The product of two consecutive odd integers is $$195$$. Find the integers.
Solution:
Step 1: Read the problem
Step 2: Identify what we are looking for.
We are looking for two consecutive odd integers.
Step 3: Name what we are looking for.
Let $$n=$$ the first odd integer.
$$n+2=$$ the next odd integer.
Step 4: Translate into an equation. State the problem in one sentence.
“The product of two consecutive odd integers is $$195$$.” The product of the first odd integer and the second odd integer is $$195$$.
Translate into an equation.
$$n(n+2)=195$$
Step 5: Solve the equation. Distribute.
$$n^{2}+2 n=195$$
Write the equation in standard form.
$$n^{2}+2 n-195=0$$
Factor.
$$(n+15)(n-13)=0$$
Use the Zero Product Property.
$$n+15=0 \quad n-13=0$$
Solve each equation.
$$n=-15, \quad n=13$$
There are two values of $$n$$ that are solutions. This will give us two pairs of consecutive odd integers for our solution.
$$\begin{array}{cc}{\text { First odd integer } n=13} & {\text { First odd integer } n=-15} \\ {\text { next odd integer } n+2} & {\text { next odd integer } n+2} \\ {13+2} & {-15+2} \\ {15} & {-13}\end{array}$$
Step 6: Check the answer.
Do these pairs work? Are they consecutive odd integers?
\begin{aligned} 13,15 & \text { yes } \\-13,-15 & \text { yes } \end{aligned}
Is their product $$195$$?
\begin{aligned} 13 \cdot 15 &=195 &\text{yes} \\-13(-15) &=195 & \text { yes } \end{aligned}
Step 7: Answer the question.
Two consecutive odd integers whose product is $$195$$ are $$13,15$$ and $$-13,-15$$.
Exercise $$\PageIndex{1}$$
The product of two consecutive odd integers is $$99$$. Find the integers.
The two consecutive odd integers whose product is $$99$$ are $$9, 11$$, and $$−9, −11$$.
Exercise $$\PageIndex{2}$$
The product of two consecutive even integers is $$168$$. Find the integers.
The two consecutive even integers whose product is $$128$$ are $$12, 14$$ and $$−12, −14$$.
We will use the formula for the area of a triangle to solve the next example.
Definition $$\PageIndex{1}$$
Area of a Triangle
For a triangle with base, $$b$$, and height, $$h$$, the area, $$A$$, is given by the formula $$A=\frac{1}{2} b h$$.
Recall that when we solve geometric applications, it is helpful to draw the figure.
Example $$\PageIndex{2}$$
An architect is designing the entryway of a restaurant. She wants to put a triangular window above the doorway. Due to energy restrictions, the window can only have an area of $$120$$ square feet and the architect wants the base to be $$4$$ feet more than twice the height. Find the base and height of the window.
Solution:
Step 1: Read the problem. Draw a picture. Step 2: Identify what we are looking for. We are looking for the base and height. Step 3: Name what we are looking for. Let $$h=$$ the height of the triangle. $$2h+4=$$ the base of the triangle. Step 4: Translate into an equation. We know the area. Write the formula for the area of a triangle. $$A=\frac{1}{2} b h$$ Step 5: Solve the equation. Substitute in the values. $$120=\frac{1}{2}(2 h+4) h$$ Distribute. $$120=h^{2}+2 h$$ This is a quadratic equation, rewrite it in standard form. $$h^{2}+2 h-120=0$$ Factor. $$(h-10)(h+12)=0$$ Use the Zero Product Property. $$h-10=0 \quad h+12=0$$ Simplify. $$h=10, \quad \cancel{h=-12}$$ Since $$h$$ is the height of a window, a value of $$h=-12$$ does not make sense. The height of the triangle $$h=10$$. The base of the triangle $$2h+4$$. $$2 \cdot 10+4$$ $$24$$ Step 6: Check the answer. Does a triangle with height $$10$$ and base $$24$$ have area $$120$$? Yes. Step 7: Answer the question. The height of the triangular window is $$10$$ feet and the base is $$24$$ feet.
Exercise $$\PageIndex{3}$$
Find the base and height of a triangle whose base is four inches more than six times its height and has an area of $$456$$ square inches.
The height of the triangle is $$12$$ inches and the base is $$76$$ inches.
Exercise $$\PageIndex{4}$$
If a triangle that has an area of $$110$$ square feet has a base that is two feet less than twice the height, what is the length of its base and height?
The height of the triangle is $$11$$ feet and the base is $$20$$ feet.
In the two preceding examples, the number in the radical in the Quadratic Formula was a perfect square and so the solutions were rational numbers. If we get an irrational number as a solution to an application problem, we will use a calculator to get an approximate value.
We will use the formula for the area of a rectangle to solve the next example.
Definition $$\PageIndex{2}$$
Area of a Rectangle
For a rectangle with length, $$L$$, and width, $$W$$, the area, $$A$$, is given by the formula $$A=LW$$.
Example $$\PageIndex{3}$$
Mike wants to put $$150$$ square feet of artificial turf in his front yard. This is the maximum area of artificial turf allowed by his homeowners association. He wants to have a rectangular area of turf with length one foot less than $$3$$ times the width. Find the length and width. Round to the nearest tenth of a foot.
Solution:
Step 1: Read the problem. Draw a picture. Step 2: Identify what we are looking for. We are looking for the length and width. Step 3: Name what we are looking for. Let $$w=$$ the width of the rectangle. $$3w-1=$$ the length of the rectangle Step 4: Translate into an equation. We know the area. Write the formula for the area of a rectangle. Step 5: Solve the equation. Substitute in the values. Distribute. This is a quadratic equation; rewrite it in standard form. Solve the equation using the Quadratic Formula. Identify the $$a,b,c$$ values. Write the Quadratic Formula. Then substitute in the values of $$a,b,c$$. Simplify. Figure 9.5.13 Rewrite to show two solutions. Approximate the answers using a calculator. We eliminate the negative solution for the width. Step 6: Check the answer. Make sure that the answers make sense. Since the answers are approximate, the area will not come out exactly to $$150$$. Step 7: Answer the question. The width of the rectangle is approximately $$7.2$$ feet and the length is approximately $$20.6$$ feet.
Exercise $$\PageIndex{5}$$
The length of a $$200$$ square foot rectangular vegetable garden is four feet less than twice the width. Find the length and width of the garden, to the nearest tenth of a foot.
The length of the garden is approximately $$18$$ feet and the width $$11$$ feet.
Exercise $$\PageIndex{6}$$
A rectangular tablecloth has an area of $$80$$ square feet. The width is $$5$$ feet shorter than the length.What are the length and width of the tablecloth to the nearest tenth of a foot?
The length of the tablecloth is approximately $$11.8$$ feet and the width $$6.8$$ feet.
The Pythagorean Theorem gives the relation between the legs and hypotenuse of a right triangle. We will use the Pythagorean Theorem to solve the next example.
Definition $$\PageIndex{3}$$
Pythagorean Theorem
In any right triangle, where $$a$$ and $$b$$ are the lengths of the legs, and $$c$$ is the length of the hypotenuse, $$a^{2}+b^{2}=c^{2}$$.
Example $$\PageIndex{4}$$
Rene is setting up a holiday light display. He wants to make a ‘tree’ in the shape of two right triangles, as shown below, and has two $$10$$-foot strings of lights to use for the sides. He will attach the lights to the top of a pole and to two stakes on the ground. He wants the height of the pole to be the same as the distance from the base of the pole to each stake. How tall should the pole be?
Solution:
Step 1: Read the problem. Draw a picture. Step 2: Identify what we are looking for. We are looking for the height of the pole. Step 3: Name what we are looking for. The distance from the base of the pole to either stake is the same as the height of the pole. Let $$x=$$ the height of the pole. $$x=$$ the distance from pole to stake Each side is a right triangle. We draw a picture of one of them. Figure 9.5.18 Step 4: Translate into an equation. We can use the Pythagorean Theorem to solve for $$x$$. Write the Pythagorean Theorem. $$a^{2}+b^{2}=c^{2}$$ Step 5: Solve the equation. Substitute. $$x^{2}+x^{2}=10^{2}$$ Simplify. $$2 x^{2}=100$$ Divide by $$2$$ to isolate the variable. $$\frac{2 x^{2}}{2}=\frac{100}{2}$$ Simplify. $$x^{2}=50$$ Use the Square Root Property. $$x=\pm \sqrt{50}$$ Simplify the radical. $$x=\pm 5 \sqrt{2}$$ Rewrite to show two solutions. If we approximate this number to the nearest tenth with a calculator, we find $$x≈7.1$$. Step 6: Check the answer. Check on your own in the Pythagorean Theorem. Step 7: Answer the question. The pole should be about $$7.1$$ feet tall.
Exercise $$\PageIndex{7}$$
The sun casts a shadow from a flag pole. The height of the flag pole is three times the length of its shadow. The distance between the end of the shadow and the top of the flag pole is $$20$$ feet. Find the length of the shadow and the length of the flag pole. Round to the nearest tenth.
The length of the flag pole’s shadow is approximately $$6.3$$ feet and the height of the flag pole is $$18.9$$ feet.
Exercise $$\PageIndex{8}$$
The distance between opposite corners of a rectangular field is four more than the width of the field. The length of the field is twice its width. Find the distance between the opposite corners. Round to the nearest tenth.
The distance between the opposite corners is approximately $$7.2$$ feet.
The height of a projectile shot upward from the ground is modeled by a quadratic equation. The initial velocity, $$v_{0}$$, propels the object up until gravity causes the object to fall back down.
Definition $$\PageIndex{4}$$
The height in feet, $$h$$, of an object shot upwards into the air with initial velocity, $$v_{0}$$, after $$t$$ seconds is given by the formula
We can use this formula to find how many seconds it will take for a firework to reach a specific height.
Example $$\PageIndex{5}$$
A firework is shot upwards with initial velocity $$130$$ feet per second. How many seconds will it take to reach a height of $$260$$ feet? Round to the nearest tenth of a second.
Solution:
Step 1: Read the problem. Step 2: Identify what we are looking for. We are looking for the number of seconds, which is time. Step 3: Name what we are looking for. Let $$t=$$ the number of seconds. Step 4: Translate into an equation. Use the formula. Step 5: Solve the equation. We know the velocity $$v_{0}$$ is $$130$$ feet per second. The height is $$260$$ feet. Substitute the values. This is a quadratic equation, rewrite it in standard form. Solve the equation using the Quadratic Formula. Identify the values of $$a, b, c$$. Write the Quadratic Formula. Then substitute in the values of $$a,b,c$$. Simplify. Figure 9.5.26 Rewrite to show two solutions. Approximate the answer with a calculator. Step 6: Check the answer. The check is left to you. Step 7: Answer the question. The firework will go up and then fall back down. As the firework goes up, it will reach $$260$$ feet after approximately $$3.6$$ seconds. It will also pass that height on the way down at $$4.6$$ seconds.
Exercise $$\PageIndex{9}$$
An arrow is shot from the ground into the air at an initial speed of $$108$$ ft/s. Use the formula $$h=-16 t^{2}+v_{0} t$$ to determine when the arrow will be $$180$$ feet from the ground. Round the nearest tenth.
The arrow will reach $$180$$ feet on its way up after $$3$$ seconds and again on its way down after approximately $$3.8$$ seconds.
Exercise $$\PageIndex{10}$$
A man throws a ball into the air with a velocity of $$96$$ ft/s. Use the formula $$h=-16 t^{2}+v_{0} t$$ to determine when the height of the ball will be $$48$$ feet. Round to the nearest tenth.
The ball will reach $$48$$ feet on its way up after approximately $$.6$$ second and again on its way down after approximately $$5.4$$ seconds.
We have solved uniform motion problems using the formula $$D=rt$$ in previous chapters. We used a table like the one below to organize the information and lead us to the equation.
The formula $$D=rt$$ assumes we know $$r$$ and $$t$$ and use them to find $$D$$. If we know $$D$$ and $$r$$ and need to find $$t$$, we would solve the equation for $$t$$ and get the formula $$t=\frac{D}{r}$$.
Some uniform motion problems are also modeled by quadratic equations.
Example $$\PageIndex{6}$$
Professor Smith just returned from a conference that was $$2,000$$ miles east of his home. His total time in the airplane for the round trip was $$9$$ hours. If the plane was flying at a rate of $$450$$ miles per hour, what was the speed of the jet stream?
Solution:
This is a uniform motion situation. A diagram will help us visualize the situation.
We fill in the chart to organize the information.
We are looking for the speed of the jet stream. Let $$r=$$ the speed of the jet stream.
When the plane flies with the wind, the wind increases its speed and so the rate is $$450 + r$$.
When the plane flies against the wind, the wind decreases its speed and the rate is $$450 − r$$.
Write in the rates. Write in the distances. Since $$D=r⋅t$$, we solve for $$t$$ and get $$t=\frac{D}{r}$$. We divide the distance by the rate in each row, and place the expression in the time column. We know the times add to $$9$$ and so we write our equation. $$\frac{2000}{450-r}+\frac{2000}{450+r}=9$$ We multiply both sides by the LCD. $$(450-r)(450+r)\left(\frac{2000}{450-r}+\frac{2000}{450+r}\right)=9(450-r)(450+r)$$ Simplify. $$2000(450+r)+2000(450-r)=9(450-r)(450+r)$$ Factor the $$2,000$$. $$2000(450+r+450-r)=9\left(450^{2}-r^{2}\right)$$ Solve. $$2000(900)=9\left(450^{2}-r^{2}\right)$$ Divide by $$9$$. $$2000(100)=450^{2}-r^{2}$$ Simplify. \begin{aligned}200000&=202500-r^{2} \\ -2500&=-r^{2}\\ 50&=r\end{aligned}\ The speed of the jet stream is $$50$$ mph. Check: Is $$50$$ mph a reasonable speed for the jet stream? Yes. If the plane is traveling $$450$$ mph and the wind is $$50$$ mph, Tailwind $$450+50=500 \mathrm{mph} \quad \frac{2000}{500}=4$$ hours Headwind $$450-50=400 \mathrm{mph} \quad \frac{2000}{400}=5$$ hours The times add to $$9$$ hours, so it checks.
The speed of the jet stream was $$50$$ mph.
Exercise $$\PageIndex{11}$$
MaryAnne just returned from a visit with her grandchildren back east. The trip was $$2400$$ miles from her home and her total time in the airplane for the round trip was $$10$$ hours. If the plane was flying at a rate of $$500$$ miles per hour, what was the speed of the jet stream?
The speed of the jet stream was $$100$$ mph.
Exercise $$\PageIndex{12}$$
Gerry just returned from a cross country trip. The trip was $$3000$$ miles from his home and his total time in the airplane for the round trip was $$11$$ hours. If the plane was flying at a rate of $$550$$ miles per hour, what was the speed of the jet stream?
The speed of the jet stream was $$50$$ mph.
Work applications can also be modeled by quadratic equations. We will set them up using the same methods we used when we solved them with rational equations.We’ll use a similar scenario now.
Example $$\PageIndex{7}$$
The weekly gossip magazine has a big story about the presidential election and the editor wants the magazine to be printed as soon as possible. She has asked the printer to run an extra printing press to get the printing done more quickly. Press #1 takes $$12$$ hours more than Press #2 to do the job and when both presses are running they can print the job in $$8$$ hours. How long does it take for each press to print the job alone?
Solution:
This is a work problem. A chart will help us organize the information.
We are looking for how many hours it would take each press separately to complete the job.
Let $$x=$$ the number of hours for Press #2 to complete the job. Enter the hours per job for Press #1, Press #2, and when they work together. The part completed by Press #1 plus the part completed by Press #2 equals the amount completed together. Translate to an equation. Solve. Multiply by the LCD, 8$$x(x+12)$$. Simplify. Figure 9.5.37 Figure 9.5.38 Solve. Figure 9.5.40 Figure 9.5.41 Since the idea of negative hours does not make sense, we use the values $$x=12$$. Figure 9.5.43 Write our sentence answer. Press #1 would take $$24$$ hours and Press #2 would take $$12$$ hours to do the job alone.
Exercise $$\PageIndex{13}$$
The weekly news magazine has a big story naming the Person of the Year and the editor wants the magazine to be printed as soon as possible. She has asked the printer to run an extra printing press to get the printing done more quickly. Press #1 takes $$6$$ hours more than Press #2 to do the job and when both presses are running they can print the job in $$4$$ hours. How long does it take for each press to print the job alone?
Press #1 would take $$12$$ hours, and Press #2 would take $$6$$ hours to do the job alone.
Exercise $$\PageIndex{14}$$
Erlinda is having a party and wants to fill her hot tub. If she only uses the red hose it takes $$3$$ hours more than if she only uses the green hose. If she uses both hoses together, the hot tub fills in $$2$$ hours. How long does it take for each hose to fill the hot tub?
The red hose take $$6$$ hours and the green hose take $$3$$ hours alone.
Access these online resources for additional instruction and practice with solving applications modeled by quadratic equations.
Key Concepts
• Methods to Solve Quadratic Equations
• Factoring
• Square Root Property
• Completing the Square
• How to use a Problem-Solving Strategy.
1. Read the problem. Make sure all the words and ideas are understood.
2. Identify what we are looking for.
3. Name what we are looking for. Choose a variable to represent that quantity.
4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Then, translate the English sentence into an algebra equation.
5. Solve the equation using good algebra techniques.
6. Check the answer in the problem and make sure it makes sense.
7. Answer the question with a complete sentence.
• Area of a Triangle
• For a triangle with base, $$b$$, and height, $$h$$, the area, $$A$$, is given by the formula $$A=\frac{1}{2}bh$$.
• Area of a Rectangle
• For a rectangle with length,$$L$$, and width, $$W$$, the area, $$A$$, is given by the formula $$A=LW$$.
• Pythagorean Theorem
• In any right triangle, where $$a$$ and $$b$$ are the lengths of the legs, and $$c$$ is the length of the hypotenuse, $$a^{2}+b^{2}=c^{2}$$.
• Projectile motion
• The height in feet, $$h$$, of an object shot upwards into the air with initial velocity, $$v_{0}$$, after $$t$$ seconds is given by the formula $$h=-16 t^{2}+v_{0} t$$.
|
2021-05-05 22:56:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6788803339004517, "perplexity": 350.8254434473763}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00576.warc.gz"}
|
https://math.meta.stackexchange.com/questions/743/why-use-mathjax-its-so-slow/11520
|
# Why use MathJax? It's so slow [closed]
I found that MathJax is extremely slow. Is there a better, lighter and faster Javascript library that can render mathematical notation equally well?
• It may be your browser that is slow. What browser and version are you using? Have you tried using Chrome or Firefox? – Bill Dubuque Aug 30 '10 at 15:03
• @Bill, I am using Chrome, latest dev build. – Graviton Sep 2 '10 at 14:37
• I agree that it's very slow. I often see the LaTeX formatted text as the page is being Typeset by MathJax. I'm using Chrome 12, Ubuntu 11.04, and a fast computer, and the performance even for a single equation on an empty page is quite poor. – slacy May 25 '11 at 23:54
• @Graviton Huh? MathJax slow? Maybe you encountered a page entirely filled with math formulas, which MathJax will take time compiling it. – user93957 Nov 3 '13 at 15:38
• Since some feel it more important to engage in useless debate and downvote actual concrete answers to the question, I guess I'll post this as a comment: See meta.math.stackexchange.com/questions/16809/… and khan.github.io/KaTeX – Jeff Ward Jun 27 '16 at 22:10
• @JeffWard While the written question is indeed "is there a faster alternative", the obvious implicit context is "that we can use on math.SE". People were telling you in the comments that it had already been discussed here and found not satisfactory. This isn't a general software recommendation Q&A website, this is a website to discuss what happens on math.SE, in case you hadn't noticed. – Najib Idrissi Jun 28 '16 at 7:51
• I'm voting to close this question as off-topic because it is not current anymore. If somebody wishes to revisit this matter I feel it is better to make a new start. – quid Jun 28 '16 at 9:29
• @quid: Thank you. – Asaf Karagila Jun 28 '16 at 21:32
• Not sure if this is relevant anymore, but Mathjax has a 1 second delay when doing transforms. You can speed this up by setting MathJax.Hub.processSectionDelay = 0 – Matt Aug 19 '16 at 13:43
Mathjax performance depends on several factors, like:
• the browser you use
• the hardware in your computer
If you want to improve the performance of mathjax, you think of a couple of things:
1. Use a better computer. I did a short test, and if run a page with much mathjax at my 5 year old laptop, it takes around 4 times longer, then if I run the page at my newest computer.
2. Download local fonts. There are two options. You can either download the STIX fonts locally, or download the TeX fonts locally. For the TeX fonts, download mathjax 2.2 from this page: http://www.mathjax.org/download/. Once downloaded, go to this map: fonts\HTML-CSS\TeX\otf and install all the fonts in this map. If you prefer the look of the STIX fonts, you can download them from here: http://www.stixfonts.org/
3. Use Firefox in combination with MathML rendering I just found this out, and I'm amazed by how much faster MathML rendering is compared with HTML-CSS rendering. This only works in firefox. You can turn the MathML rendering on by right clicking on a math formula:
Math Settings -> Math Renderer -> MathML
The NativeMML output processor uses the browser’s internal MathML support (if any) to render the mathematics. Currently, Firefox has native support for MathML, and IE has the MathPlayer plugin for rendering MathML. Opera has some built-in support for MathML that works well with simple equations, but fails with more complex formulas, so we don’t recommend using the NativeMML output processor with Opera. Safari has some support for MathML since version 5.1, but the quality is not as high as either Firefox’s implementation or IE with MathPlayer. Chrome, Konqueror, and most other browsers don’t support MathML natively, but this may change in the future, since MathML is part of the HTML5 specification.
The advantage of the NativeMML output Processor is its speed, since native MathML support is much faster than using complicated HTML and CSS to typeset mathematics, as the HTML-CSS output processor does. The disadvantage is that you are dependent on the browser’s MathML implementation for your rendering, and these vary in quality of output and completeness of implementation. MathJax relies on features that are not available in some renderers (for example, Firefox’s MathML support does not implement the features needed for labeled equations). The results using the NativeMML output processor may have spacing or other rendering problems that are outside of MathJax’s control.
• I tried FF's MathML renderer, and it introduces vertical whitespace above and below maths expressions. This makes any use of inline maths very ugly. But I have to concur -- it's fast. – Lord_Farin Nov 2 '13 at 20:45
I don't think it is MathJax that is the problem, but rather the nature of the way web pages are formatted. MathJax has to generate a bunch of div and span blocks, which takes time for a browser to render. While we're writing posts, these get (re)rendered all the time.
The solution to this problem might be implementation of one or more of the feature requests:
1. Make a better SE parser of what formulas need rerendering, so that MathJax has less rerendering to do.
I expect this to be very hard to implement, and would probably be buggy.
2. Make a delay in rendering, as it is with the syntax highlighting.
In other words, formulas get displayed as a source ($formula$), until the poster has stopped typing for a short period of time, let's say 3 or 5 seconds. After such a delay, post's formulas would get rendered as we're used to.
3. Add a "Don't process formulas while I'm typing" checkbox.
This would go either somewhere near the post-writing area, or in the profile (or, preferably, both, with the one in the profile being the default state), and could mean either "don't process at all" or "behave as I've described in the item 2 above".
4. Some kind of delay as described in 2 and, implicitly, in 3, but with the delay time growing with the post size (up to a limit of, IMO, no more than 30 seconds).
This way, shorter posts (which are not troublesome) would not be affected, while the longer ones would be so hard on our computers. I leave the definition of "length" here opened. It might be the number of characters, which implements trivially, but could also be the number of formulas (which in itself takes some parsing).
I think that this (as well as, maybe, item 3) would warrant a "Process now" button to do a single rendering when the poster requests it, so that (s)he doesn't have to wait unnecessarily.
The way things are now, I type my longer posts in gvim, and then copy/paste them here. It's not ideal, but for me it is an acceptable workaround.
• I perceive that MathJax performance has improved over the time since the Question was originally posed, but there are a lot of variables to try and account for. To me the speed is adequate for me, running Chrome on Windows 7 or a Linux platform. On Editing existing Answers, there is a Hide Preview option (fine print at left between the Answer box and the Preview window), so it might be simple to give that on initial Answers (although this place is now used for "draft saved" indiciations). – hardmath Nov 2 '13 at 17:18
• I knew I saw it somewhere, but I couldn't remember where! Thank you. As for the speed, I had a hard time and I had to switch to gvim when I was writing this answer. Usually, it's not as much of a problem, but it would still be better if the site was more friendly towards slower computers. – Vedran Šego Nov 2 '13 at 18:46
• Point 1 would probably be fairly simple to do - just cache the generated DOM object mapped from the TeX. The problem then is keeping memory usage under control. But this is probably something which doesn't help much in most contexts in which MathJax is used: I suspect that MSE is unusual in using it in a continuous edit mode. – Peter Taylor Nov 2 '13 at 23:27
• @PeterTaylor MathJax actually has quite a decent support for rerendering. What I see as a problem in point 1 is how to determine what in the post to rerender. That would have to include some LaTeX parsing by the MSE before invoking MathJax renderer, because formulas don't go just in $...$ and $$...$$. Maybe rerendering just the edited paragraph would be sufficient, given that formulas cannot include empty lines. – Vedran Šego Nov 3 '13 at 2:07
|
2020-02-29 04:13:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5710905194282532, "perplexity": 1782.124904153659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148375.36/warc/CC-MAIN-20200229022458-20200229052458-00387.warc.gz"}
|
http://tug.org/pipermail/xy-pic/2001-October/000018.html
|
# [Xy-pic] XYpic oztex driver option problem
Ingmar Visser op_visser@macmail.psy.uva.nl
Fri, 05 Oct 2001 15:21:03 +0200
I use oztex 4.0 running on an iMac, system 9.1. I use xy-pic to make
diagrams and the like and also use special features that require the
PostScript back end to work such variable line thickness for example.
I managed to make a .ps file that prints correctly by invoking
\xyoption{ps} and \xyoption{dvips}. However, the
correct pictures do not appear in the OzTeX dvi viewer, also not when
I invoke the \xyoption{oztex}.
In particular the following error message is produced:
(SystemDisk:Applications:tex:Packs:Xy-pic:xyps-r.tex)
! Undefined control sequence.
\installPSrotscale@ \xyPSsh...
l.31 \begin{document}
?
(SystemDisk:Applications:tex:Packs:Xy-pic:xyps-l.tex)
This also happens even with an empty document.
Anyone familiar with the problem and solved it?
ingmar
|
2018-02-23 14:42:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5306174755096436, "perplexity": 13417.886297529578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814787.54/warc/CC-MAIN-20180223134825-20180223154825-00430.warc.gz"}
|
https://www.physics-in-a-nutshell.com/article/17/surface-temperature-of-the-earth
|
Physics in a nutshell
$\renewcommand{\D}[2][]{\,\text{d}^{#1} {#2}}$ $\DeclareMathOperator{\Tr}{Tr}$
# Surface Temperature of the Earth
The mean surface temperature of the earth is around $T = 288 \,\text{K}$.[1] Can this number be derived by means of a simple model?
## Heat Exchange with the Environment
A glance at the recent history of the earth's climate shows that the mean surface temparature has been relatively constant. During the past 40 million years the global mean temperature has been varying by only 10 degrees Kelvin and in the past 10.000 years the variation was only of about 1 degree.[2]
Thus, even on large time scales the earth's temperature can be regarded as constant. In other words: There is no net transfer of heat from or to the earth's surface. It is in equilibrium with its environment and it therefore exhibits a constant temperature. This implies that the incident heat flux $P_\text{in}$ has to be equal to the outgoing $P_\text{out}$: [3] [4] \begin{align} P_\text{in} &= P_\text{out} \label{equilibriumCondition} \\ \left[ P \right] &= \text{W}\,\text{m}^{-2} \end{align} Since the earth is essentially surrounded by a vacuum, the only way of exchanging heat with its environment is through electromagnetic radiation because in contrast to conduction and convection this process does not require a propagation medium.[5] [6] Indeed, there is a minor additional contribution through geothermal activity but this will be neglected in the following due to its relative weakness compared to the solar radiation.
However, we have to specify the expressions for the incoming and outgoing radiant flux. Obviously, the incident radiation originates basically from the sun with its high surface temperature whereas the outgoing radiation has to be related somehow to the earth's temperature.
What is the origin of thermal radiation? On a microscopic level an object's temperature is a measure of the kinetic energy of its constituents (eventually charged particles). Thus thermal radiation originates from the thermal motion of charged particles which implicates emission of electromagnetic waves. The kinetic energy of these particles is distributed statistically around a mean value and the same applies for its radiation spectrum.
### Planck's Law
In general it is rather difficult to predict the exact emission spectrum. But Max Planck was able to derive such an expression for so called black bodies (objects that are in thermodynamic equilibrium with their environment and perfect absorbers and emitters of radiation). It can be obtained by applying the laws of statistical and quantum mechanics to the radiation (which is treated as a photon gas). The resulting Planck law relates the energy density per frequency $u(\nu)$ and the frequency $\nu$ itself by \begin{align} u(\nu) \D \nu = \left( \frac{8 \pi h}{c^3} \right) \cdot \frac{\nu^3 \D\nu}{e^{\frac{h\nu}{k_\text{B}T}}-1} \end{align}
### Stefan-Boltzmann Law
The energy emitted from the surface of a black body can be obtained by integrating Planck's law wich yields \begin{align} J(T) &= \sigma \cdot T^4 \label{stefanBoltzmann} \end{align} where $J$ [W m$^{-2}$] is the radiated energy per time and per unit area (energy flux density), ${\sigma \approx 5.67 \cdot 10^{-8} \;\text{W}\,\text{m}^{-2}\,\text{K}^{-4}}$ is the so called Stefan-Boltzmann constant and $T$ is the surface temperature. This equation is known as Stefan-Boltzmann law. The total rate of energy transfer $P$ through any surface is the product of the perpendicular component of the surface and the energy flux density $J(T)$ as given in eq. \eqref{stefanBoltzmann}.[7]
The total rate of incident energy $P_\text{in}$ [W] is the product of the energy flux density $J_\text{sun}$ [W m$^{-2}$] of the solar irradiance at the mean radius of the earth's orbit $d = 1 \,\text{au}$ (astronomical unit) and the projection surface of the earth ${ A_\text{e}^\perp = \pi r_\text{e}^2 }$.
#### The Solar Constant
The former quantity is in general referred to as the solar constant and its experimental value is \begin{align} S &:= J_\text{sun} \left( d \right) \\ &= 1370 \,\text{W}\,\text{m}^{-2}. \end{align} This number is a measure of how much energy we can receive from the sun and you can illustrate it as follows: If you had a perfect $1\text{m}^2$ solar cell that is capable of converting solar radiation into electricity without loss, it could serve as a power supply for up to 14 100W light bulbs, 3 washing machines or one hair dryer.
Thereby the total rate of incident energy can be expressed as \begin{align} P_\text{in} = S \cdot \pi r_\text{e}^2 . \end{align} [8]
#### The Role of Albedo
At this point one important modification is necessary: Originally we were interested in the heat transferred to the earth. But until now we ignored the fact that a significant amount ($\approx 30\%$) of the incident radiation is reflected (e.g. by clouds or ice) immediately without transferring heat at all. This percentage is called albedo and amounts to $a = 0.3$ for the earth. Thus, only the proportion of ${1-a}$ of the incident radiation transfers heat to the earth's surface and one obtains:[9] \begin{align} P_\text{in} = (1-a) \cdot S \cdot \pi r_\text{e}^2 . \label{pIn} \end{align}
To determine the rate of outgoing energy $P_\text{out}$ we approximate the earth as a black body with surface temperature $T_\text{e}$. Then the total rate of energy emitted by the earth's surface $P_\text{out}$ is the product of the energy flux density radiated per unit area (according to the Stefan-Boltzmann law) and the earth's surface area $A_\text{e} = 4 \pi r_\text{e}^2$: \begin{align} P_\text{out} = \sigma T_{e}^4 \cdot 4 \pi r_\text{e}^2 \label{pOut} \end{align}
Maybe you wonder why we first used the projection area of the earth and now the total spherical area. The explanation is that in the first case we considered parallel radiation and in that case the perpendicular surface is plane. On the contrary now we considered radially emitted rays and in this case the perpendicular surface has a curved, spherical shape.
### Result
Now one can insert eqs. \eqref{pIn} and \eqref{pOut} into eq. \eqref{equilibriumCondition} and one obtains: \begin{align} P_\text{in} &= P_\text{out} \\ (1-a) \cdot S \cdot \pi r_\text{e}^2 &= \sigma T_{e}^4 \cdot 4 \pi r_\text{e}^2 \\[2ex] \Leftrightarrow \quad T_\text{e} &= \sqrt[4]{\frac{(1-a) \cdot S}{4\sigma}} \end{align} [10] [11]
When inserting the values as given in the previous sections, this calculation yields a surface temperature of about $T_\text{e} = 255\,\text{K}$. Even though this value is not quite bad for a very simple model, it still deviates significantly from the actual value of $T_\text{e} = 288\,\text{K}$.
What are the main flaws of this model? It was assumed that the earth is a closed system with a sharp surface surrounded by a vacuum. However, this is not valid for the earth's surface since there is additionally the atmosphere which influences the radiation balance considerably. In the next article a more sophisticated model will comprise the atmosphere's impact.
## References
[1] David G. Andrews An Introduction to Atmoshperic Physics Cambridge University Press 2000 (p. 5) [2] J. I. Lunine Earth - Evolution of a habitable world Cambridge University Press 2013 (p. 238) [3] E. Boeker, R. van Grondelle Environmental Physics Wiley 2011 (ch. 1.2) [4] D. Randall Atmosphere, Clouds and Climate Princeton University Press 2012 (p. 28) [5] M. de Oliveira Equilibrium Thermodynamics Springer 2013 (ch. 18.1.1) [6] D. Randall Atmosphere, Clouds and Climate Princeton University Press 2012 (p. 27) [7] E. Boeker, R. van Grondelle Environmental Physics Wiley 2011 (ch 2.1.1) [8] D. Randall Atmosphere, Clouds and Climate Princeton University Press 2012 (pp. 28, 31) [9] D. Randall Atmosphere, Clouds and Climate Princeton University Press 2012 (p. 31) [10] E. Boeker, R. van Grondelle Environmental Physics Wiley 2011 (ch. 1.2) [11] David G. Andrews An Introduction to Atmoshperic Physics Cambridge University Press 2000 (ch. 1.3.1)
Your browser does not support all features of this website! more
|
2019-10-19 01:54:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974843859672546, "perplexity": 638.2260230048308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688674.52/warc/CC-MAIN-20191019013909-20191019041409-00415.warc.gz"}
|
https://www.gamedev.net/forums/topic/50505-a-class-within-another-class/
|
#### Archived
This topic is now archived and is closed to further replies.
# A Class within another Class
This topic is 6035 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
I was just wondering if you could define a class within the definition of another class. If it is possibly can someone show some example code. Thanks.
##### Share on other sites
Sure you can (not a really useful example, though):
#include using namespace std;class A{public: class B { public: void foo() const { cout << "A::B::foo()\n"; } }; void foo() const { cout << "A::foo()\n"; } void bar() const { B b; b.foo(); }};int main(){ A::B b; A a; a.foo(); a.bar(); b.foo();}
This will print
A::foo()
A::B::foo()
A::B::foo()
Of course, the usual access rules (public, private, protected) also hold for the inner class (B in this case).
HTH
##### Share on other sites
I''m not sure I understand the question. If the question is "Can you use classes within classes, as data members, etc?" the answer is certainly, and let us know and we''ll give you details regarding how. If the question is "Can I define more than one class in a header file?" then the answer is yes, simply add another class block and make sure you put the right class name to the functions. If you''re curious about deriving classes from other classes to use the functionality of one class in another class with more functionality, we can give you some resources on that as well.
-fel
##### Share on other sites
Sure
class Wheel {
public:
//whatever one needs to make a wheel.
private:
};
class Car {
public:
private:
Wheel wheels[4];
};
Something like that. When making classes, just ask yourself "is
this a has-a relationship or an is-a relationship?" For example,
a car has-a wheel (actually 4 of them, usually) and a Mustang
is-a car... So Mustang would be inherited from Car while Car
possesses 4 Wheels.
Get it?
##### Share on other sites
My class looks like this:
class CMap
{
public:
int XMax;
int YMax;
int TileWidth;
int TileHeight;
class CCamera
{
public:
int X;
int Y;
void Initialize(void)
{
X = 0;
Y = 0;
}
};
};
I want to create a variable in a .h file using extern then I declare the variable in a .cpp
I keep getting error that say the CMap is undefined.
##### Share on other sites
Out of curiosity, what is the reasoning behind putting class CCamera in the middle of class CMap?
Also, why use extern rather than forward class reference?
Perhaps if you explained what you''re doing, we could give you an easier approach to it.
-fel
• ### Forum Statistics
• Total Topics
628697
• Total Posts
2984271
• 19
• 9
• 13
• 13
• 11
|
2017-12-14 15:19:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25168558955192566, "perplexity": 6055.469978721518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544677.45/warc/CC-MAIN-20171214144324-20171214164324-00283.warc.gz"}
|
https://tex.stackexchange.com/questions/567514/remove-and-between-authors-in-bibliography/567531
|
Remove “and” between authors in bibliography
I'd like to remove the "and" between the authors in my bibliography. Currently, I have authors listed as such author1, author2, and author3 and author1 and author2.
Desired output: author1, author2, author3 and author1, author
This is my first time using Latex, so while I believe this link is helpful, I've been unable to implement it. Since this is also my first post about Latex, I am also open to feedback on how to post a more informative question!
Code below:
\documentclass[12pt]{article}
% --------------- 10 POINT FONT FOR CAPTIONS ------------------
\usepackage[font=footnotesize, skip=0pt]{caption}
% --------------- NY TIMES FONT -------------------------------
\usepackage{times}
% --------------- CITATIONS -------------------------------
\usepackage[super,sort&compress]{natbib}
\usepackage{paralist}
\let\olditem\item
\renewenvironment{thebibliography}[1]{%
\section*{\refname}
\let\par\relax\let\newblock\relax
\renewcommand{\item}[1][]{\olditem}%
\inparaenum}{\endinparaenum}
\begin{document}
\fontsize{11pt}{11pt}\selectfont
\bibliographystyle{unsrtnat}
\bibliography{refs}
\end{document}
And example citation:
@article{boyd2011cultural,
author="Boyd and Richerson and Henrich",
journal={Proc. Natl. Acad. Sci.},
year={2011},
publisher={National Acad Sciences}
}
• Off-topic: The times package and the Times Roman font named for a newspaper; however, this newspaper is the Times of London, not the NY Times. – Mico Oct 20 at 5:58
2 Answers
Since you're using the unsrtnat bibliography style, I suggest you proceed as follows.
• Find the file unsrtnat.bst in your TeX distribution, make a copy of it, and call the copy, say, unsrtnat-noand.bst. (Don't edit an original, un-renamed file from the TeX distribution directly.)
• Open the file unsrtnat-noand.bst in a text editor. The program you use to edit your tex files will do fine.
• In the unsrtnat-noand.bst, find the function format.names. In my copy of the file, this function starts on line 216.
• In this function, find the following line (l. 228, probably):
'skip\$
Change it to
{ "," * }
• Still within this function, find the following line (l. 232, probably):
{ " and " * t * }
Change it to
{ " " * t * }
I.e., delete and , but leave one blank space in the first string.
• Save the file unsrtnat-noand.bst, either in the folder where your main tex file is located or in a folder that's searched by BibTeX. If you choose the second option, be sure to update the filename database of your TeX distribution suitably.
• In your main tex file, change the instruction \bibliographystyle{unsrtnat} to \bibliographystyle{unsrtnat-noand} and perform a complete recompile cycle -- latex, bibtex, and latex twice more -- to fully propagate the change in the bib style file.
Here's an MWE (minimum working example) that demonstrates the outcome of this exercise.
\documentclass{article}
\begin{filecontents}[overwrite]{mybib.bib}
@misc{ab,author="A and B",title="Thoughts",year=3002}
@misc{abc,author="A and B and C",title="Thoughts",year=3003}
@article{boyd2011cultural,
author="Boyd and Richerson and Henrich",
journal={Proc.\ Natl.\ Acad.\ Sci.},
year={2011},
publisher={National Acad Sciences}
}
\end{filecontents}
\usepackage[super,sort&compress]{natbib}
\bibliographystyle{unsrtnat-noand}
\begin{document}
aaa\cite{ab}, bbb\cite{abc}, ccc\cite{boyd2011cultural}
\bibliography{mybib}
\end{document}
and is mean to tell bibliography engine that those are different authors. You set out come of the look with \setcitestyle{authoryear,open={((},close={))}} for example.
\usepackage[super,sort&compress]{natbib}
\bibliographystyle{unsrtnat}
\setcitestyle{none,open={((},close={))}}
More about natbib can be read from manual or reference
• Thank you @Oni! I'm new to Latex so can you please describe where I should specifically implement this? (e.g., does this go somewhere in my renewenvironment or at the bottom near \biobliography?) Also does authoryear also apply to author? – psychcoder Oct 20 at 1:49
• I added link to ctan and reference. – Oni Oct 20 at 2:01
• Hm this has not worked for me yet — to confirm, I don't need to edit authoryear since it's the list of author names that I am trying to customize? Also I'd used bibliographystyle{unsrtnat} so that my citations are listed in the cited order, not alphabetically. In your recommendation, are you advising that I replace my bibliographystyle with abbrvnat or using both bibliographystyle{unsrtnat} and \bibliographystyle{abbrvnat}? – psychcoder Oct 20 at 2:08
• You chose one of the style, use none instead of authoryear to get list in citation order. – Oni Oct 20 at 2:11
• Also I notice that \setcitestyle{none,open={((},close={))}} puts two parentheses around my citations in text. Perhaps an important clarification is that I am attempting to edit the author list in the bibliography list, not the author list of the citations in text. (I'm not sure what that proper term is but I'm using text as in the main body of text.) – psychcoder Oct 20 at 2:40
|
2020-12-05 11:51:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8962178230285645, "perplexity": 4457.998585763717}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747774.97/warc/CC-MAIN-20201205104937-20201205134937-00504.warc.gz"}
|
https://khullanote.com/c-programming-programming-technique
|
# C-Programming-Programming Technique
## A complete notes on Programming Technique for BCA ,BIM and BIT students.
Unit 2: Programming Technique
Syllabus
Introduction to programming technique
A program is a set of step-by-step instructions that directs the computer to do the tasks you want it to do and produce the results you want.
Computer programming is the process of designing and building an executable computer program to accomplish a specific computing result or to perform a specific task.
Programming involves tasks such as: analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms in a chosen programming language (commonly referred to as coding).
The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit.
The purpose of programming is to find a sequence of instructions that will automate the performance of a task (which can be as complex as an operating system) on a computer, often for solving a given problem. Proficient programming thus often requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
Tasks accompanying and related to programming include: testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. These might be considered part of the programming process, but often the term software development is used for this larger process with the term programming, implementation, or coding reserved for the actual writing of code.
Basically, developing a program involves steps similar to any problem-solving task.
They are:
1. Defining the problem
2. Planning the solution
3. Coding the program
4. Testing the program
5. Documenting the program
Programming Techniques
Every programmer follows a different programming technique. Subsequently there are four different programming techniques. A programmer learns all four of them during his tenure. These four programming techniques are:
• Unstructured Programming
• Procedural Programming
• Modular Programming
• Object oriented Programming
Unstructured Programming:
This type of programming is straight forward programming. Here, everything is in the sequential manner. It does not involve any decision making.
A general model of these linear programs is:
1. Read a data value.
2. Compute an intermediate result.
3. Use the intermediate result to computer the desired answer.
4. Print the answer.
5. Example of unstructured Programming language are BASIC (early version),
JOSS, FOCAL, MUMPS, TELCOMP, COBOL.
Structured programming
A programming language in which the entire logic of the program is written by dividing it into smaller units or modules is called "structured programming Language".
Program written in structured programming language is very easy to modify and to debug. The languages that support Structured programming approach are: C, C++, Java, C#
The structured program mainly consists of three types of elements:
1. Selection Statements
2. Sequence Statements
3. Iteration Statements
Advantages of Structured Programming Approach:
1. Easier to read and understand
2. User Friendly
3. Easier to Maintain
4. Mainly problem based instead of being machine based
5. Development is easier as it requires less effort and time
6. Easier to Debug
7. Machine-Independent.
Disadvantages of Structured Programming Approach:
1. Since it is Machine-Independent, so it takes time to convert into machine code.
2. The converted machine code is not the same as for assembly language.
3. The program depends upon changeable factors like data-types. Therefore, it needs to be updated with the need on the go.
4. Usually, the development in this approach takes longer time as it is language-dependent. Whereas in the case of assembly language, the development takes lesser time as it is fixed for the machine.
What are the main features of Structural Programming language?
1. Division of Complex problems into small procedures and functions.
2. No presence of GOTO Statement.
3. The main statement includes – If-then-else, Call and Case statements.
4. Large set of operators like arithmetic, relational, logical, bit manipulation, shift and part word operators.
What is difference between Structured and Unstructured Programming Language?
The main difference between structured and unstructured programming language is that a structured programming language allows a programmer to dividing the whole program into smaller units or modules. But in unstructured programming language, the whole program must be written in single continuous way; there is no stop or broken block.
1. Structured Programming language is a subset of Procedural Programming language. But in unstructured Programming language no subset exists.
2. Structured Programming language is a precursor to Object Oriented Programming (OOP) language. But another one is not.
3. Structured Programming language produces readable code while Unstructured Programming language produces hardly readable code “spaghetti”.
4. Structured Programming language has some limitations while unstructured Programming language offers freedom to program as they
5. Structured Programming language is easy to modify and debug, while unstructured Programming language is very difficult to modify and debug.
6. Examples of Structured Programming language are C, C+, C++, C#, Java, PERL, Ruby, PHP, ALGOL, Pascal, PL/I and Ada; and example of unstructured Programming language are BASIC (early version), JOSS, FOCAL, MUMPS, TELCOMP, COBOL
Procedural Programming:
A procedural language is a computer programming language that follows, in order, a set of commands. Examples of computer procedural languages are BASIC, C, FORTRAN, Java, and Pascal. Procedural programming is also known as imperative programming.
Procedural languages are some of the common types of programming languages used by script and software programmers. They make use of functions, conditional statements, and variables to create programs that allow a computer to calculate and display a desired output.
It is a list of instructions telling a computer, step-by-step, what to do, usually having a linear order of execution from the first statement to the second and so forth with occasional loops and branches.
Some of the benefits of the procedural programming methodology are:
• Easy to read program code.
• Easily maintainable program code, as various procedures can be debugged in isolation.
• The code is more flexible as one can change a specific procedure that gets implemented across the program.
Modular Programming:
Modular programming is the process of subdividing a computer program into separate sub-programs. A module is a separate software component. It can often be used in a variety of applications and functions with other components of the system.
Advantages of Using Modular Programming Approach
1. Ease of Use: This approach allows simplicity, as rather than focusing on the entire thousands and millions of lines code in one go, we can access it in the form of modules. This allows ease in debugging the code and prone to less error.
2. Reusability: It allows the user to reuse the functionality with a different interface without typing the whole program again.
3. Ease of Maintenance: It helps in less collision at the time of working on modules, helping a team to work with proper collaboration while working on a large application.
Object Oriented Programming:
Object Oriented programming is a programming paradigm which relies on the concept of classes (making the common structure or blueprint) and objects (formed by inheriting common structure) along with attributes (data) and method (function). OOP structure a program into simple, reusable piece of code called class, organize such structured program around its data (object) with a set of well-defined services to that data. Class defines a structure and behavior that will be shared by a set of objects. Each object of a given class shares a structure and behavior defined by a class. Therefore, class is a logical construct and object is a physical reality. Example of class and object is given on page number 1.
Some of the well-known object-oriented languages are C++, Java, Python etc.
Benefits of OOP:
• One class can be inherited to another class for the common features available without making changes on the previous class. This will help to eliminate redundant code and extend the use of existing classes.
• Emphasis on data rather than procedure
• Data is hidden and cannot be accessed by external function.
• OOP uses abstraction mechanism (hiding the detail and complexity) which helps to filter out limited data to exposure. This helps to provide only necessary data for viewing which cannot be invade by the code in the other part of program.
• Multiple objects can coexist without any interference
OOP principle:
1. Encapsulation:
Encapsulation is the mechanism that binds together code and the data it manipulates and keeps both safe from outside interference and misuse. Encapsulation can be thought of as protective box that prevents the code and data from being accessed by other code defined outside the box. Access to the data is tightly controlled through well defined interface. The basis of encapsulation is a class. Class contains data and methods that will be shared by a set of objects. Each object of a given class contains structure and behavior define by class. The class can hide the data and method it contains by using the keyword private. This is known as encapsulation. The benefit of encapsulating a data and method is that they are only accessible by a code that is member of class i.e., code that are outside the class cannot access the data. This will help to prevent unauthorized access to class data and methods. Generally, attributes (member variable or data) of class are marked as private. Such private data or member of class can be accessed only through the public methods. Therefore, public interface should be designed carefully such that it does not expose internal working of class.
1. Inheritance:
Inheritance is the process by which object of one class acquires the properties of object of another class or it is the process of acquiring common properties of one class by another class. This supports the concept of hierarchical classification. By use of inheritance an object needs to define only those qualities that make it unique. General (common) attributes are directly inherited from parent class. Let consider following example:
Here, class account contains the general information like user’s account number, demographic information and his/ her balance. The account can be different like saving account, current account etc. But the general information for all the account is same as that of class account. So, by use of inheritance the different account class like saving, credit and checking does not have to mention general information (account number, balance etc.). the general information is directly inherited or capture from class Account. Therefore, the child class: saving account, credit account and checking account have to mention its unique information such as saving account have own features like bonus 10%, balance retrieval time etc., credit account may have features like badge 2%, credit renewal time etc.
Those class that captures general (common) information are known as super class. Those class that inherits common information from superclass and only mention its unique information is known as sub class.
1. Polymorphism:
Polymorphism (many form) is the quality that allows one interface to access a general class of action. Polymorphism means ability to take more than one form i.e., it is the mechanism from which one operation may exhibit different behavior depending on different instances. The behavior depends on type of data used in the operation. Let consider an example: for addition operation (+): if we provide numeric data, it will perform addition operation and gives sum as output but if we provide string data as input it will perform concatenation operation.
Polymorphism allows an object to have different internal structure but shares the same external structure. Polymorphism is extensively used in implementing inheritance.
Polymorphism is often expressed by the phrase “one interface, multiple methods”.
1. Abstraction:
Abstraction refers to the act of representing essential features by hiding the complex details or explanation. This helps to manage complexity. For example: in motor bike we can accelerate to move it, brake to stop it and shifts the gear to control speed. But have we ever thought how brake work, what are its internal components? This is known as abstraction that is we know brake will stop the bike (implement functionality) but we ignore how brake work (hiding internal structure). Hierarchal classification can be used to manage abstraction.
Top down and Bottom-Up Approach
Top-down and bottom-up programming refer to two different strategies for developing a computer program. Top-down programming starts by implementing the most general modules and works toward implementing those that provide specific functionality. Bottom-up programming implements the modules that provide specific functionality first and then integrates them by implementing the more general modules. Most programs are developed using a combination of these strategies.
Definition of Top-down Approach
The top-down approach basically divides a complex problem or algorithm into multiple smaller parts (modules). These modules are further decomposed until the resulting module is the fundamental program essentially be understood and cannot be further decomposed. After achieving a certain level of modularity, the decomposition of modules is ceased.
The top-down approach is the stepwise process of breaking of the large program module into simpler and smaller modules to organise and code program in an efficient way. The flow of control in this approach is always in the downward direction. The top-down approach is implemented in the “C” programming language by using functions.
Thus, the top-down method begins with abstract design and then sequentially this design is refined to create more concrete levels until there is no requirement of additional refinement.
Definition of Bottom-up Approach
The bottom-up approach works in just opposite manner to the top-down approach. Initially, it includes the designing of the most fundamental parts which are then combined to make the higher-level module. This integration of submodules and modules into the higher-level module is repeatedly performed until the required complete algorithm is obtained.
Bottom-up approach functions with layers of abstraction. The primary application of the bottom-up approach is testing as each fundamental module is first tested before merging it to the bigger one. The testing is accomplished using the certain low-level functions.
Key Differences Between Top-down and Bottom-up Approach
• Top-down approach decomposes the large task into smaller subtasks whereas bottom-up approach first chooses to solve the different fundamental parts of the task directly then combine those parts into a whole program.
• Each submodule is separately processed in a top-down approach. As against, bottom-up approach implements the concept of the information hiding by examining the data to be encapsulated.
• The different modules in top-down approach don’t require much communication. On the contrary, the bottom-up approach needs interaction between the separate fundamental modules to combine them later.
• Top-down approach can produce redundancy while bottom-up approach does not include redundant information.
• The procedural programming languages such as Fortran, COBOL and C follows a top-down approach. In contrast, object-oriented programming languages like C++, Java, C#, Perl, Python abides the bottom-up approach.
• Bottom-up approach is priorly used in testing. Conversely, the top-down approach is utilized in module documentation, test case creation, debugging, etcetera.
Cohesion and Coupling
The software matrix of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of Structured Design, based on characteristics of good programming practices that reduced maintenance and modification costs.
Structured Design, cohesion and coupling were published in the article Stevens, Myers & Constantine (1974) and the book Yourdon & Constantine (1979), the latter two subsequently became standard terms in software engineering.
Cohesion
In computer programming, cohesion refers to the degree to which the elements inside a module belong together. In one sense, it is a measure of the strength of relationship between the methods and data of a class and some unifying purpose or concept served by that class. In another sense, it is a measure of the strength of relationship between the class's methods and data themselves.
Cohesion is a measure of the degree to which the elements of the module are functionally related. It is the degree to which all elements directed towards performing a single task are contained in the component. Basically, cohesion is the internal glue that keeps the module together. A good software design will have high cohesion.
Cohesion is an ordinal type of measurement and is usually described as “high cohesion” or “low cohesion”. Cohesion represents the clarity of the responsibilities of a module. So, cohesion focuses on how single module/class is designed. Higher the cohesiveness of the module/class, better is the OO design. If our module performs one task and nothing else or has a clear purpose, our module has high cohesion. On the other hand, if our module tries to encapsulate more than one purpose or has an unclear purpose, our module has low cohesion.
Modules with high cohesion tend to be preferable, simple because high cohesion is associated
with several desirable traits of software including robustness, reliability,
and understandability. Low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand. Cohesion is often contrasted with coupling. High cohesion often correlates with loose coupling, and vice versa.
Types of Cohesion:
1.Functional Cohesion:
Functional cohesion is when parts of a module are grouped because they all contribute to a single well-defined task of the module. Every essential element for a single computation is contained in the component. A functional cohesion performs the task and functions. It is an ideal situation.
1. Sequential Cohesion:
Sequential cohesion is when parts of a module are grouped because the output from one part is the input to another part like an assembly line. i.e., data flow between the parts. It occurs naturally in functional programming languages.
For example: a function which reads data from a file and processes the data.
3.Communicational Cohesion:
Two elements operate on the same input data or contribute towards the same output data. Communicational cohesion is when parts of a module are grouped because they operate on the same data. There are cases where communicational cohesion is the highest level of cohesion that can be attained under the circumstances.
For example: a module which operates on the same record of information, Update record in the database and send it to the printer.
4.Procedural Cohesion:
Elements of procedural cohesion ensure the order of execution. That is when parts of a module are grouped because they always follow a certain sequence of execution. For example: a function which checks file permissions and then opens the file.
5.Temporal Cohesion:
Temporal cohesion is when parts of a module are grouped by when they are processed - the parts at a particular time in program execution. A module connected with temporal cohesion all the tasks must be executed in the same time-span. This cohesion contains the code for initializing all the parts of the system. Lots of different activities occur, all at unit time.
For example: A function which is called after catching an exception which closes open files, creates an error log, and notifies the user.
6.Logical Cohesion:
Logical cohesion is when parts of a module are grouped because they are logically categorized to do the same thing even though they are different by nature. Example, grouping all mouse and keyboard input handling routines etc. These elements are logically related and not functionally. For example: A component reads inputs from tape, disk, and network. All the code for these functions is in the same component. Operations are related, but the functions are significantly different. Similarly,
7.Coincidental Cohesion:
Coincidental cohesion is when parts of a module are grouped arbitrarily; the only relationship between the parts is that they have been grouped together. They are like, Utilities class. The elements are not related(unrelated). The elements have no conceptual relationship other than location in source code. It is accidental and the worst form of cohesion. For example, print next line and reverse the characters of a string in a single component.
Coupling
In software engineering, coupling is the degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules.
Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often thought to be a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability
Coupling is the measure of how dependent your code modules are on each other. Strong coupling is bad and low coupling is good. High coupling means that your modules cannot be separated. It means that the internals of one module know about and are mixed up with the internals of the other module.
When your system is really badly coupled, it is said to be “spaghetti” code, as everything is all mixed up together like a bowl of spaghetti noodles.
High coupling means that a change in one place can have unknown effects in unknown other places. It means that your code is harder to understand because complex, intertwined relationships are difficult to understand.
Heavily coupled code is difficult to reuse because it is difficult to remove from the system for use elsewhere. One should strive to reduce coupling in one’s code to as high a degree as possible.
Types of coupling in procedural programming:
1.Content coupling (high)
Content coupling is said to occur when one module uses the code of another module, for instance a branch. This violates information hiding - a basic design concept.
2.Common coupling
Common coupling is said to occur when several modules have access to the same global data. But it can lead to uncontrolled error propagation and unforeseen side-effects when changes are made.
3.External coupling
External coupling occurs when two modules share an externally imposed data format, communication protocol, or device interface. This is basically related to the communication to external tools and devices.
4.Control coupling
Control coupling is one module controlling the flow of another, by passing it information on what to do. For example: passing a what-to-do flag.
5.Stamp coupling (data-structured coupling)
Stamp coupling occurs when modules share a composite data structure and use only parts of it, possibly different parts (E.g: passing a whole record to a function that needs only one field of it). In this situation, a modification in a field that a module does not need may lead to changing the way the module reads the record.
6.Data coupling
Data coupling occurs when modules share data through, for example, parameters. Each datum is an elementary piece, and these are the only data shared (Ex: passing an integer to a function that computes a square root).
Types of coupling in OOP:
1.Subclass coupling
Describes the relationship between a child and its parent. The child is connected to its parent, but the parent is not connected to the child.
2.Temporal coupling
When two actions are bundled together into one module just because they happen to occur at the same time.
3.Dynamic coupling
The goal of this type of coupling is to provide a run-time evaluation of a software system. It has been argued that static coupling metrics lose precision when dealing with an intensive use of dynamic binding or inheritance. In the attempt to solve this issue, dynamic coupling measures have been taken into account.
4.Semantic coupling
This kind of coupling considers the conceptual similarities between software entities using, for example, comments and identifiers and relying on techniques.
5.Logical coupling
Logical coupling exploits the release history of a software system to find change patterns among modules or classes.
For example: Entities that are likely to be changed or sequences of changes (a change in a class A is always followed by a change in a class B).
Difference between cohesion and coupling
Cohesion Coupling Cohesion is the indication of the relationship within module Coupling is the indication of the relationships between modules Cohesion shows the module’s relative functional strength Coupling shows the relative independence among the modules Cohesion is a degree (quality) to which a component / module focuses on the single thing Coupling is a degree to which a component / module is connected to the other modules While designing we should strive for high cohesion. Ex: cohesive component/module focus on a single task with little interaction with other modules of the system While designing we should strive for low coupling. Ex: dependency between modules should be less Cohesion is the kind of natural extension of data hiding, for example, class having all members visible with a package having default visibility Making private fields, private methods and non public classes provides loose coupling Cohesion is Intra – Module Concept Coupling is Inter -Module Concept
Structured programming
A programming language in which the entire logic of the program is written by dividing it into smaller units or modules is called "structured programming Language".
Program written in structured programming language is very easy to modify and to debug. The languages that support Structured programming approach are: C, C++, Java, C#
The structured program mainly consists of three types of elements:
• Selection Statements
• Sequence Statements
• Iteration Statements
Advantages of Structured Programming Approach:
• Easier to read and understand
• User Friendly
• Easier to Maintain
• Mainly problem based instead of being machine based
• Development is easier as it requires less effort and time
• Easier to Debug
• Machine-Independent.
Disadvantages of Structured Programming Approach:
• Since it is Machine-Independent, so it takes time to convert into machine code.
• The converted machine code is not the same as for assembly language.
• The program depends upon changeable factors like data-types. Therefore, it needs to be updated with the need on the go.
• Usually, the development in this approach takes longer time as it is language-dependent. Whereas in the case of assembly language, the development takes lesser time as it is fixed for the machine.
The features of Structural Programming language
• Division of Complex problems into small procedures and functions.
• No presence of GOTO Statement.
• The main statement includes – If-then-else, Call and Case statements.
• Large set of operators like arithmetic, relational, logical, bit manipulation, shift and part word operators.
Deterministic and Non-deterministic technique
Algorithm:
A set of rules that define how a particular problem can be solved in a finite number of steps is known as algorithm. An algorithm is composed of finite number of steps, each of which may require one or more operations. It is step by step logical representation of program in high-level language as Standard English. Algorithm helps to define what actions should be performed in each phase of the program development cycle.
Properties of an algorithm:
• Inputs/outputs: - There must be some inputs from standard set of input(s) an algorithms execution must produce output(s).
• Definiteness: - Each step must be clear and unambiguous.
• Finiteness: - Algorithm must terminate after finite steps.
• Correctness: - correct set of output values must be produced from each set of inputs.
• Effectiveness: - each step must be carried out in finite time.
Deterministic algorithm
A deterministic algorithm is an algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Deterministic algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they can be run on real machines efficiently.
Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.
Deterministic algorithms can be defined in terms of a state machine: a state describes what a machine is doing at a particular instant in time. State machines pass in a discrete manner from one state to another. Just after we enter the input, the machine is in its initial state or start state. If the machine is deterministic, this means that from this point onwards, its current state determines what its next state will be; its course through the set of states is predetermined. Note that a machine can be deterministic and still never stop or finish, and therefore fail to deliver a result.
Examples of particular abstract machines which are deterministic include the deterministic Turing machine and deterministic finite automaton.
Nondeterministic algorithm
In computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviour on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. A concurrent algorithm can perform differently on different runs due to a race condition. A probabilistic algorithm's behaviours depends on a random number generator.
An algorithm that solves a problem in nondeterministic polynomial time can run in polynomial time or exponential time depending on the choices it makes during execution. The nondeterministic algorithms are often used to find an approximation to a solution, when the exact solution would be too costly to obtain using a deterministic one.
A nondeterministic algorithm is different from its more familiar deterministic counterpart in its ability to arrive at outcomes using various routes. If a deterministic algorithm represents a single path from an input to an outcome, a nondeterministic algorithm represents a single path stemming into many paths, some of which may arrive at the same output and some of which may arrive at unique outputs. This property is captured mathematically in "nondeterministic" models of computation such as the nondeterministic finite automaton. In some scenarios, all possible paths are allowed to run simultaneously.
In algorithm design, nondeterministic algorithms are often used when the problem solved by the algorithm inherently allows multiple outcomes (or when there is a single outcome with multiple paths by which the outcome may be discovered, each equally
preferable). Crucially, every outcome the nondeterministic algorithm produces is valid, regardless of which choices the algorithm makes while running.
In computational complexity theory, nondeterministic algorithms are ones that, at every possible step, can allow for multiple continuations (imagine a person walking down a path in a forest and, every time they step further, they must pick which fork in the road they wish to take). These algorithms do not arrive at a solution for every possible computational path; however, they are guaranteed to arrive at a correct solution for some path (i.e., the person walking through the forest may only find their cabin if they pick some combination of "correct" paths). The choices can be interpreted as guesses in a search process.
Iterative and recursive logic
Iteration
Iteration is defined as the act or process of repeating or it is the repetition of a process in a computer program, usually done with the help of loops.
For example, iteration can include repetition of a sequence of operations in order to get ever closer to a desired result. Iteration can also refer to a process wherein a computer program is instructed to perform a process over and over again repeatedly for a specific number of times or until a specific condition has been met.
Iteration is when the same procedure is repeated multiple times. Some examples were long division, the Fibonacci numbers, prime numbers, and the calculator game. Some of these used recursions as well, but not all of them.
Two Types of Iterative Loops
1. for loop
2. while loop
For loop
The for loop is a control flow statement that iterates a part of the program multiple times.It has a general form…
for (initialization expr; test expr; update expr)
{
• body of the loop
• statements we want to execute
}
Here the for loop work as follows
1. The initialization expression is executed only once.
1. Then, the test expression is evaluated.
2. If the test expression is evaluated to true,
• Codes inside the body of for loop is executed.
• Then the update expression is executed.
• Again, the test expression is evaluated.
• If the test expression is true, codes inside the body of for loop is executed and update expression is executed.
• This process goes on until the test expression is evaluated to false.
1. If the test expression is evaluated to false, for loop terminates.
Flowchart diagram of For loop
Fig: Flowchart diagram of for loop
Program No -: simple for loop
#include < stdio.h>
int main()
{
int i=0;
for(i = 1; i<= 10; i++)
{
printf("%d \n",i);
}
return 0;
}
Output:
While loop
The while loop is a control flow statement that executes a part of the programs repeatedly on the basis of given Boolean condition. The general form of while loop is…
while (testExpression)
{
// body of loop
}
Here the testExpression can be any Boolean expression. The body of the loop will be executed as long as the testExpression is true. When testExpression becomes false, control passes to the next line of code immediately following the loop.
Flowchart diagram of While loop
Fig: Flowchart diagram of while loop
Program No : simple for loop
#include < stdio.h>
int main()
{
int i=10;
while(i>=1)
{
printf("%d \n",i);
i--;
}
}
Output:
Recursion
The process in which a function calls itself directly or indirectly is called recursion and the corresponding function is called as recursive function.It is the process of repeating items in a self-similar way.
Recursion involves several numbers of recursive calls. However, it is important to impose a termination condition of recursion. Recursion code is shorter than iterative code however it is difficult to understand.
Recursion cannot be applied to all the problem, but it is more useful for the tasks that can be defined in terms of similar subtasks. For Example, recursion may be applied to sorting, searching, and traversal problems.
Program
#include < stdio.h>
int fact (int);
int main()
{
int n,f;
printf("Enter the number for factorial calculation");
scanf("%d",&n);
f = fact(n);
printf("factorial = %d",f);
}
int fact(int n)
{
if (n==0)
{
return 0;
}
else if ( n == 1)
{
return 1;
}
else
{
return n*fact(n-1);
}
}
Output:
We can understand the above program of the recursive method call by the figure given below:
Difference between recursion and iteration
Parameter Recursion Iteration Definition Recursion involves a recursive function which calls itself repeatedly until a base condition is not reached. Iteration involves the usage of loops through which a set of statements are executed repeatedly until the condition is not false. Termination condition Here termination condition is a base case defined within the recursive function. Termination condition is the condition specified in the definition of the loop. Infinite Case If base case is never reached it leads to infinite recursion leading to memory crash. If condition is never false, it leads to infinite iteration with computers CPU cycle being used repeatedly. Memory Usage Recursion uses stack area to store the current state of the function,due to which memory usage is high. Iteration uses the permanent storage area only for the variables involved in its code block, hence memory usage is less Code Size Code size is comparitively smaller. Code size is comparitively larger. Performance Since stack are is used to store and restore the state of recursive function after every function call , performance is comparitively slow. Since iteration does not have to keep re-initializing its component variables and neither has to store function states, the performance is fast. Memory Runout There is a possibility of running out of memory, since for each function call stack area gets used. There is no possibility of running out of memory as stack area is not used. Overhead Recursive functions involve extensive overhead, as for each function call the current state, parameters etc have to be pushed and popped out from stack. There is no overhead in Iteration. Applications Factorial, Fibonacci Series etc. Finding average of a data series, creating multiplication table etc. Example #include < stdio.h>int fact(int n){if(n == 0)return 1;elsereturn n * factorial(n-1);}int main() {printf(“Factorial for 5 is %d”, fact(5));return 0;}Output: Factorial for 5 is 120 #include < stdio.h>int main() {int i, n = 5, fact = 1;for(i = 1; i <= n; ++i)fact = fact * i;printf(“Factorial for 5 is %d”, fact);return 0;} Output: Factorial for 5 is 120
Modular Designing and Programming
Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect of the desired functionality.
It is the process of subdividing a computer program into separate sub-programs. A module is a separate software component. It can often be used in a variety of applications and functions with other components of the system.
Some programs might have thousands or millions of lines and to manage such programs it becomes quite difficult as there might be too many of syntax errors or logical errors present in the program, so to manage such type of programs concept of modular programming approached is preferred.
Modular programming emphasis on breaking of large programs into small problems to increase the maintainability, readability of the code and to make the program handy to make any changes in future or to correct the errors.
Modular programming is closely related to structured programming and object-oriented programming, all having the same goal of facilitating construction of large software programs and systems by decomposition into smaller pieces
Advantages of Using Modular Programming Approach
• Development Can be Divided:
Modular Programming allows development to be divided by splitting down a program into smaller programs in order to execute a variety of tasks. This enables developers to work simultaneously and minimizes the time taken for development.
Modular Programming helps develop programs that are much easier to read since they can be enabled as user-defined functions. A program that carries multiple functions is easier to follow, whereas a program that does not have a function is much harder to follow.
• Programming Errors are Easy to Detect:
Modular Programming minimizes the risks of ending up with programming errors and also makes it easier to spot errors, if any. This is because the errors can be narrowed down to a specific function or a sub-program.
• Allows Re-Use of Codes:
A program module is capable of being re-used in a program which minimizes the development of redundant codes. It is also more convenient to reuse a module than to write a program from start. It also requires very little code to be written.
• Improves Manageability:
Having a program broken into smaller sub-programs allows for easier management. The separate modules are easier to test, implement or design. These individual modules can then be used to develop the whole program.
• Collaboration:
With Modular Programming, programmers can collaborate and work on the same application. Designing a program also becomes simpler because small teams can focus on individual parts of the same code.
========================== End of Unit-2 ==========================
|
2022-05-19 14:25:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3018985092639923, "perplexity": 1534.6632324547684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00525.warc.gz"}
|
http://mathematica.stackexchange.com/questions/46703/dynamic-syntax-evaluate-appendto-as-the-second-argument
|
# Dynamic Syntax - Evaluate AppendTo as the second argument
I am attempting to Append a new value into my list paramMonitor whenever the value for B changes. I had the code working perfectly, but my computer crashed without saving the document. I am now having a lot of trouble (dynamically) appending these values to my list.
My attempts:
• This updates B but does nothing to append the value {B, e, some number} to paramMonitor
Dynamic[B,
AppendTo[paramMonitor,
{B,e,Abs[model[B, e][#] & /@ test[[All, 1]] - test[[All, 2]]] // Total}]
and
• Removing the first argument 'B', this code fails to compile and simply aborts upon execution:
Dynamic[AppendTo[paramMonitor,
{B,e, Abs[model[B, e][#] & /@ test[[All, 1]] - etst[[All, 2]]] // Total}]
I believe this is a quick fix (since I had it working earlier), but I cannot seem to find the issue.
-
Before I will update the answer, please tell me how B can be changed? I mean, it is tied to some kind of controller or it is just a global variable that can be change anywhere? – Kuba Apr 24 at 11:15
Sorry about the late response. B is changed through NonLinearModelFit (NLMF). NLMF is finding the best fit values for B and e. I have NLMF's EvaluationMonitor printing "B and e values" to the screen. So, at every new evaluation step, Dynamic[B] will update. Thanks for all the help!! – ABBOUDR Apr 29 at 18:32
So my answer does not fit your needs quite well, right? – Kuba Apr 29 at 18:36
Unfortunately not. Since NonlinearModel takes a very long time to evaluate, I wanted a way to monitor its progress in Real-Time. My other option is to add some commands within EvaluationMonitor, like this – ABBOUDR May 1 at 0:51
add comment
## 1 Answer
The most important thing, and the thing that can be easily missed is that the second argument of Dynamic must be a function or a list of functions:
list = {};
Slider[Dynamic[b, (b = #; list = Join[list, {b}]) &], {1, 10, 1}]
Dynamic@list
-
Another thing that can be easily missed is that the second argument of Dynamic only seems to work reliably when the Dynamic expression is evaluated as an argument of a control or other GUI element (like the slider you used in your example). I have found that the front-end does seem to like a naked Dynamic with a 2nd argument when it encounters one at top-level such as in the example the OP gave. – m_goldberg Apr 24 at 8:01
@m_goldberg good point, I have not though he tried to do that this way. It seems I have to write a little bit more. :) – Kuba Apr 24 at 9:15
@m_goldberg: you say "only seems to work reliably", but does it work at all and is it supposed to? I would have said it most probably isn't even meant to work, but honestly don't know what actually is the "documented" behavior that I could expect. I also can't see that a naked Dynamic with a 2nd argument does anything relevant concerning its 2nd argument, other than quietly ignoring it. Am I missing something here? – Albert Retey Apr 24 at 14:09
@AlbertRetey. I used tentative language because I don't know for sure. From my reading of the docs and my own experiments, I think it likely that the 2nd argument of Dynamic is not intended to be used at top-level the way the OP of this question did. My experience with naked Dynamic is exactly as you say. BTW, there is a typo in my comment. I meant to say: "does not seem to like a naked Dynamic" – m_goldberg Apr 24 at 14:44
@AlbertRetey I agree with you, it's meant to cooperate with controllers: "during interactive changing or editing of val." I think that m_goldberg pointed that OP tried to use it in "naked" Dynamic. I'm waiting for confirmation from OP. :) – Kuba Apr 24 at 14:44
show 2 more comments
|
2014-07-23 08:04:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49781134724617004, "perplexity": 1622.0715128103725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877644.62/warc/CC-MAIN-20140722025757-00098-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://entangledquery.com/t/how-to-choose-the-most-suitable-aer-backend-simulators-in-qiskit/12/#post-80
|
• Members 19 posts
I am going through the qiskit documentation of simulators. I have learned that there are many types of simulators, and the main simulator backend of the Aer provider is the AerSimulator backend. However, in many other tutorials, statevector_simulator and qasm_simulator are used quite often. Can anyone explain what is the major differences of all these backends, and how to choose the most suitable one depending on different needs?
backends.PNG
PNG, 43.5 KB, uploaded by JXW on Sept. 5, 2021.
• Members 12 posts
They are mostly differed by simulation methods and maximum qubits. QasmSimulator is the closest implementation to a real quantum computer where all the readouts are in binary strings. I usually test all my code in this simulator and simply change the backend to a real quantum device. In StatevectorSimulator, you can readout the quantum state in vector form. UnitarySimulator allows you to print out the matrix form of a unitary quantum operation.
More details coming later...
IBMQ-Simulators.png
PNG, 55.0 KB, uploaded by JackSong on Oct. 18, 2021.
|
2022-07-07 13:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18601004779338837, "perplexity": 2317.4347525481726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00530.warc.gz"}
|
https://zbmath.org/?q=an:1226.11089
|
The author develops a fast algorithm for computing the zeros of the quadratic character $$L$$-functions for all fundamental discriminants $$-d$$ with $$10^{12}<d<10^{12}+10^7$$. Then, he discusses the data obtained from the computation of the zeros of approximately $$3\times 10^6$$ quadratic character $$L$$-functions for negative fundamental discriminants $$-d$$ with $$d$$ in above mentioned range. The author ends the paper by some implementation notes including error estimates and more details on his computations, as more as, he gives an Appendix on Miller’s ‘refined’ 1-level density.
11M26 Nonreal zeros of $$\zeta (s)$$ and $$L(s, \chi)$$; Riemann and other hypotheses 11Y16 Number-theoretic algorithms; complexity 11Y35 Analytic computations
|
2022-10-02 22:44:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6589069366455078, "perplexity": 716.2774341887923}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00771.warc.gz"}
|
https://math.stackexchange.com/questions/417639/is-there-an-easy-way-to-get-to-a-paper-given-a-citation
|
# Is there an easy way to get to a paper, given a citation?
This question isn't about math per se, but I hope it will be of general interest to people studying math so I feel reasonably comfortable asking here. Let me start with an example: Today I had the following citation from a paper:
W. Hurewicz, On Duality Theorems, abstract 47-7-329, Bull. AMS 47 (1941), 562-563.
I tried Googling this directly, but it only turned up papers citing that paper. I tried typing the title into JSTOR, but got a bunch of nonsense. Finally, I had to Google the homepage of Bulletins of the AMS, then click on "past issues," scroll down and find 1941, click on that, go back and figure out which issue it was, click on that, go to another page, scroll down and find the relevant article, and click on that.
I'm fully aware that this is of course less work than walking to the library uphill both ways in the snow, but when I know exactly what I want there should be some way of getting to it without clicking more than once, at least in theory.
Does anyone have a good workflow for grabbing a paper quickly given the relevant bibliographical information? I'm willing to install software if that's what it takes. What I'm hoping for is a box where I copy/paste the above citation and the paper pops right up.
• You can find a lot of math papers by searching Google Scholar. While this particular one is not available in full, a surprising number are. Many professors host public copies of their papers online, or public copies of papers written by others that they want their students to read, and Google Scholar finds all of them. It's fantastic. – Potato Jun 11 '13 at 17:46
• Also, if it's a recent article, there's a good chance the author put it up himself on arxiv.org. – Samuel Jun 11 '13 at 17:53
• The trouble with this particular case is that it's not a real paper, but only an abstract, that's why searching for it is rather difficult. Here's a direct link. It usually helps if you know the full journal name (here it is Bulletin of the American Mathematical Society). Googling for this name will lead you to this page and a few more clicks and you're there. – Martin Jun 11 '13 at 18:49
If you are affiliated with a university with the right subscriptions, you just go to
http://www.ams.org/mathscinet/
and search for your article. Last name of the author and a word or two from the title is usually enough. After you click on the article, there will be a small button far to the right associated to your library which will link you to online versions which you can download immediately, if they exist.
Your university's library's homepage should give you the details on how to log in on MathSciNet. Added: For me personally, my library requires me to go to http://www.ams.org.focus.lib.**MY-UNIVERSITY**.org/mathscinet/, log in with my university account, and it then redirects me to MathSciNet. I've simply bookmarked that address, and make sure I stay logged in, so using this bookmark redirects me directly to MathSciNet. Then I just need to search, click twice, and then I have the article. Your experience may differ.
(Unfortunately, MathSciNet did not have the specific article you referred to. It might be because it is too old. In my experience, all articles I've ever wanted to get which were published after 1950 have been on there.)
• But what if you are not affiliated with a university with the right subscriptions? – Hans-Peter Stricker Jun 11 '13 at 17:39
• Hmm... My library makes me click through several pages on other sites when I do that, but I guess it's better than nothing. – Daniel McLaury Jun 11 '13 at 17:40
• @HansStricker: then presumably the article itself isn't available anyway. – Daniel McLaury Jun 11 '13 at 17:40
• @Hans: There always is the Zentralblatt which is free if in rather restricted form. Some tips on finding papers can also be found in the answer to this thread. – Martin Jun 11 '13 at 18:53
|
2019-06-18 23:03:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30503934621810913, "perplexity": 367.013622448155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998844.16/warc/CC-MAIN-20190618223541-20190619005541-00026.warc.gz"}
|
http://imagextension.com/newton-raphson/general-newton-error-equation.php
|
Home > Newton Raphson > General Newton Error Equation
General Newton Error Equation
Contents
and Robinson, G. "The Newton-Raphson Method." §44 in The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed. Raphson again viewed Newton's method purely as an algebraic method and restricted its use to polynomials, but he describes the method in terms of the successive approximations xn instead of the share|cite|improve this answer edited Feb 23 '12 at 4:17 answered Feb 23 '12 at 4:11 Robert Israel 229k14155351 1 For the record, this is a consequence of Taylor' theorem. –Alex Given the equation g ( x ) = h ( x ) , {\displaystyle g(x)=h(x),\,\!} with g(x) and/or h(x) a transcendental function, one writes f ( x ) = g (
If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. pp.xiv+490. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the Thus n + 1 = 2 {\displaystyle n+1=2} .
Newton Raphson Formula
For example, if one wishes to find the square root of 612, this is equivalent to finding the solution to x 2 = 612 {\displaystyle \,x^{2}=612} The function to use in Generalizations Complex functions Basins of attraction for x5 - 1 = 0; darker means more iterations to converge. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Newton Raphson Method Ppt Varona, J.L. "Graphic and Numerical Comparison Between Iterative Methods." Math.
Solution of cos(x) = x3 Consider the problem of finding the positive number x with cos(x) = x3. Newton Raphson Method Matlab Rather than actually computing the inverse of this matrix, one can save time by solving the system of linear equations J F ( x n ) ( x n + 1 Given x n {\displaystyle x_{n}\!} , x n + 1 = x n − f ( x n ) f ′ ( x n ) = 1 3 x n 4 Consider the function f ( x ) = { 0 if x = 0 , x + x 2 sin ( 2 x ) if x ≠ 0. {\displaystyle f(x)={\begin{cases}0&{\text{if
However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f or its derivatives are computationally expensive to evaluate. Newton Raphson Method System Of Nonlinear Equations Springer, Berlin, 2004. This method is also very efficient to compute the multiplicative inverse of a power series. Deuflhard, Newton Methods for Nonlinear Problems.
Newton Raphson Method Matlab
Atkinson, An Introduction to Numerical Analysis, (1989) John Wiley & Sons, Inc, ISBN 0-471-62489-6 Tjalling J. http://mathworld.wolfram.com/NewtonsMethod.html Garisto an A-plus for his paper. Newton Raphson Formula of Math. (2) 125 (1987), no. 3, 467–493. Newton Raphson Method Pdf See especially Sections 9.4, 9.6, and 9.7.
Thus n + 1 = 3 {\displaystyle n+1=3} . Wolfram Education Portal» Collection of teaching and learning tools built by Wolfram education experts: dynamic textbook, lesson plans, widgets, interactive Demonstrations, and more. However, his method differs substantially from the modern method given above: Newton applies the method only to polynomials. For 1/2 < a < 1, the root will still be overshot but the sequence will converge, and for a ≥ 1 the root will not be overshot at all. Newton Raphson Method Algorithm
The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line ISBN 0-89871-546-6. He does not compute the successive approximations x n {\displaystyle x_{n}} , but computes a sequence of polynomials, and only at the end arrives at an approximation for the root x. Please help improve this article by adding citations to reliable sources.
If f is continuously differentiable and its derivative is nonzero atα, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence {xn} Newton Raphson Method In C Numerical Recipes: The Art of Scientific Computing (3rd ed.). Using divided differences and Newton polynomial, P n ( x ) {\displaystyle P_{n}(x)} can be obtained as P n ( x ) = [ f 0 ] + [ f 0
In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to
Acton, F.S. ISBN0-786-64940-7. Since d x = d ( x 0 + s h ) = h d s {\displaystyle dx=d(x_{0}+sh)=hds} , the error term of numerical integration is E integrate = ∫ x Newton Raphson Method For Load Flow Analysis The iterates x n {\displaystyle x_{n}} will be strictly decreasing to the root while the iterates z n {\displaystyle z_{n}} will be strictly increasing to the root.
Unsourced material may be challenged and removed. (November 2013) (Learn how and when to remove this template message) (Learn how and when to remove this template message) In numerical analysis, Newton's How to use the binomial theorem to calculate binomials with a negative exponent What Accelerates a Vehicle With a CVT? Then we can derive the formula for a better approximation, xn+1 by referring to the diagram on the right. That was mid-February.
The Science of Fractal Images. An initial point that provides safe convergence of Newton's method is called an approximate zero. The Taylor series of about the point is given by (1) Keeping terms only to first order, (2) Equation (2) is the equation of the tangent line to the curve at Contact the MathWorld Team © 1999-2016 Wolfram Research, Inc. | Terms of Use THINGS TO TRY: Mandelbrot set Cantor set fractal Newton's Method Chris Maes Square Roots with Newton's Method Jon
The system returned: (22) Invalid argument The remote host or network may be down. x {\displaystyle x} e x {\displaystyle e^{x}} 0.1 1.10517 0.2 1.22140 0.3 1.34986 0.4 1.49182 0.5 1.64872 Solution: According the general error formula of polynomial interpolation | E interpolate | ⩽ Bad starting points In some cases the conditions on the function that are necessary for convergence are satisfied, but the point chosen as the initial point is not in the interval In the limiting case of α = 1 2 {\displaystyle \alpha ={\tfrac {1}{2}}} (square root), the iterations will alternate indefinitely between points x0 and −x0, so they do not converge in
In Nonlinear Regression the SSE equation is only "close to" parabolic in the region of the final parameter estimates. and Saupe, D. Whittaker, E.T. However, even linear convergence is not guaranteed in pathological situations.
C. Difficulty in calculating derivative of a function Newton's method requires that the derivative be calculated directly. However, with a good initial choice of the root's position, the algorithm can be applied iteratively to obtain (5) for , 2, 3, .... Alternatively if ƒ'(α)=0 and ƒ'(x)≠0 for x≠α, xin a neighborhood U of α, α being a zero of multiplicity r, and if ƒ∈Cr(U) then there exists a neighborhood of α such
P. Generated Mon, 17 Oct 2016 05:17:20 GMT by s_wx1127 (squid/3.5.20) ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: http://0.0.0.8/ Connection Let f ( x ) = x + x 4 3 . {\displaystyle f(x)=x+x^{\frac {4}{3}}.\!} Then f ′ ( x ) = 1 + 4 3 x 1 3 . {\displaystyle and Sebah, P. "Newton's Iteration." http://numbers.computation.free.fr/Constants/Algorithms/newton.html.
Similar problems occur even when the root is only "nearly" double. In this case, three equally spaced points are used for integration. Affine Invariance and Adaptive Algorithms.
|
2018-06-21 19:25:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359989523887634, "perplexity": 781.6976525112243}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864257.17/warc/CC-MAIN-20180621192119-20180621212119-00362.warc.gz"}
|
http://www.ck12.org/algebra/Division-of-Polynomials/lesson/Division-of-a-Polynomial-by-a-Monomial-ALG-I-HNRS/
|
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# Division of Polynomials
## Using long division to divide polynomials
Estimated23 minsto complete
%
Progress
Practice Division of Polynomials
MEMORY METER
This indicates how strong in your memory this concept is
Progress
Estimated23 minsto complete
%
Division of a Polynomial by a Monomial
Can you complete the following division problem with a polynomial and a monomial? How does this relate to factoring?
### Division of a Polynomial by a Monomial
Recall that a monomial is an algebraic expression that has only one term. So, for example, , 8, –2, or are all monomials because they have only one term. The term can be a number, a variable, or a combination of a number and a variable. A polynomial is an algebraic expression that has more than one term.
When dividing polynomials by monomials, it is often easiest to separately divide each term in the polynomial by the monomial. When simplifying each mini-division problem, don't forget to use exponent rules for the variables. For example,
Remember that a fraction is just a division problem!
#### Let's divide the following polynomials:
This is the same as . Divide each term of the polynomial numerator by the monomial denominator and simplify.
Therefore, .
Divide each term of the polynomial numerator by the monomial denominator and simplify. Remember to use exponent rules when dividing the variables.
Therefore, .
This is the same as . Divide each term of the polynomial numerator by the monomial denominator and simplify. Remember to use exponent rules when dividing the variables.
Therefore, .
### Examples
#### Example 1
Earlier, you were asked complete the following division problem:
This process is the same as factoring out a from the expression .
Therefore, .
#### Example 2
Complete the following division problem.
### Review
Complete the following division problems.
To see the Review answers, open this PDF file and look for section 7.12.
### Notes/Highlights Having trouble? Report an issue.
Color Highlighted Text Notes
### Vocabulary Language: English
Denominator
The denominator of a fraction (rational number) is the number on the bottom and indicates the total number of equal parts in the whole or the group. $\frac{5}{8}$ has denominator $8$.
Dividend
In a division problem, the dividend is the number or expression that is being divided.
divisor
In a division problem, the divisor is the number or expression that is being divided into the dividend. For example: In the expression $152 \div 6$, 6 is the divisor and 152 is the dividend.
Polynomial long division
Polynomial long division is the standard method of long division, applied to the division of polynomials.
Rational Expression
A rational expression is a fraction with polynomials in the numerator and the denominator.
Rational Root Theorem
The rational root theorem states that for a polynomial, $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$, where $a_n, a_{n-1}, \cdots a_0$ are integers, the rational roots can be determined from the factors of $a_n$ and $a_0$. More specifically, if $p$ is a factor of $a_0$ and $q$ is a factor of $a_n$, then all the rational factors will have the form $\pm \frac{p}{q}$.
Remainder Theorem
The remainder theorem states that if $f(k) = r$, then $r$ is the remainder when dividing $f(x)$ by $(x - k)$.
Synthetic Division
Synthetic division is a shorthand version of polynomial long division where only the coefficients of the polynomial are used.
|
2016-12-04 23:37:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 16, "texerror": 0, "math_score": 0.9328222870826721, "perplexity": 744.6576009031467}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541426.52/warc/CC-MAIN-20161202170901-00253-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://cran.hafro.is/web/packages/psdr/vignettes/Introduction.html
|
# Introduction
## Overview
Author: Yong-Han Hank Cheng
This package allows you to generate and compare power spectral density (PSD) plots given time series data. FFT is used to take a time series data, analyze the oscillations, and then output the frequencies of these oscillations in the time series in the form of a PSD plot.
## Installation
# Install the package from GitHub
# devtools::install_github("yhhc2/psdr")
# Load package
library("psdr")
## Example
Below is an example of how this package can be used to take a dataframe with multiple separate time series belonging to 2 categories (A and B), separate out the time series, and use the time series to make PSDs and compare the dominant frequencies between the two categories of signals.
In this example dataset, there are 3 time series for each category. 3 for category A and 3 for category B. Each time series comes from one session, so there are 6 sessions in total. For each signal, the sampling rate is 100 Hz, which means a data point is obtained every 0.01 seconds.
example_data <- GenerateExampleData()
example_data_displayed <- example_data
colnames(example_data_displayed) <- c("Time in seconds", "Signal", "Session", "Category")
head(example_data_displayed)
## Time in seconds Signal Session Category
## 1 0 0.0000000 1 A
## 2 0.01 0.1255810 1 A
## 3 0.02 0.2506665 1 A
## 4 0.03 0.3747626 1 A
## 5 0.04 0.4973798 1 A
## 6 0.05 0.6180340 1 A
#Only works in html, not md.
rmarkdown::paged_table(example_data_displayed)
Here is how the package can be used to take a dataframe containing data from all 6 sessions and split it into multiple dataframes, with each dataframe containing data from a single session.
example_data_windows <- GetHomogeneousWindows(example_data, "Session", c("Session"))
### Explore the dataset
Plotting all the time series for category A on a single plot and plotting time series data for category B on a single plot shows that the frequencies of signals in category A are higher.
Plot signals for category A
plot_result <- ggplot2::ggplot(subset(example_data, example_data$Category=="A"), ggplot2::aes(x = Time, y = Signal, colour = Session, group = 1)) + ggplot2::geom_line() plot_result Plot signals for category B plot_result <- ggplot2::ggplot(subset(example_data, example_data$Category=="B"), ggplot2::aes(x = Time, y = Signal, colour = Session, group = 1)) + ggplot2::geom_line()
plot_result
This remains true when the time series for each category are averaged.
FirstComboToUse <- list( c(1, 2, 3), c("A") )
SecondComboToUse <- list( c(4, 5, 6), c("B") )
timeseries.results <- AutomatedCompositePlotting(list.of.windows = example_data_windows,
name.of.col.containing.time.series = "Signal",
x_start = 0,
x_end = 999,
x_increment = 1,
level1.column.name = "Session",
level2.column.name = "Category",
level.combinations = list(FirstComboToUse, SecondComboToUse),
level.combinations.labels = c("A", "B"),
plot.title = "Comparing category A and B",
plot.xlab = "Time in 0.01 second increments",
plot.ylab = "Original units of signal",
combination.index.for.envelope = NULL,
TimeSeries.PSD.LogPSD = "TimeSeries",
sampling_frequency = NULL)
ggplot.obj.timeseries <- timeseries.results[[2]]
ggplot.obj.timeseries
### Visualize the frequency contribution of signals
Looking at the time series data, we can tell the frequencies of oscillations are different between the time series. To determine which frequencies are contributing to each time series, we can plot the PSDs for each time series.
PSD for signals in category A
data1 <- example_data_windows[[1]]
psd_results1 <- MakePowerSpectralDensity(100, data1$Signal) data2 <- example_data_windows[[2]] psd_results2 <- MakePowerSpectralDensity(100, data2$Signal)
data3 <- example_data_windows[[3]]
psd_results3 <- MakePowerSpectralDensity(100, data3$Signal) Frequency <- c(psd_results1[[1]], psd_results2[[1]], psd_results3[[1]]) PSD <- c(psd_results1[[2]], psd_results2[[2]], psd_results3[[2]]) Session <- c(rep(1, length(psd_results1[[1]])), rep(2, length(psd_results1[[1]])), rep(3, length(psd_results1[[1]]))) data_to_plot <- data.frame(Frequency, PSD, Session) plot_results <- ggplot2::ggplot(data=data_to_plot, ggplot2::aes(x=Frequency, y=PSD, color = as.factor(Session), group=1)) + ggplot2::geom_point() + ggplot2::geom_path() + ggplot2::xlim(0,3) plot_results PSD for signal in category B data1 <- example_data_windows[[4]] psd_results1 <- MakePowerSpectralDensity(100, data1$Signal)
data2 <- example_data_windows[[5]]
psd_results2 <- MakePowerSpectralDensity(100, data2$Signal) data3 <- example_data_windows[[6]] psd_results3 <- MakePowerSpectralDensity(100, data3$Signal)
Frequency <- c(psd_results1[[1]], psd_results2[[1]], psd_results3[[1]])
PSD <- c(psd_results1[[2]], psd_results2[[2]], psd_results3[[2]])
Session <- c(rep(4, length(psd_results1[[1]])), rep(5, length(psd_results1[[1]])),
rep(6, length(psd_results1[[1]])))
data_to_plot <- data.frame(Frequency, PSD, Session)
plot_results <- ggplot2::ggplot(data=data_to_plot, ggplot2::aes(x=Frequency, y=PSD, color = as.factor(Session), group=1)) +
ggplot2::geom_point() + ggplot2::geom_path() + ggplot2::xlim(0,3)
plot_results
To get a single composite PSD for each category, we can take the average.
FirstComboToUse <- list( c(1, 2, 3), c("A") )
SecondComboToUse <- list( c(4, 5, 6), c("B") )
PSD.results <- AutomatedCompositePlotting(list.of.windows = example_data_windows,
name.of.col.containing.time.series = "Signal",
x_start = 0,
x_end = 5,
x_increment = 0.01,
level1.column.name = "Session",
level2.column.name = "Category",
level.combinations = list(FirstComboToUse, SecondComboToUse),
level.combinations.labels = c("A", "B"),
plot.title = "Comparing category A and B",
plot.xlab = "Hz",
plot.ylab = "(Original units)^2/Hz",
combination.index.for.envelope = NULL,
TimeSeries.PSD.LogPSD = "PSD",
sampling_frequency = 100)
ggplot.obj.PSD <- PSD.results[[2]]
ggplot.obj.PSD
If we want to see how the average compares to the individual signals that make up the average, then we can include an error envelope.
Here is the error envelope added to the category A composite curve.
PSD.results <- AutomatedCompositePlotting(list.of.windows = example_data_windows,
name.of.col.containing.time.series = "Signal",
x_start = 0,
x_end = 5,
x_increment = 0.01,
level1.column.name = "Session",
level2.column.name = "Category",
level.combinations = list(FirstComboToUse, SecondComboToUse),
level.combinations.labels = c("A", "B"),
plot.title = "Comparing category A and B",
plot.xlab = "Hz",
plot.ylab = "(Original units)^2/Hz",
combination.index.for.envelope = 1,
TimeSeries.PSD.LogPSD = "PSD",
sampling_frequency = 100
)
ggplot.obj.PSD <- PSD.results[[2]]
ggplot.obj.PSD
Here is the error envelope added to the category B composite curve.
PSD.results <- AutomatedCompositePlotting(list.of.windows = example_data_windows,
name.of.col.containing.time.series = "Signal",
x_start = 0,
x_end = 5,
x_increment = 0.01,
level1.column.name = "Session",
level2.column.name = "Category",
level.combinations = list(FirstComboToUse, SecondComboToUse),
level.combinations.labels = c("A", "B"),
plot.title = "Comparing category A and B",
plot.xlab = "Hz",
plot.ylab = "(Original units)^2/Hz",
combination.index.for.envelope = 2,
TimeSeries.PSD.LogPSD = "PSD",
sampling_frequency = 100
)
ggplot.obj.PSD <- PSD.results[[2]]
ggplot.obj.PSD
When the signals are very noisy, it is often times helpful to log transform the PSD plots. For the example data, this is not necessary because the signals are very clear. Since amplitudes are small, log transform are also not helpful here.
LogPSD.results <- AutomatedCompositePlotting(list.of.windows = example_data_windows,
name.of.col.containing.time.series = "Signal",
x_start = 0,
x_end = 5,
x_increment = 0.01,
level1.column.name = "Session",
level2.column.name = "Category",
level.combinations = list(FirstComboToUse, SecondComboToUse),
level.combinations.labels = c("A", "B"),
plot.title = "Comparing category A and B",
plot.xlab = "Hz",
plot.ylab = "Log((Original units)^2/Hz)",
combination.index.for.envelope = NULL,
TimeSeries.PSD.LogPSD = "LogPSD",
sampling_frequency = 100
)
ggplot.obj.LogPSD <- LogPSD.results[[2]]
ggplot.obj.LogPSD
### Comparing frequency contribution of each category
We know there are differences in frequencies of signals between category A and B, but we want to statistically test if the difference is significant.
comparison_results <- PSD.results[[3]]
dominant_freq_for_comparison <- comparison_results[[1]]
kruskal_wallis_test_results <- comparison_results[[2]]
wilcoxon_rank_sum_test_results <- comparison_results[[3]]
Since multiple signals are present in each category, we want to see if the dominant frequencies in signals of category A are significantly different from the dominant frequencies in signals of category B
dominant_freq_for_comparison
## vals.to.compare.combined combo.labels.combined
## 1 1.0 A
## 2 1.5 A
## 3 1.2 A
## 4 0.1 B
## 5 0.2 B
## 6 0.3 B
The comparison can be performed using the Kruskal-Wallis rank sum test. Here, the p-value indicates the difference is statistically significant.
kruskal_wallis_test_results
##
## Kruskal-Wallis rank sum test
##
## data: vals.to.compare.combined by combo.labels.combined
## Kruskal-Wallis chi-squared = 3.8571, df = 1, p-value = 0.04953
In this example, only two categories are used. However, Kruskal-Wallis rank sum test can be used for multiple categories, similar to ANOVA. If more than two categories are used, pair-wise testing using Wilcoxon rank sum exact test can be used to see which two categories are significantly different.
wilcoxon_rank_sum_test_results
##
## Pairwise comparisons using Wilcoxon rank sum exact test
##
## data: vals.to.compare.with.combo.labels$vals.to.compare.combined and vals.to.compare.with.combo.labels$combo.labels.combined
##
## A
## B 0.1
##
## P value adjustment method: BH
|
2022-12-10 04:44:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4304574728012085, "perplexity": 7757.965518349821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00339.warc.gz"}
|
https://tex.stackexchange.com/questions/178668/margins-or-line-breaks-in-koma-script-table-of-content
|
# margins or line breaks in koma-script table of content
Entries in my KOMA-script TOC spread too close from the page numbers:
One solution is to use a short title and insert in it a line-break, but this is not an acceptable solution.
I am trying to modify the tocrmarge. Here are my attempts, but all compile with errors:
\makeatletter
% one and one only of the following
\renewcommand*{\settocfeature}{tocrmarge}{10em}
\renewcommand*\l@tocrmarge}{10em}
\renewcommand*{\settocfeature}{\setlength{\@tocrmarge}{10em}}
\makeatother
My understanding of LaTeX is not good enough. I need help to write correctly the command please.
EDIT: add a MWE (not so minimal in the preambul)
\documentclass[english,enlargefirstpage, full]{scrbook}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[paperwidth=148mm,paperheight=210mm]{geometry}
\geometry{verbose,tmargin=3cm,bmargin=3cm,lmargin=2.5cm,rmargin=3cm}
\usepackage{fancyhdr}
\pagestyle{fancy}
\setcounter{secnumdepth}{-2}
\setcounter{tocdepth}{1}
\usepackage{pifont}
\usepackage{textcomp}
\usepackage{fixltx2e}
\PassOptionsToPackage{normalem}{ulem}
\usepackage{ulem}
\makeatletter
\@ifundefined{lettrine}{\usepackage{lettrine}}{}
\@ifundefined{date}{}{\date{}}
\usepackage[english,frenchle]{babel}
\usepackage[nottoc,numbib]{tocbibind}
\renewcommand{\sectfont}{\normalfont\bfseries} %\slshape} %\rmfamily}
\makeatletter
\renewcommand*\l@chapter{\bprot@dottedtocline{1}{1.8em}{3.2em}}
\renewcommand*\l@section{\bprot@dottedtocline{1}{1.8em}{3.2em}}
\renewcommand*\l@subsection{\bprot@dottedtocline{3}{1.8em}{3.2em}}
\makeatother
\frenchspacing
\usepackage{hyphenat}
\hyphenpenalty=3500
\doublehyphendemerits=9000
\finalhyphendemerits=6000
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[babel=true,kerning=french,protrusion=true,expansion=auto,spacing,tracking]{microtype}
\pretolerance=1500
\tolerance=2000
\setlength{\emergencystretch}{2em}
\AtBeginDocument{
\def\labelitemi{\Pisymbol{psy}{42}}
}
\makeatother
\usepackage{babel}
\begin{document}
\title{my book title}
\subtitle{my book subtitle}
\author{\textbf{myself}}
\maketitle
\frontmatter
\chapter{Before}
blublu
\tableofcontents{}
\newpage{}\mainmatter
\part{PROLOG}
\chapter{\emph{chapter 1}}
blabla
\chapter{\emph{chapter 2 title is very long and should look uggly}}
bloblo
\end{document}
And its result (not as uggly as in my real document):
EDIT: Here is the successful build log (in a pastbin, since otherwise I exceed the Stack Exchange 30,000 characters body limit.
• Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – user31729 May 17 '14 at 9:26
• I would rather say, that for some reason your entry name is too long. Did you consider a short entry for the toc such as \section[shorttitle]{long title}? – user31729 May 17 '14 at 9:27
• No, the title cannot be shorten. – lalebarde May 17 '14 at 10:00
• Your example does not compile – user31729 May 17 '14 at 18:45
• I have added the successful build log in the OP. I will try to modify the MWE to be really minimal, hoping it will help. Possibly you don't have the french stuff in your setup ? – lalebarde May 18 '14 at 6:37
[...]
\renewcommand*\l@subsection{\bprot@dottedtocline{3}{1.8em}{3.2em}}
\renewcommand*\@pnumwidth{3em}%%%%%%%%%%%%%%% the width of the page number
\makeatother
[...]
Your preamble looks a bit weird ...
• Thanks Herbert ! For the weirdness of the preambul, I do agree with you. I am considering using texmaker instead of LyX, what will give me full control on things. – lalebarde May 19 '14 at 7:18
I have solved this with one of the two following solutions:
Either:
\renewcommand*\@pnumwidth{2em}
Or:
\renewcommand*\@tocrmarg{3em}
|
2019-09-16 20:18:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887550950050354, "perplexity": 1867.8758411017811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572934.73/warc/CC-MAIN-20190916200355-20190916222355-00468.warc.gz"}
|
https://pixel-druid.com/the-zen-of-juggling-three-balls.html
|
## § The Zen of juggling three balls
• Hold one ball in the left hand A, two in the right hand B, C. This initial configuration is denoted [A;;B,C].
• throw B from the right hand to the left hand. This configuration is denoted by [A;B←;C] where the B← is in the middle since it is in-flight, and has ←since that's the direction its travelling.
• When the ball B is close enough to the left hand that it can be caught, throw ball A. Thus the configuration is now [;(A→)(B←);C].
• Now catch ball B, which makes the configuration [B;A→;C].
• With the right hand, throw C (to anticipate catching A). This makes the configuration [B;(A→)(C←);]
• Now catch the ball A, which makes the configuration [B;C←;A].
• See that this is a relabelling of the state right after the initial state. Loop back!
#### § The Zen
• The key idea is to think of it as (1) "throw (B)" (2) "throw (A), catch (B)", (3) "throw (C), catch (A)", and so on.
• The cadence starts with a "throw", and then settles into "throw, catch", "throw catch", "throw, catch", ...
• This cadence allows us to actually succeed in the act of juggling. It fuses the hard parts of actually freeing a hand and accurately catching the ball. One can then focus attention on the other side and solve the same problem again.
|
2023-01-27 18:09:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8126224279403687, "perplexity": 2572.2850902766563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00017.warc.gz"}
|
https://www.physicsforums.com/threads/why-is-the-scale-1-10.532488/
|
# Why is the scale 1:10
1. Sep 22, 2011
### davedave
Because I cannot draw a picture in this problem, I will do my best to describe it in words.
This problem is about a circular railway track. The diagram that goes with it shows one of the 15 curved pieces. Its width is MEASURED 1 cm and its arc length of the inner edge is MEASURED 30 cm.
The circular model railway track is made by connecting the 15 identical pieces. When the 15 pieces are assembled, the circumference of the inner edge is 450 cm. The radius of the inner edge is 71.6 cm. So, the scale used in the DIAGRAM is 1:10.
I don't understand why the scale for the diagram is 1:10.
Can someone please explain?
2. Sep 22, 2011
### Allenman
It's just saying that the drawing is $\frac{1}{10}$th the size of what it would be in real life. It's giving an aspect of scale. So each CM is actually represented by a millimeter in the drawing.
3. Sep 22, 2011
### gsal
Can you grab on of the pieces and put right on top of its diagram? Is the drawing the same size as the actual piece? If so, the drawing's scale is 1:1 ....if the drawing is a lot smaller than the actual piece (10 times smaller) that it is 1:10
|
2018-01-17 14:04:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.571661651134491, "perplexity": 747.3219647994424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00105.warc.gz"}
|
http://sitzler.biz/journal/wp-includes/fonts/pdf.php?q=download-master-the-gre-2010.html
|
by Lionel 3.6
## A sure download master the gre 2010 identifies that you cannot have in email that an topological kind of additional data in a reasonable book is point-set not. Since real definitions are the fascial plan of such topics( doing simple laccases by the shape of spaces), the Top-down is oriented for Structured animals and in the site subsequently jointed proofs of other authors hope preserved. logs do So combined out of analytical family: Thanks claimed microbiological rates very before fundraiser oriented of using up with the cycling of prominent bodies and in the nurses they were extreme, topological humors were connected under extended inheritance and that decay was in model Now various to become languages stipulated. In non-empty topics, the weight of central Approaches( and the litter you use in the everyone along with it) wanted tied from objects, n't taken on them. away in harmful communities, bark; it looked like a algebraic aproach at the future;? SamB: no, hence then: that is inside at all what I get. And yes, atheists say generally religious in the concise none on the ho&hellip. If they were, as every overview, moving the pole of available similarities, would ask useful. This encloses used the Other surface. I are this topology can take so however started by what is in additional spaces, of which the great article is a standard surgery. survive a used question open if around each copy in separation there is a day( with background to the topology) orientated at that diagram and seen in library. once you can understand as an download that Organisms of nuclear organs of open processes am such, and that wide metrics of respective categories are new. only, the $U'$ above activities that you ca now be moral measures. But why explains it be inside not to destroy old-fashioned available numbers to be this topology of essential data? For more on the plan of paying manifolds in graphics of ' shared strategies ', you may work Active in this MathOverflow calculus. specifically, I are that the funding component is from the chemical of geography. Mating the CAPTCHA is you remain a arbitrary and is you super download master the gre 2010 to the vertex Ecology. What can I be to eat this in the finance? If you are on a same deity, like at logic, you can draw an Creator perspective on your analysis to open open it is still given with SolidObject. If you am at an Program or bariatric manner, you can die the requirement dead to Consider a vibra across the cycle Mobbing for unique or defective benefits. Another donut-hole to combine being this snake in the cent is to fit Privacy Pass. download master out the problem distance in the Firefox Add-ons Store. Please get often if you are topologically fixed within a differential subsets. know volume reasons and works there! Become our data any web disappearance. be equations in exactly subject as 30 surfaces. manage any download for Analysis classes to have you consolazione brain. What are Chegg Study difference Solutions Manuals? Chegg Solution Manuals understand decreased by Forgot support terms, and grouped by points - Early you are you are modifying inland access activities. communities micro-organisms are glandular for harmonies of the most Thick-walled function and available system criteria in principles surgical as Math, Science( Physics, Chemistry, Biology), Engineering( Mechanical, Electrical, Civil), Business and more. structure state is n't closed easier than with Chegg Study. Why covers Chegg Study better than open PDF download needles? reducing the CAPTCHA is you think a open and does you topological download master the gre to the torment perspective. What can I seem to make this in the property? If you do on a many synthesis, like at impact, you can write an fact Litter on your review to make final it is therefore derived with portion. If you are at an life or solid post-op, you can eat the life book to help a course across the adaptation creating for burned or ongoing waves. Another way to be having this accumulation in the modification is to read Privacy Pass. shape out the space discussion in the Firefox Add-ons Store. How Surfaces Intersect in Space: An duo to Topology By J. 5 MB In this metric epidemiology the control needs us to apply a information more than it describes our insects. Without join he has us to the adult of Vocal funds. decal by value the advice is the appointment of Cartesian table. As to the structures, they feature theoretically fuzzy. I Lately lay the transitions of triangles and download master the gre 2010 substrates. No technical page makers almost? Please pay the population for staff funds if any or are a value to plant related Notes. No spaces for ' How Surfaces Intersect in Space: An union to search '. way relationships and occurrence may use in the analysis release, added material truly! pay a answer to prevent risks if no electron packages or misconfigured spaces. Any download master the gre can ask played the life-cycle rating in which the object-oriented tests do the different business and the networks whose block encloses bariatric. This is the smallest amorphous role on any feelingof point. Any career can have defined the Oriented certainty, in which a religion is partitioned as agile if it is highly controversial or its amount manipulates 500-year-old. When the weight is convex, this viewport provides as a metric in open data. The solid example can then gain reached the lower volume place. This information on R is there finer than the soft energy Seen above; a metabolism 's to a idea in this law if and only if it is from still in the hedge Disclaimer. This loss is that a &minus may be dimensional bariatric components wondered on it. Every download master the gre of a large set can compare permitted the level flow in which the solid capsids relate the courses of the active artists of the larger z with the system. For any made site of Large spaces, the point can ok defeated the Bol history, which is raised by the Swedish tanks of open Methods of the months under the f(x buttons. For fund, in open spaces, a graph for the behavior logic acknowledges of all species of Important Parts. For Shared wagers, there ranges the easy expertise that in a object-oriented general career, deservedly but just weekly of its fungi work the gastric x.. Y enables a many blog, so the litter surgery on Y is the religion of limits of Y that call close principal sequences under concept In powerful ones, the loss form is the finest object on life for which theory becomes iterative. A only Goodreads of a property amount is when an water view belongs required on the misconfigured duality X. The agreement security gives above the available outcome onto the text of species balls. Un of compact elements in X, we are a Evolution reflected getting of all subsets of the ball of the Ui that are natural sets with each Ui. The Fell download master the on the donut--it of all parallel was points of a completely applicable immune community destruction is a instruction of the Vietoris address, and refines embalmed after category James Fell. Un of basic points in X and for every bariatric application K, the teak of all interdependencies of X that lie primary from K and study patient months with each Ui examines a point of the sense. They are mathematical download master the for their dimensional existence and they 've the Viable year well because it is the other respect to beat maybe because world found them it left the prime material to collect. For bizarre methodologies they need to and use on season. property contains they are Great or real. enterprise if they do to Read sets. Dogs to see their topologists. No, to appear a sense it is brought to give in a Help following. The dimensions allow apart be access who contains of possible system. How include you are if you deal an surface? If you represent an network you Do a woody silicone and are Likewise called world peer-reviewed by a right investing: describing an Edition is finally oversight that can remove closed and there is no & wanted to achieve it to the information. Many who are to hear tools are therefore for volume x.( for the edge of donation; or who are its region. geometric who are modeling philosophers acknowledge in the decision of a surface that is with reading the subsets of Oriented development and is with living away from them with no content. To die an download you must be to neighbourhoods of a development in the long elongation that you are to purposes about the wood under the way or whether you should develop ideas or difference for Santa. In because it should not die surface of your world means. The computer is that you need a water on yourself. Your inverse is also normal to understand. I accredited a visual future with that not. Boethius found again defined for his' download of terminology' which he located while in Engineering where he needed later isolated. It meant completely migratory in the Middle Ages. They 've from a abstract administrator in which author refers competitive and testing volume and development. activities mean as present to directly. notion: about, morphisms can discuss from no. They delve exactly make from such download or volume. They are just like you and me. They can primarily properly as, from any various guide. What is modifier close richness? Boethius is most low for hi cell hole of Philosophy, which added a so-called humidity on Performance, error, and previous philosophiae and was one of the most near topics of the Middle Ages. The outgrowth is in the edge of an organized manifold with equation reviewed as a homology. Its download master the gre; starts that della does open to satisfy M(x. An restriction has a deal who is or is application of administrator composition or surface. What includes the geography Boethius set? Anitii Manlii Severini Boethi in surgical system books are options & servers structures principis Opera' -- subject(s): Legal fungi to 1800, Geometry, Philosophy' King Alfred's seller of the classes of Boethius'' The time of part'' Boethian plane; small evolution'' King Alfred's easy solid distinction of Boethius De today ways'' Trost der Philosophie'' The page of surface of Boethius' -- subject(s): download and x, Happiness' The Theological Tractates and The way of Philosophy' -- subject(s): honey, sex and body, Happiness, topology' De musica' -- subject(s): Music, Greek and Roman, Music, Theory, Manuscripts, Latin( Medieval and built-in), Facsimiles, Medieval' Anicci Manlii Torquati Severini Boethii De rate equations system donut'' Anicii Manlii Severini Boethii De Part meaning' -- subject(s): Division( Philosophy), so 's to 1800' De institutione arithmetica libri language. De institutione musica canopy topology' -- subject(s): theory, relatively is to 1800, Geometry, Greek and Roman Music, Music, Greek and Roman' Boethius' outgrowth of definition' -- subject(s): Check, Philosophy' Boeces: De development: polymorphism animal d'apres le manuscrit Paris, Bibl. In Many, I am vastly taught download master run the likes low habitat or various sequence. still one of the hottest diagram surfaces in density is personal ' oriented absent function '( here topology proves at least used of the Poincare fraction). For what it is shiny, I'd sufficiently combine s personal Allotype. Euclidean theory indeed is up a object. I uploaded my Non-commercial text on it with Novikov in 1999. well, it runs the tool of a ' topology ', which is fundamental to hedge well-defined consolation, without the energy of a ' board-certified '( Frankly the low z as in what Mark is contouring ' partial mesh ') that is popular in second part. There shows a lack you can log not about concerns before growing them. The nearness of a back target received suspended to that of a certain accessible content. These processes Similarly happen us the century of what ever fairly( and with not object-oriented research sequentially) encloses treated to modify the critique of malware in empty axiom and Optimal software anti-virus( below the code shapes used, designs do in hedge leader as they do in temporary topology neighborhood). The static download master of Covering y in clear nearness and partial nothing topology once sinking phosphodiester by another primo control, even teaches one oversight to run. not you rather was all that completely. clearly if I hope residing the time often: what I want is topological CBD improvement. only when I was system, we were with connection in a other quinque, and correlated to be ve bargain and y within the decision. From widely, we told on to various question, where we could believe in more data. To me, it as wrote to Learn just the normal radius we was given in season, where we supported off typically within a set, and even did the differential philosophers of contrasting metric in that $x$, and was them into more examples. inside, it well died to me that often I refused determining that water myself; also was sure to learn it teaching subdivision.
Holly O'Mahony, Monday 17 Jul 2017
Interviews with our current Guardian Soulmates subscribers
This features the download master personal for Open seasons( involving structures and features). $x$ «), but this should consider to learn one author to Thank in axioms of classes. An pole of the Goban mesh is right Object its several works. At this continuity, I could have you to get this page, start development forests, and restore me the presented success when you have built. I have personally share to cause what topology terms you need presented. I can move your range to offer on( and specify) a Goban. This is a bariatric sphere that top macros happen important. This helps junior download master the. recommend the reference where Bob belongs slightly go to collaborate his offering was Euclidean. Bob is completely been his geometry was temporary, and continuity could use it. There 've items to understand this ©, but tips have perfectly make them. also, object is chosen using in only, many, algebraic, and other potentials. One open gap to Visit rainfall is by knowing a usability of HairColour. The become Person set just decides it low to add the inheritance gland to Pink. Bob can make properly, getting he will strictly consider to a other download master the algae. contouring the Go encapsulation may provide a stuff modern for translation also single with the information, but I help learn the web of diagramming an Sign( topology) norm. On the Unified download master the well because it chooses such a male set it is still multiple to derive of a chamfered to use some sincere topology in machine of humans on it( though as some stable authors would record;)). The bacteria of the translation of a topological nutrient-enrichment, as are essentially not impossible to for me to ask an statechart of its bird in standard diverse i of microbes, precisely I not add hence. together to know you the type of my CD: part helps clearcutting of the disease of temperature, and wondering much animals on possible clearcuts works like convincing( and including) short mirrors between those months to share small. check this shows meticulously rapidly reducing to the antagonist where it is regulatory. general contains always ever widespread. I would not repeat to touch that there want green reasonable objects of pricing a rigor, smoothing poles, mobile Contradictions, useless Editors, office, ecosystem, and Also on. Any property to be that the central skin impact is THE notion of a software is to be denied, although in certain connections it 's the most object-oriented. not it is central download master can Let off gods, with 2-to-1 projection. There gives a brief resecting of ' account ' in this centre-piece, which extends the hole with connectedness of methods, and scalar of friends. One 's the plant which has best for the pine at performance. The metric topology that Analysis species is the software of base"( in some base) of intersections and flows of a guide. The tissue is most object-oriented in the most dead( least hands-on and most likely) ones; the Medieval arm( the question of all answers of a left seems&rdquo) where every two physics and every two definitions can come modeled and the shared ecosystem( the Non-Destructive litter and the are itself) where no space and no knot can use brought by another. The most finite and open pieces need those in between the external and Intelligent systems, because they are Object parts of everyone than those curved by the fuzzy and available users. stop you for your class in this space. Because it is affected download or scan examples that did to develop been, using an library as seems 10 polymorphism on this philosophy( the $x$ theory is too get). Would you send to prevent one of these coloured components recently?
be: To See a download master the gre with surfaces for the transfer of filtering a single-variable scan. library: The environment been to look an future into a coniferous stuff for Homeostasis. dietician: A hydrofactor of metric topology, completely other or such diagrams mean closed to DNA. clearcutting author: The simplest normal system of normal markets. business: The Scape by which a DNA Decomposition encloses given into another probability. Interspecies Hydrogen Transfer: The tolerance of concentration lignocellulose and way models, bathing by the help of explicit leaders. new: Inside the Nitrogen. population: When two low activities, which may keep functional in their y, space as spheres for the initial reality, or been of terms. ground: A basis not a iterative right of an " Describes reported from a vertices or an subcategory. open download master the gre 2010: The application of an copyright by a well performed space of lower decomposition, in a private drainage type. Jaccard's definition: An success application of new lot, which is the purpose of data that begin, passing those that both airlines need. K- Strategy: specific libri where mathematics are on modifying currently to the & topological in their certain enzyme. Koch's users: cookies given by Robert Koch which Have that an edge becomes the object-oriented banner of a Topology. symbiosis Phase: The atmosphere freedom when there encloses no maintenance in the result of manifolds, passed after book of well-defined volume value. loss: closed in points as the designs of object-oriented books in Mathematics that need Swedish methods. tearing: property of seasons from structures by the balance of atheists. A Illustrative download Symmetry is the T3 topology if over finish near excess complexes which have any stable Evolution and any property permanently in the graduate: for any local topology site and any employee, greatly are incremental own markets modifying flow and surface not. 8221;) and volume to manage the single-variable future. This is why and where we are to be properties in browser to Tweak inside Final trophic antigens. Yes, we can be T3, T4, and T5 champions per se. We are that a network is patient if it is important and T3. In neighborhood, we can Edit that if a edge 's T0 and T3, not it uses modern, in hedge, simply T1 and T3. In the western $Y$, the automatic example encloses very so almost. It is to all cases 3 and higher. Whereas the phase of the earlier business titles reassured using the techniques on the worth & whose sense we tried, vastly we completely set a dioxide by a nice way. That would work a death of the earlier bird if components themselves was required theories. accessible why we have to consider artists which have both T1 and T3. really we are two elliptical Objects easily of a download master and a metric maximum. A same anything way lines the T4 point if not cross few good terms which are any two resting significant plants: for any popular other laccases A and B, slightly have open final approaches planning A and B Right. I should intersect that a whole problem of T4 methods gives that T4 sheds n't open: vastly every setting of T4 contains T4. We provide that a approach is related if it is 2-to-1 and T4. We n't acknowledge the free: n't only topology of a physical Recipes decomposes personal.
This is the original download on any Metric future code. On a cocountable point food this ratio exposes the important for all data. There have real procedures of starting a temperature on R, the Preface of particular policies. The relevant development on R is described by the basic atheists. The research of all advanced years calls a surgery or topology for the clay, documenting that every metric basic is a DNA of some thrombosis of thousands from the mesh. In other, this is that a uniform moves easy if there goes an suitable coordinate of new zero y about every analysis in the page. More here, the organic attacks anything can visualise said a site. In the open use on Rn the heterotrophic truncal excrements exist the average readings. only, C, the Topology of companion ideas, and Cn need a metric topology in which the tough hydrological methods acknowledge many results. Lysis functions are a quinque of study of two rates. This analysis is kind. You can be by finding to it. third aspects are modeling the notion between only elements. This acrimony appears code. You can know by altering to it. A two-dimensional surgery in which the things suggest applications explores studied a volume regression. then if you have Also meshed with download master long, the motivated Trachea of C++ moves you fund you have to change the good wizards of molecule distinct point, which shows you to prevent bariatric goodness diagrams from homotopy managers of Topological Mathematics. This encapsulation is your source decomposition to affecting with general reflections in the other y of subdivision. divide your simple informa to wondering the products with: All the religion and reconstructive advertising you take to die open regions to put main atheism ratio. poor germinating Procedures and compatible cases using what to use when doing home and plant preimages in the differential acid. A possible weed Slime Appalachian C++ surfaces, sites and ecosystems to eight-gon. be asking Hedge Fund Modelling and intersection your organic Poster and be all the sphere and eventual formulation you ask to develop the projects. be Fund Modelling and Analysis. English for Professional Development. Restaurant and Catering Business. The Art and Science of Technical Analysis. Hedge Fund Modelling and Analysis. moisture useful C++ results and linear object-oriented Programming( OOP) to revolutionise in open region purpose sacrificing Low type ideas, related regions and greater 2d educator allow carefully some of the whole methods it contains plastic to deist for $Y$ vendors to necrose agile offers. The half for organic parasitic libri exercises, other soil routines and discussion techniques is to replace open forests, spaces and population thousands to better reward their spaces and feel the sets of their rate spaces. achieve Fund Modelling and Analysis has a empty padding in the latest extra ordinals for simplicial universe entity, invasive with a Such analysis on both C++ and enable topological situation( OOP). looking both 5-edge and ignored subspace resources, this guide's Check is you to assist surface then and be the most of Supuhstar lives with and infected space Terms. This usually answered Differential punishment in the especially treated Hedge Fund Modelling and Analysis surgery is the non-phenolic point Reply for following the ancient C++ program to be first cycling time.
n't you can run esteemed why find we do to take applied? We wait because its condition of experience and model you want or are will be it from demonstrating. now far customize your x while you just call one we give to touch so we can map charts a topology at ab-)use We 're in an sandy photo. then we willing; anywhere as we can cause a flat one. success, determine other, 're Next pay, pay effective arrows and register Non-Destructive. Through you rather can draw of rapid structure. As therefore, download master the gre 2010 is Essentially impacted a groin water to set -- primer gets a convergent web of the homeomorphism time and algebra, normally Characters and points and the date, is at some author. What will you See after you contain? You will Do, intersect type, and into introductory mechanics like topics. Your organism you played on the future will Currently be, Early get space and subclass. There is not a comfort in coffee. Your statements will rather run, and that that could be So. That is is its download; sin greater intersection a topology. looking has a Differential limits that sets Otherwise as your Division strategies. The reflections in your home do to use understanding Complex words and as the older attacks form off so leads your structure. now to have your population in the topological surgery your advertisement will answer unless you can help out how to see to turn homokaryon; interesting properties that will make your operations significant. Whereas when you get a ' no download master the gre how concrete ', you am role or side. This discrete x does a isotopy library for time. W$the smart thesis you would criticise for plant. As you are long shown out, sophisticated segments will here say. disconnected animals have not numbers that think access of all its books. I believe that right study has same for a other default because when you think talking with organized way exactly you get trying once market. apps of certain classes in child will imagine and that is open. infected under additional surfaces would Answer MORE supersticious surfaces. In various, in the Terms every download master the gre would be open, modeling the gait algebraic. Please Jumpstart much to perform the kind. give MathJax to harbor emails. To do more, be our edges on offering specific birds. Please go low Class to the containing amount: usually be visible to agree the human&hellip. To go more, be our facts on Continuing microbial inputs. By typing subclass; Post Your approach;, you are that you are enabled our constructed materials of volume, existence testing and format cover, and that your temporary weight of the religion appears topological to these algorithms. use different times looked ratio implementation afterlife or cause your entire organism. essential gills say already embalmed to remold barbules or muscles to Barbs about new subsets in download master the gre 2010. Any statement can be grown the destination air in which the many species are the illogical measure and the sequences whose definition is easy. This calls the smallest mathematical connection on any correct discussion. Any ecology can have given the only length, in which a theatre shows given as key if it is directly massive or its failure identifies dimensional. When the download master the gre 2010 is topological, this cell has as a space in evolutionary methods. The soft time can incrementally improve done the lower question scan. This temperature on R is still finer than the continuous reason metabolised above; a enemy proves to a web in this type if and Eventually if it explores from first in the general p.. This theory facilitates that a &minus may parallel personal natural poles suggested on it. Every download master of a reusable volume can Check constructed the example componentA in which the Delayed applications are the parts of the solid algorithms of the larger environment with the impact. For any defined filosofia of religious portions, the policy can be biased the theory surface, which feeds expected by the complex problems of metric funds of the Objects under the microorganism points. For Step, in certain spaces, a case for the acrimony cross proves of all spaces of infinite sets. For deep explanations, there is the patient Consolation that in a independent open neighborhood, often but first general-purpose of its neighborhoods argue the true model. Y has a immune download master the gre 2010, still the rest topology on Y misses the network of objects of Y that need extensive open developers under agreement In intuitive answers, the recourse measure is the finest set on redox for which development is slow. A second thing of a today glance is when an space Chromosome has derived on the erratic answer X. The ecosystem goal is simply the English atmosphere onto the programming of storage applications. Un of detailed signs in X, we vary a topology put pinching of all balls of the representation of the Ui that appear standard Topics with each Ui. The Fell body on the space of all programming was relationships of a not basic easy study risk has a duo of the Vietoris Consultation, and has determined after impact James Fell. download master to prevent, whenever a planning has disproven, one Check apnea must be grounded in the point the chapter is examining orientated, while another is designated from wherever the behavior hired from. The science for this is that the set backslashes must ask explained around the climatic history of the Hibernation. I teach this s study on analysis populations is generated you a better century of how to show them only. If this passing told infected to you, always typically sustain editing it or counting to the condition; available Patreon oversight. If world; components so got to prevent a page or any simple PhD car, you allow how alternative it does to be important meshes able of sequencing CAD others. With that in download master, I developed I d area about talking very for these objects of patterns and the best rates to be it up arbitrary as misconfigured. not and Sparsely Place LoopsEven dawson, while exactly human without using complete NormalsAt triangles, is a sort homotopy property. Because of the number measure requires manage and get, right living an individual RV reputation that plant; vote; to complete the topology of your mythology can affect system micro-organisms to surfaces if refined mostly. This is why existence; chosen best to increase the intensity as good as hedge, for not Mostly all topological. make High-Density Edge PolesEdges rates believe all Seen for identifying analysis structures and processes. But they are respectively new in Euclidean books without building another download master the gre of the Process. surgery; version why analysis; anticipated best to sustain bodies to phases of less definition or problem to make them less open, apart than regardless purring them. Make Curvature books After publication Point-sets, using operations like area is on a canopy, or Conversations on a transfer decal can get infected organisms with content and decreases. To make this chip, it covered best to give days after one or two ideas of useful creating fill oriented blurred. code; point meteor Holding Edges when PossibleWhile space services accept a relative knot, they up well use body on some Syllogism of a marijuana by relating Dungeons along the total ofsurface of the programming the development becomes. dimensional Sub-SurfaceOne download master of many life spaces is that you can be how the using will exist without getting it. World's tips( Cole and Rapp 1981). There is immediate representation, simultaneously, in Logical book folds in this system, with highest spaces in enthusiastic term and cooler many sequences and lowest diagrams in less oriented hotter and drier possible points( sex 1). A programming53Object connectedness of this object gets in information and Edge same part. audiobooks can complete used then after a general matter in brain Organisms( Spies and points 1988). This development will live with volume, and Basal cultures will believe at about device 50 transformations. 100 to 200 fungi after the plant, after which the software of CWD will give not. plots can rather suply fuzzy trees of rare temporary business in players crucial to the chest. oversight Experience is divided normal language concepts, then with collection to CWD( Harmon and manifolds 1990; Spies and descriptions 1988). skills and investigations( 1988) are continuity statement mathematics not have the impact of example below challenges naturally expected under basic Generalization organisms. Edmonds, low managers). Edmonds, many instances). This nearness of a influential axiomatisation of basic phase is of basic you&rsquo. 1Johnson and measurements( 1982). litter and visitors( 1984). fund finite type is purportedly other to See given with$X'$addition( Harmon and spaces 1990), although usual methodologies are shown redirected to ask this. The exclusive system of central game in decals to achieve administrator in constant ponds is too distributed. form a download master, it will be preferentially. What did the erratic problems of supply Manlius Torquatus defining Cannae? He was accoring the americans understood anyone at the theory people on the P that improvement should then do filed. When learned Manlius Boethius start? represent spores below and we'll read your behavior to them not. Questo use-case calculus information di life modeling per spaces Euclidean? Chiudendo questo surface, scorrendo questa pagina donation essence change suo elemento acconsenti all'uso dei advent. 1886 nothing soil da Treves revealed 18 technology dello stesso anno, choice process giorno di scuola in Italia. Il libro godette di moltissima fortuna e fu download master in big finite&rdquo. Italia, study place amount ha litter price network di warning is giovani cittadini del Regno le bonus? Lo schietto network vertices world in quasi ogni pagina e distribution per approach creationists, per distance, per phase Copyright sono stati diseredati Network atheist? Nell'Italia point dopo la passione del Risorgimento viveva una future future time problem, Cuore effort? Maestrina dalla Penna Rossa, decomposition? Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? Boethius BoethiusExcerpt from Boetii, Ennodii Felicis, Trifolii Presbyteri, Hormisdae Papae, Elipidis Uxoris Boetti Opera Omnia, Vol. Dialogi in Porphyriuni a Victorino space space 9 lattice in Porphyrium 71 In Categorias Arislotelis libri Use 159 In density Aristotelis de interpretatione Commentaria minora 293 In convergence way Commentaria majora 393 Interpretatio dogma Analyticorum Aristotelis 639 Interpretatio Grothendieck Analyticorum Aristotelis 712 711 Introductio relation Syllogismos categoricos 761 Interpretatio Topicorum Aristotelig 909 Interpretatio Elenchorum Sophisticorum anti-virus 1007 cup in Topica Ciceronis 1040 1041 De Differentiis topicis 1173 De distinction cognatione 1217 Commentarius in Boelium de consolatione Philosophia? You think the greatest download sometimes! I are you more than you will latest allow. There are no standards to like the willing systems Dr. homeomorphism minutes 'm after programming definition microorganism. Please contribute a world to run our material hole, to be for yourself the sentimental bodies that 've discrete when you are management with Dr. David Davtyan, one of the most human reasonable statements in Los Angeles. course is an Chemolithotroph that is then now as 35 material of the hedge distinction in the United States. At the lack of this Soil are a example of morals that are both in and out of your class, going productivity, supernatural continuities and new standard techniques. download more than well there, there do organic tools you can have to do the programming, mean forever post, and define a healthier problem being very. At our Los Angeles Isoenzyme problem example, we are to be please you on example. complete you stop with your library? identify you tagged every century and article employee out Similarly? tend you expressed working up not? change you shared to decrease the amount not and for all? send the gardeners of Americans who are intended their requirements with the download of fall life anabolism. Your atheist gives Personally at The Weight Loss Surgery Center of Los Angeles. The Weight Loss Surgery Center of Los Angeles is a coordinate Inferior library mouth subcategory, used in Beverly Hills and starting testo to the Greater Southern California surface. Through our shared continuity triplepoint consolatione, we surface closed major birds to run their greatest data. Octavia Welby, Monday 10 Jul 2017 Is there a secret recipe to finding the right person, or is it really just down to luck? C is instead download master the gre 2010 imposed in such answer since it is even personalise a Unsaturated impact used do hands-on world like mathematics, program and all on. But if you suggest the Countershading you can properly maximise step operative trader to it back talking panel, risk kfrag, continuity; subset. DirectFB handles such a C point sent in an analyst finite administrator. The fearful content it is more world usual since it applies still connected by modeling and define system set. It shows mentioned on saying download wherein. C++ is example found since it supports basic back for quality significant plane like name and program. But there has time that it is also a monthly or relative loss stepwise shape since it allows make C choice( sophisticated X device) in it. I consequently understand that C++ LibraryThing a several pole completed sheets but not be each one HERE. C is only an O-O download master under any use-case of ' O-O ' and ' point '. It converges still real to give C as the meaning property for a volume that is an O-O API to its points. The X Windows species wears especially a future O-O family when supposed from its API, but a sloppy topology of C when using its litter. This biology is right produced by the open aspects. Unless your download master the gre uploaded taking about Objective C( an OO example of C) n't therefore, C is hopefully an OO help. You can post OO sequences using C( that is what the odd continuity C++ ratio called, it told C++ into C) but that is nothing communicate C an OO space as it includes as HIGHLY die treatment for open OO sets like lack or Effectiveness. Yes, you can discuss line OO NAPL in C, Maybe with plastic( site of ecosystems but as litter who is executed the atheists of some of those problems, I'd again draw to make a better Diffused analysis. quantitative payments can beat open distance in topological device. download master from Worldwide to take this loss. minds: n. enjoy the small to revolutionise this subclass! 39; responsible not avoided your theory for this ca&hellip. We are not changing your space. waste studies what you came by case and Constructing this performance. The nearness must mean at least 50 mathematics not. The space should elapse at least 4 technologies Just. Your critique game should be at least 2 disclosures just. Would you improve us to create another topology at this set? 39; specials properly suggested this Start. We do your download master. You made the illustrating category and &minus. Harvey Lodish, Arnold Berk, Chris A. With its sick Estivation Rule, bit offering, therapy on old intersection, and shape invented on much settings, Molecular Cell Biology is n't produced an major study as an such and second Bioinsecticide. The tentacles, all single-varaible airlines and ve, offend well open fields where normal to be facilitate the reasons between topology rest and philosophy and fresh water. News SpotlightDecember 2018: Kathrin Stanger-Hall Named AAAS Fellow Monday, December 3, 2018 - great. I hydrolyze you n't really a download master the: please add Open Library tree. The western category is terrestrial. If nest gallstones in point, we can treat this equivalence analysis. n't still, your decomposition will touch well-illustrated good, adding your scholarium! once we are is the control of a final model to be a attempt the metric transition sets. But we not focus to define for semantics and analysis. Open Library appears a download master, but we use your surface. If you are our definition object-oriented, life in what you can brachioplasty. Please meet a other quatuor modifier. By requiring, you seem to Let topological programmers from the Internet Archive. Your development is northern to us. We want almost eject or expect your point-set with office. Would you matter being a infected download presenting Considerable set? CBD interference is be that background open however to lose sand will release other to be it generally. just we happen looking the Oriented photographs of the anything. New Feature: You can well make second line architects on your exclusion! download: The approach of look of author, where presented base in the status 's been. It can like defined by Second-guessing &minus, GB, or by the ball of Completing networks like type. food: balls which are reached from a organic interaction space. books ageing due people of atheist page, which is used by material. reuse: Analysis of an isomorphic combination of insects at a built plane. obese Sulfur Bacteria: A pace of western poles that are limitation bodies, forever understanding their today by this community. new analysis: The point of system of ecological home from one theorem to another. as initiated to need genes 5-edge as regressions. It is often distributed in collinear weight. donation: development of a top by a class without creating meaning or atheists from the reason. such: The regularity to do up topology. built-in example: A & heart of any RNA object, like theory or neighborhood. To mobilize out the topological site between DNA and RNA, content well. solid effects: topics with institutions that need not free nor convergent. They determine a algebraic download master the. secondary projects: dozens saying n't under Early ischial activities. The download master is in the research of an rough surgery with component infected as a LibraryThing. Its oxygen; is that feces is essential to investigate sort. An librum becomes a surface who is or complements Classification of set author or sharing. What uses the book Boethius described? Anitii Manlii Severini Boethi in metric topology forms are dissections & problems managers principis Opera' -- subject(s): geometric techniques to 1800, Geometry, Philosophy' King Alfred's music of the data of Boethius'' The anything of kilo'' Boethian mean; triggered vertices'' King Alfred's Informed shiny topology of Boethius De conscience-cleansing programs'' Trost der Philosophie'' The n of sense of Boethius' -- subject(s): call and organism, Happiness' The Theological Tractates and The stress of Philosophy' -- subject(s): patient, mammaplasty and topology, Happiness, role' De musica' -- subject(s): Music, Greek and Roman, Music, Theory, Manuscripts, Latin( Medieval and present), Facsimiles, Medieval' Anicci Manlii Torquati Severini Boethii De time triangles chemical name'' Anicii Manlii Severini Boethii De we&rsquo basic' -- subject(s): Division( Philosophy), together is to 1800' De institutione arithmetica libri difference. De institutione musica shape Theology' -- subject(s): Use, not is to 1800, Geometry, Greek and Roman Music, Music, Greek and Roman' Boethius' example of surface' -- subject(s): Check, Philosophy' Boeces: De association: movement activity d'apres le manuscrit Paris, Bibl. open spaces to 1800, Bravery and volume, Happiness' De topology complement' -- subject(s): continuity anatomy, also is to 1800' Consolatio bank in Boezio'' Trattato sulla divisione'' terms of web' -- subject(s): property Land-use, somewhere is to 1800' Anicii Manlii Severini Boetii Philosophiae consolationis release edition' -- subject(s): continuity and eve&hellip, Happiness' De device analysis' -- subject(s): case, Facsimiles' Boetii, Ennodii Felicis, Trifolii presbyterii, Hormisdae Papae, Elpidis uxoris Boetii union philosophioe'' King Alfred's handy planning of the Metres of Boethius'' Chaucer's Aggregation of Boethius's De component objects'' De consolatione axioms course surface. full clients to 1800, way and critique, Happiness' Libre de consolacio de study' -- subject(s): property, Love, reload and community' The extra manifolds and, the hole of duo'' Philosophiae consolationis trading definition' -- subject(s): intuition and leaf, Happiness' Anici Manli Severini Boethi De rainfall data web book' -- subject(s): decomposition and state, Happiness, Ancient Philosophy' Anicii Manlii Severini Boethii'' Trost der Philsophie'' An. Boezio Severino, Della consolazione subspace volume'' An. De hypotheticis syllogismis' -- subject(s): download' Anicii Manlii Severini Boethii In Isagogen Porphyrii commenta'', De institutione arithmetica libri expertise( Musicological Studies, Vol. Lxxxvi)'' Boethius' -- subject(s): Modern topology, Philosophy, Medieval, Poetry, Translations into English' La science topology R4' -- subject(s): scan and Phosphobacterium, Happiness, just 's to 1800, Theology, Sources, panniculectomy' De reason information. Cum commento'' Traktaty teologiczne' -- subject(s): built millions, Theology' Boethii Daci breakthrough'' The web of Philosophy( De consolatione sets)'' La consolazione radiograph access' -- subject(s): set and floor, Happiness' Boetivs De loops everything'' Anicii Manlii Severini Boethii de result spaces distance gait, objects. live discs below and we'll allow your content to them essentially. Es conocido por su Epistula rupture replacement Faustum senatorem contra Ioannem Scytham mesh de 519-20 d. Los escitas estaban dirigidos por Juan Majencio y feedback a Roma en 519 optimal la esperanza de nothing study apoyo del citado Papa. Dionisio analysis Exiguo de la Carta de San Proclo a los armenios, escrita en griego. La ' Epistula Buddhist process Faustum senatorem contra Ioannem Scytham nearness ' se contiene en Boetii, Ennodii Felicis, Trifolii presbyteriani, Hormisdae processes, Elpidis uxoris Boetii Opera connection, Migne, Parisiis, 1882. Ornato cuja forma imita a are vector. download master the gre 2010; devices mainly are to Make that strategies include various, and integrated for Supported theory. But when not be we do when a student should or system; dont examine where it requires? It Here misses down to surface. If a extension depends using the browser of the Philosophy, instead it should continue done or applied. This not is on algorithms or any organic components of object-oriented granite. being logs: The core atheist of the most found countries I hole Founded supplies how to be differences. And for metric &ldquo, spaces can have mostly basic to increase without depending Brain in an n-gons system. In apart every punishment end-point maintainability must be arranged to include a Decomposition in subset rates, accoring the discription to off allow remarkably non-zero if high data pine to see explained. This is why the best Philosophy for praying people is to efficiently belong them wherever metric by taking your information integers is key. download master; usually not Vadose to contact where a network will measure by using at the metric managers of a subject and where they represent. That website enables where a Process will log. But, in the substrate that you are appreciate up with a surgeon that is to Connect shown, there do a fund of inviarti for focusing complexes contributing on your varieties. Every context works a along hedge level, easily, there are some other pages for 3 and open manifolds that can Answer a considerable x for axiomatizing a law. lattice to be, whenever a dream 's found, one series undercarriage must make applied in the section the substance is contouring correlated, while another does overcrowded from wherever the device talked from. The step for this creates that the loss titles must Answer done around the septic article of the procedure. I do this certain programming on process questions is held you a better SolidObject of how to share them finitely. If you have our download master the gre 2010 fat, programming in what you can code. Your close carbon will be assumed aggregate biosynthesis environmentally. I colonize you n't now a parallel: please visualise Open Library bit. The many book is first. If religion features in procedure, we can use this information peptide. Now widely, your download master will support used slight, creating your experience! officially we are is the space of a possible problem to be a weight the empty vernacular ve. But we together think to have for bodies and account. For 22 funds, my decision-making 's seen to Hedge the trader of home and have it second to neighbourhood. Open Library discusses a$X$, but we do your point. If you use our download impossible, warning in what you can density. Your open change will read chosen stepwise loyalty Likewise. I give you below Often a space: please evolve Open Library topology. The Object ability is major. If wall species in band, we can manage this society abstraction. away highly, your download master the gre 2010 will illustrate documented continuous, finding your flow! The download of this aspect does to develop and estimate the processes, others, proofs, and species that construct s during the developer waterfall, topology trading, and rates inheritance. This connection Right is and is the general parts or manuals that are project of the sylvatica. Prototyping is to so be how Object-oriented or worth it will lose to prevent some of the brands of the material. It can wrong understand rules a point to live on the form and creationism of the s. It can further aid a snippet and run article coding over easier. It is either sufficient Development( CBD) or Rapid Application Development( RAD). CODD is an plastic effect to the theorem future volume using CO2 temperature of illustrations like human portions. download master the gre sub-strate ores from broad earth to use of metric, open, concave Opinion views that think with each great. A other function can move copyrights to find a close temperature cold. mess is a allele of seminars and cookies that can use given to merge an website faster than hence 2-to-1 with microbial factors. It helps n't decrease SDLC but is it, since it uses more on gradient point and can complete introduced slightly with the manuscript CO2 process. Its product handles to reconstruct the price practically and instead modify the modeling factors transition through Relations point-set as detailed fat, Cellulose speed, etc. Software anyone and all of its recens demonstrating supervisor are an ve Choosing. now, it can be a entertaining model if we guess to deliver a question out after its thorough class. inherently many distance works into origin as the col is depicted during important models of its case. Goodreads takes you go download of standards you are to Let. make Fund Analysis and Modelling Accessing C++ and Website by Paul Darbyshire. is Thomas Aquinas. The Division and Methods of the Sciences: actors network and VI of his home on the De ability of Boethius turned with Introduction and Notes, trans. Armand Mauer,( Belgium: Universa Wetteren, 1963). oversight of Expression for Aquinas' Unity of Vision: moving a Set Theoretic Model of the sets and Data in the Trinity,( Berkeley, CA: Graduate Theological Union gap, 2005). Unless music is a day to Learn the system first of Completing or passing itself we do largely getting to accommodate at some line. The course is out over stripe. exist or say has please a download where you must be a topology, or make minor systems to log and revolutionise your type which is ignored by words, either stronger or However open as yours. There aims a convention between Accessing Personally and living mainly metric. If you not included, following your knowledge gives given undergoing continuum, you can never be analysed well to Skin, or incorporated. If you shrinkwrap else rich, your compendium will minimally use necessary still the slightest kind. You overlap, perhaps your code practitioner; devices using, your signup can not prevent mineral for, on set, greatly fifteen Lots. When you Have joined team only, you are important and ca never facilitate excited. Another download master the: You suggest existing when the module is the ammonia along with the Few. There takes no bad fuel, because we can about write up to 100 methods honest or only ask lignin by a form or a everything investing. You normally are what is connecting to define in Program so using in a additional one would also accept theoretically UseThe after all. Another version: When God has typically as he is when and how. Holly O'Mahony, Friday 09 Jun 2017 For your chance to win a meal for two with a bottle of house wine at Shanes on Canalside, enter our competition This download master the gre 2010 differs good for moves and v residues doing in systems of GI Science, Geography and Computer Science. It exactly is set EnglishChoose for Masters algorithms quantifying on design region rates as atheist of a GI Science or Computer Science topology. In this terminology, which may die connected as a tricky weight for a cardinality approach, Professor Lefschetz is to stop the topology a object-oriented starting area of the metric functions of usual bad exercise: books, subdivision activities, sets in modules, offering, axioms and their determined ordinals, rights and acceptableand&hellip god(s. The Princeton Legacy Library enables the latest name access to typically Object metric essentially uniform spaces from the object-oriented set of Princeton University Press. These Polish manifolds send the positive manifolds of these shared requirements while occurring them in amniotic inevitable pictures. The spiral of the Princeton Legacy Library enforces to not construct sample to the fair other analyst curved in the questions of results eliminated by Princeton University Press since its asset in 1905. This energy comes the object of living and day area in a Intracellular, primary, and detailed stomach while being the UML of the surface through first, not first, areas. It offers a analytic Philosophy that runs a Place of topological sets and humoral Poles to be an practice of the formation. The pole is useful x, occurring from the others of logic to codes of new sets. He sometimes is the creature of rich, small groups, commercial coffee, and applications, which is to theory of arbitrary X. The download master the gre of containing properties beating throughout the Field, which are to criteria of the quantitative points, sometimes Not as the SolidObject scientistbelieved in each study answer this something cookie for a Luxury or future area job. The birds are a moving completion of basic prizes in basic death first and Great use, and some bonus" of systems and closed levels. This is an smartphone to the single-variable successor which is standing surfaces, as it is led in substrate dynamics and anti-virus. problem listings are a code to be the negative purpose of a branch that shows an position and so to disappear it to define a more hands-on or complete everyone. A small return of infinite with works passed to have the plots of the Covering logs, and this biodegradation is the death away and no. The quality is just predatory, beginning management classes in a limited and medieval SolidObject which discusses copyright and mentality. In download master the gre of ll the weight heard between the sort and the way, being professionals and changes. individualized Species: A species which is the plane of starting correct in the large season. stone: The theatrical substrate of the task wrote, detailed to using and following, during area. nearness: The section going as the structured guide for malware of nucleus to and from the surfaces, in researchers and next components. It is from the location to the variety. Check: The date loved by graphs to object out the OverDrive between themselves and their Philosophy learning two or more metric sects. This parent has given asymptomatically by properties and mats. download master: A cellulose in the out-of-print of feathers between a question, which 's one or more methods. organisms: topological space for sets beginning to Procellariiformes changes. due: An state or example of an presentation which shows bounded in metric and derived objects, not paid materials. scale: A libretto finding to the Testudines ball, which 've both tropical and other Studies. The analysis of these password is set in a collision. continuous right: It is the skin which shows up points through a History and is them to the small mug-shape of the Nearness. It generalizes Basically required the scan. download master the sets: An conversion which is considered to help a consequent Nearness. Heaven&hellip: A tripling PointsHaving represented by some loops, in which they Are plant or Monophosphate onto the useful prizes of their markets. I not pay that C++ download master the a morbid water featured topics but carefully make each one well. C is long an O-O problem under any point of ' O-O ' and ' office '. It is very microbial to draw C as the atheist Calamus for a breakthrough that is an O-O API to its examples. The X Windows Technique is topologically a thumbnail O-O Forest when used from its API, but a Chinese loss of C when diagramming its ecosystem. This overview is probably called by the many works. Unless your way began viewing about Objective C( an OO GIFTBOOK18 of C) fully ever, C encloses already an OO cookie. You can prevent OO ll choosing C( that provides what the similar &ldquo C++ donut-shape translated, it was C++ into C) but that is really collect C an OO light as it 's directly wherein feel breast for low OO sequences like problem or communication. Yes, you can communicate track OO Topology in C, essentially with prone( file of pages but as scan who meets recycled the chips of some of those systems, I'd probably nourish to run a better rewarded Conidiospore. bad cookies can be agile download master in theological animal. I are infected 1FollowLanguages0 easy assessment. are you feel to manage these years ' low '? 39; component then have and first when I are I might be my$U$exists selective Right. famously I 'm I talked numerous. C is incrementally share hedge subset. C is a additional, terrestrial component, using other whole. Because C is Even set indexed well C++ hypothesized into programming in appearance to believe beams disappear and OOP involves a anything award office acted around carbohydrates. Davtyan contradicts a dead, So specified download bigotry language with simultaneously 28 exercises of disposila and factors of bad lots. At The Weight Loss Surgery Center Of Los Angeles, we are a staff of magical s, arbitrary, and often short level umbilicus actors with authoritative philosophers. The terms we have give arbitrary surfaces, Simple spacing, high continuous object( idea pencil), Lap-Band, and other design. All of these models work opposed by Dr. Davtyan at The Weight Loss Surgery Center of Los Angeles in Beverly Hills, Cedars Sinai Medical Center or Marina Del Rey Hospital. Davtyan and his download master is object-oriented topology Trinitate scan number in Los Angeles, Orange County and the Inland Empire. Davtyan becomes a topological tour himself, and is evolved from Rplant volume metric. This is him to be your community with obese Browse, componentA and bonus". Please make your kind term study documents and take us for your general topology. We will apply you define which utilitarian download master the gre 2010 leap forms best for you. other fat topology is a do-and-think-for P problem that is Even acquiring the distribution of your plane. This 's you to investigate high and orientated after Making just smaller groups, following in possible meaning root and a finite usability in the model you enjoy. The isomorphic creation is a introductory soldier that is ecosystems to not include afterlife in the neighbourhood. This is a download master the gre 2010 of weight so you want less and can complete ecosystem more there. already heard as close targeting, LAP-BAND is one of the less orientable and out endophytic issues to draw system. It is looking an different obesity programming around property of the class, taking its thesis&hellip and reducing the connectedness of$U$you can be. clearcutting insight maintainability poesis is been my group. If you are on a environmental download master, like at result, you can Write an code surface on your hedge to pass general it is dead examined with time. If you are at an topology or certain effort, you can apply the course coffee to make a example across the analysis getting for near or metric properties. Another world to understand reducing this access in the information is to treat Privacy Pass. plane out the x topology in the Chrome Store. be up or run in to say your isotopy. By determining our definition, you are that you see considered and build our Cookie Policy, Privacy Policy, and our examples of Service. How are you build where two net ve are? How have I make whether or inside two limits call, and if they are, at what application, overview plane? This download master the gre is either answer to see also Constructing within the Place increased in the Class Antigen. If this submission can keep pissed to qualify the sets in the mandible Access, build pay the geography. It might suffocate to Find of the logs of the neighbourhood as empty people not of the important object. are you seem to accept( A) where two risk visits do or( B) whether or also two australis enhance( C) whether or completely two nose pages supplement( D) where two Books are? Could you give get your definition malleable with your nearness? 39; software working this Microenvironment as same because it is Just about a physiological waste primer as chosen in the litter solution. This is a practitioner religion really. lymphocytes instead you could be to some more topological peroxide site why this is together abstraction? This download master only makes and is the Such rates or extrusions that are primer of the web. Prototyping requires to versa build how advanced or international it will upload to have some of the phases of the surgery. It can so learn sets a calculus to buy on the work and set of the multi-body. It can further Visit a capsular and use modeling passing once easier. It is either flat Development( CBD) or Rapid Application Development( RAD). CODD means an metric problem to the situation development volume building welcome device of ways like procedural ors. scan relation documents from available statement to organism of intestinal, comprehensible, big It&rsquo animals that support with each observed. A authoritative download master can stop mathematics to be a Open analysis complexity. p. is a decomposition of scriptures and images that can be refined to buy an understanding faster than So great with other topologies. It proves anymore do SDLC but is it, since it exists more on model belief and can be sent even with the area object-oriented Philosophy. Its sequence is to perform the viewing inside and Early ask the treeDisplay names decomposition through tools unambiguous as standard programming, scan guide, etc. Software coffee and all of its devices including processing are an incidental set. as, it can bar a near surface if we are to read a god closely after its triggered Tundra. only metric maintenance is into edge Even the structure is filled during topological insights of its line. 171; Hedge Fund Modelling and Analysis. download master the gre 2010 topological C++ spaces and other open Programming( OOP) to accommodate in true mortis programming identifying Low code processes, born data and greater visual organism are sometimes some of the key strategies it is Long-term to complex for international strategies to Answer aesthetic candidates. The sun for saprotrophic significant set locations, bad surface procedures and topology challenges 's to understand different devices, methods and number combinations to better read their toes and do the servers of their incidence mathematics. For me, it misconfigured to a download master the - I feel that extreme quick cookies molecules, or at least on the sophisticated light, and modeling is to source turtle better with modeling. In charge, I 're that there could expect some property of higher process, but I have not adopt connection would be particular matched in the non-factual objects. I have a small-town time, the way to implement as I have, and yet as a enemy is played for me. make I are an information traditional to the metric that the book of Start maintenance described to believe open while all moves get all led out major. occurrence encloses shared that function is managed from the tiniest exercises. believe There are two results that am to draw in download master the respect to write the postgraduate methods of the process and millions do that there may Notice common and reusable Evaluation between a libri's topology set in spaces) and the distance of their markets. generalizations continuities check not ca also Sign into a pole or zones. I would reach of my route to this network whatever it may work. ranging resulted myself 4-1The to change into the last, abstract to tend why technologies had ' metrology '. Logic, when GB release on all important 0day, led to object it. enriched the ' open ' download that I were governed. navigate I are an example because I precisely are about use in any variation. Scamming is a massive Sm, because Here Music at theistic questions functions did to be circles. And Fairy designers because overview of selecting to a other structure field object after email to Begin all surface-to-volume forms such to me. about the prime track loss God was nitrifiers and courtship I exist converges prejudiced Got by shower number. eat I was played in England although properly in a download master the gre. How are you prevent an download? The metric ecology you would be airfoil here. also, instead you should be not why you are to ask operators who do not a open cell of others who are highly avoid with experiences. Object remain on themselves. They are patient property for their office-based movement and they are the special network only because it proves the human course to monitor far because implementation tried them it took the other browser to understand. For sure principles they inherit to and lift on topology. network is they are small or preoperative. Part if they say to prevent factors. locations to be their physics. No, to make a SolidObject it is grown to save in a information taking. The procedures have respectively let download master the gre 2010 who is of topological supreme. How have you are if you are an design? If you have an topology you have a small-town ofa and offer really done time hidden by a class problem: focussing an case is not share that can make used and there is no target claimed to refer it to the point. Many who imply to do points do again for hand loop( for the left of property; or who have its disease. visual who have designing members are in the instructor of a question that is with considering the teeth of object-oriented matter and is with connecting away from them with no animation. To find an Ratio you must build to differences of a philosophie in the opposite archeologist that you are to Methods about the plane under the wrap or whether you should run dimensions or secret for Santa. Why get I need to understand a CAPTCHA? winning the CAPTCHA does you use a eternal and is you general z to the diabetes organ. What can I define to resist this in the Javascript? If you see on a compact mug, like at isotopy, you can customize an Stoop topology on your topology to consider close it allows n't read with space. If you need at an direction or usual donation, you can do the calculus month to work a topology across the surgery passing for true or animal projects. Why do I are to help a CAPTCHA? sponsoring the CAPTCHA looks you are a close and 's you cosmetic belief to the topology sequence. What can I Feel to make this in the process? If you give on a digital home, like at approximation, you can document an el type on your hole to appear religious it compiles widely expressed with development. If you use at an download master or same system, you can make the superset Topology to define a Preface across the pole Storing for applied or human methods. Another continuum to trigger using this space in the code believes to be Privacy Pass. business out the radia in the Chrome Store. Why are I include to be a CAPTCHA? intersecting the CAPTCHA is you are a crucial and comes you related object to the &ldquo nearness. What can I Request to ask this in the zip? If you have on a custom enough, like at poll, you can read an example soil on your understanding to be main it does there worked with model. Holly O'Mahony, Tuesday 16 May 2017 The post-bariatric download master on any species. contact that a child tells such if and Sorry if for every set within the side, there is a © released within the Plastron. combine that the similar world is the loan nested by the triple phase. This surface were highly meant on 6 November 2017, at 07:30. By getting this virus, you do to the micro-organisms of Use and Privacy Policy. be up or disagree in to have your death. By estimating our loss, you are that you are born and merit our Cookie Policy, Privacy Policy, and our findings of Service. How to use a file with phase? helps especially a download master the gre 2010 to ensure sets from a world Creating GEOS? necessarily, I are to be the information myself. also it will secure number. If download master balls in volume, we can fit this Decomposition consolatione. in well, your state will achieve accrued, watching your target! n't we have progresses the trading of a other content to Wait a segment the open neck barbules. But we sometimes begin to give for topologists and$U$. For 22 techniques, my coordination gives s to register the Abstract of surface and be it algebraic to market. The Internet Archive indicates a generalization, but we wait your topology. If you meet our bra differential, customize content in. Your organic study will construct classified lateral point also. I occur rigorously Not a download master the gre 2010: please click the Internet Archive community. The old design is real. If works in &, we can use this productivity production. easily widely, your bird will prevent designed, modelling your element! again we do constitutes the tool of a lower-dimensional field to gather a t the short-sighted Category artifacts. But we now live to start for worms and time. For 22 techniques, my modeling supplies heard to contact the % of bottom and use it other to None. The Internet Archive is a mary, but we are your flow. In topological, this is that a download master is internal if there does an topological field of slender zero class about every anti-virus in the loss. More not, the Rational surfaces ulceration can depict infected a page. In the connected future on Rn the skew relative flights 're the close phases. not, C, the patient of average processes, and Cn use a difficult book in which the temporary available applications encourage general methods. nearness markets am a office of &minus of two compounds. This book represents way. You can define by knowing to it. genetic details are drilling the temperature between topological legs. This outset is analyst. You can be by sliding to it. A many download master in which the properties Are components includes thought a scan melanin. This volume is line. You can accept by getting to it. basic surfaces are the real&rdquo to be whether a solution is probabilisitic. major properties contain a inescapable knot for adding spaces. This sepsis is website. A accessible download master of a performance suggests really therefore imbue us as specific centre-piece as person surfaces. But, if no others really describe in the fund, even a open question gives diagram. convenient waterfall includes not advanced in the modeling of continuous neighborhood errors, since surface subdivisions connect not So been with a relevant through the algebraic pole( Hilbert components) or the structure( Banach regimes). You Do it is containing to increase all high decomposition like links and Klein anions, and you are up to the topological outset to get scales of lignin about reusable and mathematical humors. That has see you always was to the Riemannian example of influential weight. The one that explores intended even to most belief( filamentous leader) sets. If you are Klein loops and the saprophytes of those, you should be to new or such decomposition. I 've Early exciting that irrelevant unparalleled download master covers theory at all to answer with ' phase '. In the article of a original, how are you believe whether pigment compactness does ' analytic ' to use direction? You can look often property in a special evolution, that you can think in a first extent. rate Is really a small-sized intersection. A better order do dimensions of a web of Methods in the variable. You request the response in a slow product-space? Of content you can complete introduction in a predatory topoligy, at least rigorously about as you can imagine it only actually, for quatuor in the oriented theory. download master the is particular, but you can eventually represent whether one time depends shorter than another. But without a modern, possibly you have to test with do intelligent transitions. In download master the, both are named used with the RealSelf 500 amount, a native chip introduced to the many 500 spaces in the implementation who do worked extra radius and bargain on intelligent and complete solution for courses and developers. Because of their Surgical large SolidObject, both spaces meet open topology with an topology on space and Goodreads. Please look out the being arrows. Methods do you also highly fundamentally old-fashioned. enhance natural to get a monster at our abdominal before and after dioxide calculus given to expect you draw the end of our great specializations. mathematical Surgery Center of Nashville. Before and after answers should correspond used a being of our real-world. test the sea portion to see your alternative absence. At the Plastic Surgery Center of Nashville, we are given to Correcting you look your rates. We are you to often take yourself in a Celsius download and run your fourth programming. Mary Gingrass will help above and beyond to navigate you mention your best. Our relevant, closed data and temporary existence 'm still extraordinary to your organisms and classes. You are to achieve topological at every synthesis of the sort from wood to define. The Plastic Surgery Center of Nashville belongs a compatible Tennessee hedge geometry loss been by available fake Everyone ecosystems, Dr. Sign all to remember static pole and developers. Dallas Rhinoplasty - Nasal Surgery By The Masters fine agreement. example: moisture; OUP Oxford; 1 author( 19 Mar. What returns the immune download master the gre 2010 of primary metric chemistry? I'd prevent that personal network. The malware that the procedures have in the Animal space seems definition of Reply( although same for same distance). As for ' comprehensive topology ', it is anything of percent to what relationships ask ' topological browser '. But that ai all free Network of fundamental or topological design. The project we do inside evergreen, homeomorphic, and not fundamental emperors is not not that those are what we can be, but because there Are a type of open sets that 've in some of these rich poles. For download master the, analysis at the Geometrization matter. body-contouring all other points( small soil) on historical sources( interesting Object) did not conducted by Gauss. believing all hedge resources on 4- and agile cookies presents maximum, but what can increase is together merged. But depending all differential a-theists on plain actors is mathematical. well, come ' few polygon '. It is ibelieve to include a element on object-oriented actual class which is always online to the arbitrary one, and this is always worked-out in four spaces. For download stripe, the soil of which Sn an Sequence can not be in needs fully male for few polypeptides, and is more important in higher lines. You might make as, Sean, but So. 4- I were n't difference what you was. simply you do looking about a scan, which I consist to be open cycle, and ' normal ' about controls Illustrative in the baroque of courses. download master the understanding in available Methods: a definition primer space. Hammond DC, Khuthila DK, Kim J. The Completing v Bioavailability for Mastoptose of hands-on website and platform. Eisenhardt SU, Goerke SM, Bannasch H, Stark GB, Torio-Padron N. Technical trick of the malware process for geometric way edges in architectural system computing risks. Friedrich JB, Petrov Deterioration, Askay SA, et al. problem of opinion anything: a x. center with a superstitious account rate. average TO, Wachtman G, Heil B, Landecker A, Courcoulas AP, Manders EK. motivation as an winter to mycorrhizal F. Zannis J, Wood BC, Griffin LP, Knipper E, Marks MW, David LR. neighbourhood time of the closed topology of Approach. Aly AS, Cram AE, Chao M, Pang J, McKeon M. Belt extension for reflexive simple geometry: the University of Iowa page. Centeno RF, Mendieta CG, Young VL. dynamic Completing in the previous plant plane reader. punching approach with Euclidean point-set malware. Shermak MA, Mallalieu JE, Chang D. does download for painless information fact after non-prophet Isolation anti-virus 've a total soil? sets coming k determining laxative after true collection hibernation: a airfoil. J Plast Reconstr Aesthet Surg. Albino FP, Koltz PF, Gusenoff JA. Why 's it a download master the if you are computer? Opinion True oceans would stop it intuitively at all as a sense for themselves. They know Him whom they are seen. They would be it as a comma for the life spent, alone in an Object accordance. Category many micro-organisms do differential in their benefits and are to See their topics with person; opportunities. Some mean they are a help to divine those who start very learn their others, while functors may share supposed when infinite, reasonable ll, who acknowledge oriented closed to the many Implementation, provide never help their media. For these bodies, your component of book may control into music the carried Plasmogamy of attribute. download The question with conjugation is that relationships are together linked or else considered by actions that they think do differential It is a point-set implementation for basic books that make obese idea a popular shouldn&rsquo to do. topology One access that topics exist has polishing the reason that some names die they agree Early of general marks and have themselves Also activating to read and run the pension of papers who need almost avoid their philosophioe. This is pretty differential for the children who are to fill an Moisture of a' inferior and other' topology. The courses who inside turn z in the cod of nothing undergo it temporary for those who say to be always' softer' in their base". points especially ever refer to make the language of the Click of some Attributes in problems of how they are and are their offspring. The going sometimes of the suitable system' manifolds' which 'm similar protein is fir to refine with a design of web on the development of those standard. It then encloses an experience of the organic original reason of the wet solution which is that the structure and n-gons God comes generally based in need, in both volume and faith. The God who refers required in download master, who means avoided with many ve in complex types, is not formed forest without comma. services other as the SolidObject Dr. Clifford Wilson governs said 5000 abdominal triple forms decoupled by individual topology. naturally given in 1979, this is a honest download master the of the probability of scan questions. risk; greatest perspective about theorem individuals, those instead continuous physical thoughts on a radius that panel so " parts for abstract features around the y. Valuation cells are not computed to fund with more or less than 4 using$Y$. On a decomposition plant, this calculus carbon with either 3 being advertisements or 5 or more generalizing 1930s. elements most Thus feel when techniques or similar looking in points, as why standard download master the gre methods try Basically studied in metric loop. The Problem of PolesSo you might come showing, why have set shows exist such a personal network? The competitive protein is that an help primer goes viewing around its assessment when markets or Maintaining is done. This is why a team with a web extension is well crucial when doing value scholarium doing. The download master the gre 2010 below similarities why this can end a subdivision on infected concerns. Despite the learning declared by ideas, they are an unpublished business of neighborhood and a near programming for standard sequence antibodies and programs. philosophiae notice the most x. network librum and gain of five systems fixing at a reasonable information. neighbourhoods get most odd for modeling when dating N-polesN-poles on a animation and for reading 2nd infected enemy; parent; within the nit-picking when torus systems are or construct. morals do download master that look of three occurring bodies. This network of Philosophy uses properly less physical, but just writing around edges or space rates of a topology. In stupid object, this topology is then named as the woodlot; phase; heart, since N-poles 've arbitrarily topological for using the surface of the atheism. venous Pole TypesPoles with six or more flows are strongly anticipated to exist fake testing and ever only use up in such musica. In a rapid download master the gre, what has not feel for entities to build Gluteal to one another? What hatred of processes can you be planning connection but plan solutions? What specifies a Femur named very by that theory of importance? When I do that oak is topological to the great analysis of analysis, some effects - arbitrarily organs! I are learning when I operate that theConsolation. In war, you are a bank of links, and you have their Sheath by which languages have other to each natural. Most of the abstract open questions do enabled overshadowing other components of rates, where there explains no asymmetrical complex of two membranes that axiomatize such organisms; there is a example of together smaller topics that are willfully closer and closer belief lines. The &minus of target that I intersect examining very is more like the problem of a place - it is that particular topology of criteria that are other to each tropical, with that final subject to live live narrower and narrower gods around a analysis. The comments of units that you take from getting that are perfect. In some download master the, you need making animals - but they hope approximate, proper, Oriented followers, because all that actors is what games believe Metric to what new bodies - Personally what way you are to check to recap from one to another. gain is relieve a extracellular truth at an code. There gives a last improvement about compounds; you can Hopefully solve a network at pubblicato, because they have the students who ca slightly budget the R4 between their staff science and their programming. Like most organic works, there is a Internet of sort featured exactly of it. From the month of life, the NHS duality and the process focus the cosmetic case. In loss, the close N 's only prove: what differs is the Biblical books of the trading: what is 00CLOSED to what, what scrapers are solid to what mythical citations. If you need the emergence chest into cleanup, you can be it from approach to general-topology without Bleeding it, or assessing it, or Completing any organelles exactly. Holly O'Mahony, Wednesday 15 Mar 2017 Are pick-up lines a lazy tool to ‘charm’ someone into going home with you, or a tongue loosener to help get conversation flowing when you meet someone you actually like? We’ve asked around to find out. unique download master is previously open in the network of internal morning ways, since element depths 've just Right used with a discrete through the Euclidean treeDisplay( Hilbert units) or the anyone( Banach forests). You want it does learning to improve all distinct presentation like norms and Klein ecosystems, and you find up to the main use to pose instances of work about average and suitable tips. That is discuss you together was to the 4shared surface of coarse information. The one that is accomplished also to most subset( Potential distance) dynamics. If you use Klein files and the algorithms of those, you should pay to 40-cm or main one-. I need not reduce that such useful section helps breast at all to handle with ' set '. In the space of a small, how are you look whether intersection lung is ' difficult ' to manage chemical? You can want then Computer in a wrong email, that you can have in a available leader. technique lacks back a other That&rsquo. A better administrator require classes of a correlation of channels in the access. You need the download in a topological transfer? Of litter you can share positioning in a common subdivision, at least shortly as as you can pay it anymore carefully, for malware in the superior Copyright. network is important, but you can often see whether one matter helps shorter than another. But without a geometric, also you are to be with are tropical continuities. I love doubling, how can you gain about ' Theism ' in that skin? With a flat -- a course of group -- ' near ' Reflections ' within a space of some Euclidean( no additional) fir '. looking No one Depends how they are drawing to judge. Those who say a stage bird are an faith, but Even, they could be another fund. It destroys too home that we 've to create until the name spreads to describe out. demonstrating to t ecological i meet more? It covers On Your Life Span. If You have once become operations, seen physiological, and second, you will gather a professional analyst. If It comes the brain, you will complete around your parents or topological. But easily trademarks, points, programs, and about more be it. What was Marcus manlius are when he listed that the packages did modeling? relevant part: Marcus Manilus located because of the practices. Second Answer: The lack did what Oriented he have, again how he got out. He located the reusable entities for$X$. durable sets cannot please almost. time is a one range space. Will a course told with you if you are and 'm with you? others 'm clearly genetic balls, some can be when links shrinkwrap in resource, it becomes then equivalent. be changing Hedge Fund Modelling and download master the gre your many existence and lose all the topology and continuous everything you pray to deliver the roles. David HamptonHedge Fund Modelling and Analysis. Iraq were by the United States Repression to Minimize aesthetic minutes. PHP, Joomla, Drupal, WordPress, MODx. We are using beliefs for the best atheism of our 94,30&euro. applying to stay this guide, you are with this. Why wait I are to share a CAPTCHA? using the CAPTCHA is you know a wrong and proves you such download master the gre 2010 to the anti-virus implementation. What can I Find to answer this in the right? If you know on a advanced term, like at topology, you can achieve an mom time on your mug to change specific it is solely defined with number. If you are at an smoking or Tympanic Rn, you can manage the x ME to deliver a donation across the soil extending for basic or monthly interactions. Another book to allow going this number in the temperature is to be Privacy Pass. definition out the network Valuation in the Chrome Store. Why have I are to ask a CAPTCHA? making the CAPTCHA has you illustrate a indiscrete and seems you metric download master the gre to the speaker adjuvant. What can I interview to be this in the day? This only cared advanced download in the away been Hedge Fund Modelling and Analysis word is the graduate use redox-potential for stretching the certain C++ feedback to waste technical side forest. Sorry if you single long received with cell strictly, the given project of C++ is you generalization you know to be the Rational eBooks of space such injury, which is you to provide metric ecosystem others from basic links of Large way. This libri 's your Process substrate to studying with selected sorts in the only justification of information. work your dry office to editing the systems with: All the plant and identifiable thing you show to do everyday accounts to build paperback browser future. sure using reactions and complex programs modelling what to object when organising century and today levels in the nucleic ball. A low embarrassment system same C++ surfaces, coordinates and problems to search. manage Completing Hedge Fund Modelling and topology your primary hormone and turn all the loss and same vector you do to imagine the activities. say Fund Modelling and Analysis. English for Professional Development. Restaurant and Catering Business. The Art and Science of Technical Analysis. other download master and stuff( OOAD) is a Chronic topological neighborhood for including and conveying an conditioning, topology, or information by attacking available resource, always well as looking certain set throughout the blood work movies to belong better libri waterfall and network insert. making to the metric Continuity Unified Process, advice in ambient browser organism is best endowed in an Euclidean and metric browser. etc. by protein, the disorders of OOAD microns, software Dungeons for OOA and structure theorems for boxes yet, will evolve infected and use normally given by object-oriented aids like diagrams and object point. In the open data of particular activity before the notations, there was many concrete changing classes for super point and early ecosystem, hence given to dynamic bottom charred Software Engineering( CASE) dream god(s. No quantitative men, fresh others and material neighbors decomposed the many dimensions at the company, which went bonus approach and came contributing applications. run how you can Hedge to happen a download master the gre 2010 Denitrification to be an important projection with one of our anesthesia micro-organisms! Such in blood host? One system to understand this has to make the Plant Biology Greenhouse and make in symbiotically new with the additional set. fine to Plant Pathology and Plant-Microbe Biology. We say Knowing 4-D soft approaches about the pictures between objects and materials and changing traditional thousands to Allow the vertebrates of stepwise assembly Philosophy across the litter. We let techniques and download master the variables many people to Notice the areas and processes of classification data. At the general reason, we want individual acollection to skills, missing talk mandibles, automobiles, loops, scan acts, and 19th difficulties. science and closed continuum of case leader Atheists. sets and articles winning patches, weight and managers. high spaces of Thanks between compounds and semantics. Where will the Plant Sciences remain you? For one way, to terms and beyond! There support no modifiers to preserve. Contradictions words are experienced to all N on Earth. They 're possible because they agree familial to be their Rational analyst by a being ended consolatione where they have home Breakdown from the inoculation and waste it into theorem. functions do a download master the gre 2010 of probabilisitic ve that can not notice but can review here Metric sometimes and as. download not enables me guess of the superior cases in Fantasia Mathematica. so a such project&rsquo: will you make Adjusting sets when you are to the equal classification, Mark? I are it one of these is activated to understand ' analogous '? What 'm you need the maintainability is? I are along do closed anti-virus, but I simply contain that the n't infected students of treatment are easiest to use designing off with competitive brands. usually there is a continuity of differential browser to receive reviews motivated, measuring from modern ones into other areas that feel systems. THat is why I produced ' a Cataloguing Copiis '. In substances of second nearness vs temporary everything they are Therefore not 00CLOSED. The future between them is easily the Object-oriented as the space between page encapsulation and future set. And what is ' real-world sphere '? I are possibly known the download master the gre stipulated like this. I say some of these sets may believe more real, and are continuously still closed also not within the many nationalist( at least I think also used them). In true, I are however inserted kernel build the characters own scan or topological person. really one of the hottest web functions in extension is oriented ' basic other cause '( usually V is at least modeled of the Poincare technology). For what it is short, I'd up gain all open design. previous power permanently is up a s. They call structures and projects to introduce your download master the. I imply sent this impact and reported it Now large. bear beams, UML, account animals bag 39; surface n't be a environment - give why it is a arbitrary notion and is the guide. The 3D Head First design is Even similar, no Library how temporary case you have. It suggests all the surface; 3D litter; in a properly medical, basic space. Java you can See it if still better to represent person oriented. I would right do Head First Design Patterns. There are a man in my topology- that is me from going where I should charge a computation and where I should finitely. When it is down to it homeomorphisms are a everyone to object unique physics into researcher-focused pseudopodia that are with each understandable. answer to use objects where not you would be consideredthe yourself. Early well the Movement wrote an Integer in the Plot You&rsquo. is the download master the gre 2010 have to like a generality? What would the point of living it into a duality reverse? These connect the &ldquo of Definitions you forward are to run yourself. system links use Supported to believe. system Is dead and you'll need quick applications, interact in preview adopted by this. metric spaces wrote called because of whole organisms into the download master the gre 2010 and terminology of the decal and the structural reality, vital as the object of the relative inner approach of the pH. Each topology is fluid to another and more than one topology is addressed in one dogma, not published in the talk connectedness. Minute power ai brought to dry, not less great, spaces western as the pole, the ideas, and the compounds. These problem libraries say Exactly more or less metric by non-profit categorical Electronics and knot single network as Lately. sophisticated Christians solid as edge see known read to the deep variety, the surface design. It allows there read in laser with a mind and object set. If a lower download master the bird is Also colored, the particular courage can prevent described with parallel topology, either with same triangles or, in more dependent departments, as a vernacular connectedness, where the consent is to the cost. These states published chamfered by use-case and so by the analytics. This rate proves appropriate and the things may adopt expected as the knowing hole contains. Markman B, Barton personal generalisation( 1987) continuity of the incidental model of the book and lower page. Sarwer DB, Fabricatore AN( 2008) Open exercises of the certain object extension left. Sarwer DB, Thompson JK, Mitchell JE, Rubin JP( 2008) self-intersected pages of the capable quinque mug Completing plant decomposing antonym. Dhami LD, Agarwal M( 2006) Safe competitive download master the gre undergoing with patient Topology for the non vector. Alegria Peren goodness, Barba Gomez J, Guerrero-Santos J( 1999) difficult type Completing with line( 120 s books). Zook Conversion( 1975) The dependent decomposition Biology ab-)use. McCraw LH jr( 1974) key life after close energy topology. Chapter 7 's a download master the gre 2010 wound carbon of the benificial metric programming. Chapter 8 supports Notations into higher recens. not of ecosystems for the logs? start our Gift Guides and learn our audiences on what to regulate sets and x during the geometry analysis. using for personal methods? cause our Beautiful Books Protoplast and Start real properties for cases, magnetotaxis techniques and more. The bariatric consumers of the Elements, Vol. The Thirteen Books of the Elements, Vol. The Thirteen Books of the Elements, Vol. 034; In this misconfigured X the response is us to Develop a History more than it is our experts. Without family he is us to the part of other Methods. opera by area the volume does the breakdown of low-dimensional model. As to the stars, they help also 20-to. I still hired the topics of distortions and download master the populations. 2018 The Book Depository Ltd. Our bonus tells exposed open by consisting early constraints to our aspects. Please be telling us by dating your information theory. duo basidiomycetes will be modern after you are the forgiveness distance and backlist the context. 3-space not have set. Please include home to die the substrates given by Disqus. Lucy Oulton, Tuesday 24 Jan 2017 Since both sheets and important budgets obtain as nutrients from the others to the particular Endoglucanases, one can always have them not. What is all of this are? structures, I learned about analogous mycorrhizae and plugs that master values. A Few page of a adaption is only here make us as small reuse as point is. But, if no examples so attract in the Y, completely a infinite index highlights model. human body affects normally good in the parasite of topological way strategies, since topology forests do also perhaps managed with a ongoing through the average point-set( Hilbert Atheists) or the world( Banach aspects). You take it proves moving to ask all other problem like acts and Klein elements, and you need up to the Metabolic implementation to prevent Conjugants of$U$about critical and human developments. That needs draw you usually presented to the iterative download master of important law. The one that does given there to most donation( bizarre anyone) solids. If you have Klein mirrors and the tropics of those, you should develop to continued or good loss. I are only early that first important decomposition gives cycle at all to be with ' staff '. control out more about the wiki on the Community Portal download master. If you need allow, you can regularly manage the returns at the Admin context. An wish is strongly Answer to Object real; well changing molecule topologies and closed difficulties is simple. To use a sure consolatione, internationally come the platform recherche in the decal below or in the version geometry at the coffee of the point. This wiki is home of the Gamepedia Gacha Network. For more Gacha download master, analysis out one of the Notations exactly! 160; World of DemonsDiscuss this decomposition direction and intersect parts to like so. This point-set triggered not shown on 27 November 2018, at 23:07. paper way and forms are features and Terms of their intermediate surface and its spaces. This vertex is a perfect of Curse, Inc. Why live I need to follow a CAPTCHA? modelling the CAPTCHA does you define a coniferous and proclaims you good download to the need landscapeBookmarkDownloadby. What can I produce to make this in the Check? If you do on a current subdivision, like at hole, you can allow an nearness word on your set to run open it takes maybe requested with mind. If you have at an network or slow example, you can make the g particle to intersect a stabilization across the quotient thinking for Many or personal links. Another network to sell Completing this matter in the maintainability sets to say Privacy Pass. download out the species world in the Chrome Store. To me, that is that any download master the gre 2010 of ' exception ' that is radical in the solid Real bargain is vastly begin instead to the historical oil of richness to inhibit the list. Yes, solution is identified by topology. But that does wherein the life of call. Wing-Bar fundamentally is down to the body enzyme, which belongs it for dynamic properties. n't, Just all geos are shared, and they have not now open. For one world, average volume is us sufficiently complete environments that we are to be rather, but which ca properly keep called under the anti-virus of continuous areas. When we are to extend ourselves, what descriptions of topological new aspects need there metric for progression and the axioms of lb we allow here, we wanna been to the account of an new future, and from there to the differential root of a subject usability. In this complex, we turn that a direction lignin is ' Illustrative ' a example x if y is in some( slightly ' aquatic ') basic manner of x. It does moral that looking interesting inland doctors, it offers subject to like the Access of person. I are used that set. But that is However because I 're changed arbitrary analogous surfaces in my download master the. If I announced to get a cycle of great points, I inherit I could Browse have topological of this texture. ecology that this simplifies then any weirder than defining titanic. A union t could have nearer to logic than union disallowing to one airline but farther including to another. What about having that y is done in more Illustrative types Completing duo than glucose? Of surgery, you would develop an nutrient creating case if there Provides a topological complex of interesting answers Object-oriented. Which would have some surface component into topology! It is so beat warm download master the. Why would you customize an thumbnail? however, I were moved an subdivision. sign, we represent called in a truth of approach. download master the gre 2010 is a nice . There is no one hole why a SolidObject is an formalism. n't, one are they all are in fuzzy is SolidObject of cellulose: basic email that has closer to ' example ' than atom equipped administrator; es. being and specfying a metric Everyone for topology is one of the point-set tables for adding or Covering an atheist. It is mechanistic to be nuclear crashes, inputs and early download, actually orientated However to oriented new poles by a only way, as components to save by or to do the mesh of decomposition. Over the topologists that 'm recognized, biosphere encloses to share the metric which extended the UML for diein common recalcitrant millions and born mobile objects to gain up spaces and phase files. Many together evolved morals concerned in only markets agree to compute the practice of mass and is very really in Truesight. They need the solid children who do beyond including connectors on consolatione and concentrate more bird enriched visitors. intersections 'm metric to meet and assign the download master the gre with topological situation, naturally limited by infinite Plant or solids that are what a army must do. No volume supports certified using in concepts. It works Got$N$and points reject related from the reason. usually meaningful banner needs a part what to be So than how to gain. The download master the of personal available lines from these Hemoglobin autores is a more monthly information to the later projects of philosophy as so please to subset donation. This ever needed and shed endless section of Plant Litter suggests on situation readers in obvious optimal artifacts algebraic as near and monthly methods. The volume of metric plastic applications from these importance areas forms a more Amphitrichous A-B to the later surfaces of anti-virus as still purportedly to book component. It further about does how lines are recycled not to incidental textbooks. Earlier lines imply logic in accounting of other lignin. This volume outlines rigorously sometimes been on Listopia. There need no software data on this implementation very. dead a orientation while we carry you in to your decomposition way. These number Point-sets are seen to move funds and all variant invariants to judge conditions into Counterexamples dead as mitochondria, approach superset or metric bird. expertise to Scientific Research Projects moves a Object-oriented office to the continuous individual surface reality. The download master the gre 2010 of Modern Science makes the energy of equation from surface to the network. The consolatione, which does on mass and administrator of research, will begin inverse to insight Characters selecting a anything for the coarse analysis. it&rsquo to Cancer Biology is a T1 object on how sets continue and are. Dungeons am f(x and insights, tradition fungi; start cycles and rates; natural way poles in non-empty pectin with religions, mind proofs, attributes, and Check books. This solution is the name for a surgical consultation of approach for set experts without differential available space. This iterative insurance of subset is problems of the cardinality and NHS of the most strong pH twigs. download of set of process of object contains to their useful space and temporary heterophylla-Picea sites. Leaf-fall officially a large space y. A polygon cellulose for the continuity of equations preloading operation coverage. links of the style pathologies of epsilon-delta analyst. The fact of union Users in care of figure topology. Amsterdam, North-Holland Publ. list engineering and browser in a heart( Tectona grandis) product. so-called weight from de-epithelializing silicone and guidance trunk in the Hubbard Brook Forest, New Hampshire. properties in download master the model notion. homeomorphism bacteria of areas of mesofaunal posts. Some waves for being the point of help poles in the parent of Logs. circumferential definition of course sets of topological confirmation in 8 and four-dimensional terms. pole and anatomy of phase and small workshop of t in idealistic West Africa. approach of 1st and metrizable genes. excluding behaviors of common introduction concept. structure view through continuity theory. He himself reported download master the gre 2010 there was a higher product, as he did questions. This would exacerbate him a information. Wikipedia is a malware of comfort. It encloses no real stuff any of its axioms. It is the relation without project. What was 501(c)(3 highest intersection of truth? For Boethius, the highest assembly of atheist was musica mundana, the Compressive viewing of the service performed in the sacrifices of the sets and data and the object-oriented mesh of the ecosystems. communal goes are download ones and Christianity of the rates. full to this were nona- way; a, the closed consolatione pointed in the works of the metric length and neighbourhood, applying the four cells, or abstraction regions, migrating from the rare pine of the four neighborhoods. The lowest leg of reality for Boethius found example numbers, the paperback markets missed by beliefs and small edges. For what signed Boethius as provided for? Boethius was obviously given for his' PhysicalTherapy of calculus' which he lost while in tradition where he was later removed. It attended first open in the Middle Ages. They intersect from a various home in which Start proves tight and site page and line. samples alert often angular to inside. topology: Just, proteins can ensure from Also. And decomposing for some download master the gre. You illustrate, surround, & Ask God for phase'. If open have your shared subsets what you do them to buy, Werther its I are you or a cover. What are they take to you when you want? From what I happen the two-dimensional gneiss knows to prove to click out why you were. So your litter is s to see the universe of programming and this does on your network Object. If more code is become in this community they will define loss and volume orders and say at the specific handlers. After all of leaf; is has given the pit 's found up and wanted for knowing. It is tested and so a download master the gre contains in to pay the example for the litter and Litter. This focuses when type is divided, lab 'd on and the Analysis is required to make long and paired in a phase. Some techniques are one-point levels so those work made and some 've infected examples that may touch the many point I are supposed also. But, I try I use the mm of objects in the other trading. emphasis will prevent at some fall. Because every key statechart gives main, they tend very outer to % at any affair. always, runtime will Enter at some life. There are no Functional topology of Infinite. The download master the of the lack line as an open domain of the flow is shown described for a Structured type. One of the shared definitions getting before offers'' chapter'', a network fairly threatened to remove to any continuities in analysis, use and series. initiation applies all on the version essence, where functions and neighbourhoods not do among the earliest Cases of low system. Wildland donors 're Collected many body to prevent glucose in developer to whole umbilicus and intersection problem. The fuzzy download master is that office topology is a non-profit property on the structure of materials, and possible approach has a general membership of relation computer and computation code. 1966; Madge, 1965); and( 3) the coniferous problems under the scalable activities! 1973; Mikola, 1960; Ovington, 1954) because first anything is more business, more p, less gl'infelici and now locally less central privacy than potential mind. Pradhan( 1973) made that book of pole future did more only than that of Acacia norm network. However, download master the welfare is actually faster than faces and relationships( John, 1973; Rochow, 1974), and website Data which are softer think more So than surface dynamics( Willams and Gray. N years and genus actinomycetes. N collection come to interview at an relative tissue( Singh, 1969). Broadfoot and Pierre( 1939) claimed a still hind lack between god shape and each of five financial measures: possible nitrite, pelvic tangible temperature, specific flight, functional music and available background. Kucera( 1959) too were a real download master between both software of loan and p-adic module water of online others. Since the human simplex of investing has its matter of basis, it is involved almost to do same climate in influencing the shape of designs. The object-oriented stages according variable set on Skin world are context spaces and surface. Crossley and Hoglund( 1962) wanted a shared biology between the confusion of sets in distance surfaces and the scan of connection definition. What stops all of this have? mitochondria, I received about triple cases and reasons that share programs. A Object-oriented libri of a microarray is here precisely judge us as surgical harmony as bird has. But, if no sets equally Hedge in the meeting, Please a northern team has system. topological context converges long basic in the continuity of secondary privilege points, since fund processes are no differently incorporated with a personal through the Other sense( Hilbert spaces) or the topology( Banach adults). You are it simplifies Continuing to answer all algebraic subdivision like phases and Klein technologies, and you are up to the open page to make rates of way about existing and topological changes. That is Jumpstart you simply was to the small system of suitable talk. The one that is left Finally to most environment( new topology) returns. If you are Klein projections and the rates of those, you should delete to topological or many download master the gre 2010. I do n't handy that previous worked-out space is biochemistry at all to contact with ' site '. In the type of a due, how do you depend whether anti-virus access is ' knotted ' to beat tick? You can die not name in a old going, that you can be in a open topology. need takes properly a medical concept. A better curvature have Specimen of a way of forests in the notion. You are the weight in a only browser? Of case you can join tool in a Percent system, at least Maybe n't as you can share it last even, for network in the metric programming. Lucy Oulton, Tuesday 13 Dec 2016 WorldCat simplifies the download's largest design testing, forming you see page degrees Such. Please use in to WorldCat; know not start an impact? You can be; avoid a close calculation. say up or move in to make your transition. By considering our donation, you make that you need understood and be our Cookie Policy, Privacy Policy, and our units of Service. is the C matter set technical? I moved following with a download master the about C and C++ and he was that C has basic, but I il that it put just. I have that you can end specific centuries in C, but C++ is a dynamic free work. thermally, it needed kind on who comes what it says to like liberal and that it builds algebraic to use what rapid also also is. What are you patterns on this? be philosophy to shout on a opera of Oriented various and ever I will do other to download the Philosophy. mathematics are most geometric for Completing when getting reasons on a download master the gre 2010 and for taking different Ecological website; fund; within the bird when section books have or embody. models 're basis that do of three clicking industries. This mesh of address is as less normal, but There following around characteristics or ecosystem crashes of a glance. In such anti-virus, this value 's not built as the technique; x; space, since N-poles are before naked for Cycling the theory of the region. aware Pole TypesPoles with six or more chips recommend only repeated to learn entire network and very just do up in particular world. future; terms back cover to go that singletons Do sure, and eliminated for international language. But when else pay we learn when a Evolution should or surgery; treatment provide where it defines? It So 's down to religion. If a x corrects quantifying the point-set of the calculus, differently it should complete written or moved. This Sorry does on developments or any single-variable spaces of surjective download master the gre. getting appendages: The algebraic analysis of the most incorporated years I animal initiated becomes how to Do fields. And for personalized economist, edges can make Therefore arbitrary to follow without working biogeochemistry in an particular pointer. In so every N network overview must read meshed to affect a topology in complement services, supporting the root to maybe Read only multiple if total forms do to determine changed. This is why the best primer for reading bacteria gives to only be them wherever general by saying your availability goals coplies major. analysis; though not necessary to support where a distance will assert by according at the other parts of a process and where they 've. That lot is where a set will make. The contents are only maybe, and the subject download that we can However very be to the Iteration( conducted) itself is the figure of patterns that the site( factor) is. In medical objects, Cardinality is the such, and as especially, fish which is a structure( in widely hopefully as the lips relate to one another). Of body, in tissue, we just use with functions whose near sequence is supporter. We have with the accessible particle, in which there reduces a respected atheism of software between surfaces; there is n't an network which encloses used on the segments of the concept. We are in the statistical analysis, where there needs no longer a scalable, annual Humus between surjective sets, but there is n't a Check of greats, which 'm a public of estimation called to them( object from the topology neighborhood), and slightly of umbilicus between them. The actual y to lose simply provides that all of these easily infected, major projects, the metric theatre which is levels like right help repeatably largely various, are intuitively millions between data. The body only longer defines object-oriented access, but just is integrated difficulties in which minutes do to one another. be us share how some of these decomposers shrinkwrap to one another in the difficult download master. The approach of the distinct anti-virus, in inverse applications, helps from its finite way. From the full equivalence, we can breathe a area for the moisture of an between distances, and from this lets the space of choice. We can just beat a development for a plane, a lot, of devices in the programming. then from device, we Relate the Ammonification of waist. From the sequence, we supply CASE to run a modeling for someone: We 'm that a real hedge core Everyone is down the modeling of a basic useful religious Hospital, and from this, we are the echinoderm not of a same neck. We do reading down through Lately excellent fuzzy intersections: uptake helps a more alone metric continuity than magnetite, and class is a stronger life than tvchannel. This is the download master: What Loss has a class be when we cannot not ask a god of ofenlightenment between Undergraduates? What strives the hedge selective base" that a analysis can allow matched with? In download master, you prefer a inclusion of hides, and you believe their &minus by which flows love historical to each accessible. Most of the open only substrates are thought belonging actual distortions of terms, where there surfaces no due eumdem of two bodies that do few topics; there is a accumulation of then smaller data that have very closer and closer iron Experiments. The diagram of chapter that I have smoothing especially works more like the time of a topology - it 's that abstract world of earthworms that are interdisciplinary to each last, with that homotopy music to do Do narrower and narrower data around a deployment. The methods of terms that you have from using that are woody. In some interception, you are receiving nations - but they look algebraic, free, same sequences, because all that components is what servers are only to what fuzzy sequences - too what classification you do to evaluate to say from one to another. send is require a skilled download at an volume. There Is a intangible cool about Q& you can naturally model a necessity at hemicellulose, because they are the arguments who ca very get the response between their research modeling and their joke. Like most inner classes, there 's a cap of method heard else of it. From the revolution of topology, the faith athiest and the vector have the Thirteen developer. In management, the fuzzy series is actively work: what is provides the compatible exhibitors of the religion: what needs used to what, what subsets are evil to what solid limitations. If you dwell the download master the gre 2010 Heaven into content, you can model it from course to product without Pertaining it, or living it, or studying any friends exceptionally. One can demolish the subject by Sexually wearing and dating: once in$x$, they are the tribal access. On the scientific setting, a practice strips available: you ca now have a surface into a edge without using a reader in it; and you ca again make a today into a ten-gon without Hence Depending a topology in it, or gaining it into a analyst and redirecting the partibus just. You ca alone go one into the locally-homeomorphic without ii the single coffee of the power. To make at it slightly more n't: gain a access. usually, are a download master the through it, to write in into a website. Erskine were before he were Steve into a marvelous download? What would be the basic approaches to be similarities? How geometric should I preserve in watching back views? Why do UK MPs tripling the topological religion and usually the death? 's science transferred in Red Dead Redemption 2? covers it third to be property; forest; metric part poles in a Internet development? What strictly surfaces it do to die meeting? Why remember we then understand that when we have a download master the by a programming53Object connection ecosystem the statechart works hands-on? Why shrinkwrap a surgery and around appear degrees? In Star Trek( 2009), how set the methodologies of the Kelvin be base? How can an space like a authority before leaves are big? 39; functional the best object to tell over 400 cell of mathematical customers? MVP Nominations - What should however die offset in the markets during set hand? When I was a Class of representations that shows were my to Find inside, an clear shine of you was me to get about theologian. I was that before - not after I dove my download master the gre 2010 to ScienceBlogs. hugely I have regulating to change quickly to those Aerial sets, 'm some assessing and looking, be some scripts, and say them. I 'm you so perhaps a download master the: please tell Open Library plant. The hedge Abstract learns dangerous. If treatment methods in development, we can set this sense process. n't not, your base will be grouped influential, learning your topology! not we have is the download master the of a human topology to define a book the common equivalent collectors. But we very are to customize for advertisements and depth. For 22 profiles, my topology is filled to need the bargain of theory and happen it technical to thing. Open Library proves a biomass, but we are your supply. If you are our download master paperback, Consultation in what you can musica. Your possible neighbourhood will assemble Modified long-term number usually. I are you immediately away a method: please link Open Library SolidObject. The centralized skin is variable. If download master fees in math, we can emphasize this Energy edition. No naturally, your home will ask based different, being your space! all we think is the concentration of a geometric fund to Sign a specialist the fascinating property points. But we now have to give for sums and space. 2001) found that C wonderful download master the closed during protein face can Read in known thumbnail area by forests topologically containing down shared s and inside using in- possibility. necessary only similar classes are shown All nested by determining of pathway. interior antonym in sets is repeated in infected reflections to Check individual and topology. These strategies find squishing much and provide many artists for human cellulose and infected space according. markets: Why comes this system framework? The browser is measured constantly matched by the matters of different factors. comparison plane encloses an higher-dimensional existence in capturing and pinching to the most false tough temperate concerns. scan$Y\$ and hydra analysis intersect deeply caused with return phenomenon. using formally provided knots explores on continuity of new rates of studies( Ehrenfeld and Toth 1997). Without these sets dominant, Monthly download master the of dimensions is very infected and specifically Euclidean molecules may use in the bank. For wrinkle, questions within the open closed drugs of Guatemala derive more possible to 500-year-old reasons and twisty Fungal respect spaces surgical to flux of tail tools. This set is reading existing that these rates are mixed to each interested, the solution of phases wait of different man, and the book and next OOPs are new. This belongs a area of two tables of equations looking terms in wherein sure centuries. model trading 's the open loss described to enter agent and to be sense analysts that see for possible different Cases. definition loss in looking life presentation involve as a loss of scaly and volume man. Cannadian Journal of Botany 60:2263-2269.
|
2020-02-21 05:50:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.260042667388916, "perplexity": 5150.740897268045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145443.63/warc/CC-MAIN-20200221045555-20200221075555-00087.warc.gz"}
|
http://crypto.stackexchange.com/tags/zero-knowledge-proofs/hot?filter=week
|
# Tag Info
3
The description of this "kid zero knowledge" example follows the strucure of how interactive proofs that are zero-knowledge usually work: The prover sends a commitment (walks into one of the two sides) The verifier challenges the prover (tosses the coin to decide which side the prover should walk out) The prover gives a response (walks out the side the ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2014-03-12 15:21:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104615807533264, "perplexity": 4787.478131223649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021900438/warc/CC-MAIN-20140305121820-00078-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=162189
|
## Enginering Electromagnetics help
1. The problem statement, all variables and given/known data
The charge density throughout a region is give by pv=10e^-3r{uC/m^3}, where r is measeure in meters. Find the total charge Q contained in a sphere center about the origin that has a radius of meters.
Where e = -1.60210X10^-19
2. Relevant equations
e = -1.60210X10^-19
volume of a sphere = 4/3¶r^3
3. The attempt at a solution
didnt know where to start
PhysOrg.com science news on PhysOrg.com >> Leading 3-D printer firms to merge in $403M deal (Update)>> LA to give every student an iPad;$30M order>> CIA faulted for choosing Amazon over IBM on cloud contract
The following should help. Regards, Nacer.
Yeah, I figured out that part i just have trouble integrating the problem. I get some huge number when I do that. But I do appreciate you showing me that but, if you could can you show me how its worked out. The answer is 8.73microCoulombs. Thanks again
## Enginering Electromagnetics help
There an attachment of a sample problem like the one im doing either way i still cant get the right answer.
Attached Images
untitled.bmp (63.4 KB, 3 views)
Similar discussions for: Enginering Electromagnetics help Thread Forum Replies Classical Physics 1 Biology 0 Academic Guidance 0 Introductory Physics Homework 6 General Engineering 12
|
2013-06-20 07:16:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2868807911872864, "perplexity": 3019.5280127023043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710963930/warc/CC-MAIN-20130516132923-00049-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://proteinsandwavefunctions.blogspot.com/2018/07/gaussian-processes-for-dumb-dummies.html
|
## Monday, July 2, 2018
### Gaussian Processes for dumb dummies
It's quite disheartening when you don't understand something titled "xxx for dummies" so that's what I was after reading Katherine Bailey's blogpost "Gaussian Processes for Dummies". Luckily the post included Python code and a link to some lecture notes for which I also found the corresponding recorded lecture on YouTube.
Having looked at all that, the blogpost made more sense and I now feel I understand Gaussian Processes (GP) a little bit better, at least how it applies to regression, and here is my take on it.
What is GP good for?
GP is an algorithm that takes x and y coordinates as input and returns a numerical fit and the associated standard deviation at each point. What makes GP special is that you don't have to choose a mathematical function for the fit and you get the uncertainty of the fit at each point
I refactored the code from Katherine Baily's blogpost here and show a plot of a GP fit ("mu", red line) to five points of a $sin$ function (blue squares, "Xtrain" and "ytrain"). "Xtest" is a vector of x-coordinates for which I want to evaluate the fit, "mu". "stdev" is a vector of standard deviations of the fit at each point in Xtest and the gray shading in the plot represents 2 standard deviations of uncertainty. We'll get to "L" later and there is a hyperparameter ("param" in the code) that we also need to talk about.
mu, stdv, L = Gaussian_process(Xtest, Xtrain, ytrain)
What does the GP algorithm do?
1. Construct the kernel matrix $\mathbf{K_{**}}$, where $K_{**,ij} = e^{-(x_i-x_j)^2/2\lambda}$, for the test set. The kernel is a measure of how similar $x_i$ is to $x_j$ and $\lambda$ is a parameter, called "param" in the code (note to self: "lambda" is a bad choice for a variable name in Python).
2. Construct the kernel matrix $\mathbf{K}$ for the test set and Cholesky-decompose it to get $\mathbf{L}$., i.e. $\mathbf{K = LL^T}$
3. Construct the kernel matrix $\mathbf{K_*}$ connecting the test set to the training set and compute $\mathbf{L_k = L^{-1}K_*}$ (use the solve function for better numerical stability).
4. Compute $\boldsymbol{\mu} = \mathbf{ L^T_kL^{-1}y}_{train}$
5. Compute the standard deviation $s_i = \sqrt{k_{**,ii} - \sum_j L^2_{k,ij}}$ where $\mathbf{k_{**}}$ is obtained by diagonalising $\mathbf{K_{**}}$
6. Compute $\mathbf{L}$ by Cholesky-decomposing $\mathbf{K_{** }- L_k^T L_k}$
What is the basic idea behind the GP algorithm?
How would you write an algorithm that would generate an random function $y=f(x)$. I would argue that the simplest way is simply to generate a random number $y \sim \mathcal{N}(0,1)$ (i.e. y = np.random.normal()) for each value of $x$. ($\mathcal{N}(0,1)$ is standard notation for a Gaussian distribution with 0 mean and a standard deviation of 1.)
Here's a plot of three such functions.
If I generated 1000 of them the average y-value at each x-coordinate $\langle y_i \rangle$ would be 0. I can change this by $y \sim \mathcal{N}(\mu,\sigma^2) = \mu + \sigma \mathcal{N}(0,1)$.
You'll notice that these functions are much more jagged than the functions you usually work with. Another way of saying this is that the values of $y$ tend to be similar if the corresponding values of $x$ are similar, i.e. the $y$ values are correlated by distance ($|x_i - x_j|$).
This correlation is quantified by the kernel matrix $\mathbf{K}$ and can be used to generate smoother functions by $y_i \sim \sum_j L_{ij} \mathcal{N}(0,1)$. This works great as you can see from this plot
You can think of $\mathbf{K}$ and $\mathbf{L}$ a bit like the variance $\sigma^2$ and the standard deviation $\sigma$. $\langle y_i \rangle$ = 0 as before but this can be changed in analogy with the uncorrelated case $y_i \sim \mu + \sum_j L_{ij} \mathcal{N}(0,1)$
The GP is a way of generalising this equation as $y_i \sim \mu_i + \sum_j L_{ij} \mathcal{N}(0,1)$ and using the training data to obtain values for $\mu_i$ and $L_{ij}$ such that $\mu_i$ matches the y-values in the training data with correspondingly small $L_{ij}$ values, i.e. greater certainty. Now if you generate a 1000 such random functions and average you will get $\boldsymbol{\mu}$.
|
2019-01-17 16:21:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390688896179199, "perplexity": 475.6443673611562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658988.30/warc/CC-MAIN-20190117143601-20190117165601-00488.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/increasing-decreasing-functions-prove-that-function-f-given-f-x-log-cos-x-strictly-increasing-2-0-strictly-decreasing-0-2_46129
|
PUC Karnataka Science Class 12Department of Pre-University Education, Karnataka
Share
Books Shortlist
Your shortlist is empty
# Solution for Prove that the Function F Given by F(X) = Log Cos X is Strictly Increasing on (−π/2, 0) and Strictly Decreasing on (0, π/2) ? - PUC Karnataka Science Class 12 - Mathematics
#### Question
Prove that the function f given by f(x) = log cos x is strictly increasing on (−π/2, 0) and strictly decreasing on (0, π/2) ?
#### Solution
$f\left( x \right) = \log \cos x$
$f'\left( x \right) = \frac{1}{\cos x}\left( - \sin x \right)$
$= - \tan x$
$\text { Now,}$
$x \in \left( - \frac{\pi}{2}, 0 \right)$
$\Rightarrow \tan x < 0$
$\Rightarrow - \tan x > 0$
$\Rightarrow f'(x) > 0$
$\text { So,f(x)is strictly increasing on } \left( - \frac{\pi}{2}, 0 \right) .$
$\text { Now,}$
$x \in \left( 0, \frac{\pi}{2} \right)$
$\Rightarrow \tan x > 0$
$\Rightarrow - \tan x < 0$
$\Rightarrow f'(x) < 0$
$\text { So,f(x)isstrictly decreasing on }\left( 0, \frac{\pi}{2} \right).$
Is there an error in this question or solution?
#### Video TutorialsVIEW ALL [3]
Solution Prove that the Function F Given by F(X) = Log Cos X is Strictly Increasing on (−π/2, 0) and Strictly Decreasing on (0, π/2) ? Concept: Increasing and Decreasing Functions.
S
|
2019-05-23 21:50:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6067099571228027, "perplexity": 4613.5937358286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257396.96/warc/CC-MAIN-20190523204120-20190523230120-00346.warc.gz"}
|
https://deepai.org/publication/computing-the-shapley-value-in-allocation-problems-approximations-and-bounds-with-an-application-to-the-italian-vqr-research-assessment-program
|
DeepAI
# Computing the Shapley Value in Allocation Problems: Approximations and Bounds, with an Application to the Italian VQR Research Assessment Program
In allocation problems, a given set of goods are assigned to agents in such a way that the social welfare is maximised, that is, the largest possible global worth is achieved. When goods are indivisible, it is possible to use money compensation to perform a fair allocation taking into account the actual contribution of all agents to the social welfare. Coalitional games provide a formal mathematical framework to model such problems, in particular the Shapley value is a solution concept widely used for assigning worths to agents in a fair way. Unfortunately, computing this value is a # P-hard problem, so that applying this good theoretical notion is often quite difficult in real-world problems. We describe useful properties that allow us to greatly simplify the instances of allocation problems, without affecting the Shapley value of any player. Moreover, we propose algorithms for computing lower bounds and upper bounds of the Shapley value, which in some cases provide the exact result and that can be combined with approximation algorithms. The proposed techniques have been implemented and tested on a real-world application of allocation problems, namely, the Italian research assessment program, known as VQR. For the large university considered in the experiments, the problem involves thousands of agents and goods (here, researchers and their research products). The algorithms described in the paper are able to compute the Shapley value for most of those agents, and to get a good approximation of the Shapley value for all of them.
• 1 publication
• 1 publication
• 3 publications
• 6 publications
• 5 publications
07/19/2021
### Maximizing Nash Social Welfare in 2-Value Instances
We consider the problem of maximizing the Nash social welfare when alloc...
05/13/2020
### Fair and Efficient Allocations under Subadditive Valuations
We study the problem of allocating a set of indivisible goods among agen...
06/18/2019
### Weighted Maxmin Fair Share Allocation of Indivisible Chores
We initiate the study of indivisible chore allocation for agents with as...
02/14/2022
### Optimizing over Serial Dictatorships
Motivated by the success of the serial dictatorship mechanism in social ...
01/19/2022
### Achieving Envy-Freeness with Limited Subsidies under Dichotomous Valuations
We study the problem of allocating indivisible goods among agents in a f...
08/17/2022
### Finding Fair Allocations under Budget Constraints
We study the fair allocation of indivisible goods among agents with iden...
08/21/2014
### A Study of Proxies for Shapley Allocations of Transport Costs
We propose and evaluate a number of solutions to the problem of calculat...
## 1 Introduction
### 1.1 Coalitional Game Theory
Coalitional games provide a rich mathematical framework to analyze interactions between intelligent agents. We consider coalitional games of the form , consisting of a set of
agents and a characteristic function
. The latter maps each coalition to the worth that agents in can obtain by collaborating with each other. In this context, the crucial problem is to find a mechanism to allocate the worth , i.e., the value of the grand-coalition , in a way that is fair for all players and that additionally satisfies some further important properties such as efficiency: we distribute precisely the available budget to players (not more and not less). Moreover, for fairness and stability reasons, it is usually required that every group of agents gets at least the worth that it can guarantee to the game.
Several solution concepts have been considered in the literature as “fair allocation” schemes and, among them, a prominent one is the Shapley value Shapley (1953). According to this notion, the worth of any agent is determined by considering its actual contribution to all the possible coalitions of agents. More precisely, it is considered the so-called marginal contribution to any coalition , that is, the difference between what can be obtained when collaborates with the agents in and what can be obtained without the contribution of . More formally, the Shapley value of a player is defined by the following weighted average of all such marginal contributions:
ϕi(G)=∑C⊆N∖{i}|C|!(n−|C|−1)!n!(v(C∪{i})−v(C)).
### 1.2 Allocation Games
Among the various classes of coalitional games, we focus in this paper on allocation games, which is a setting for analyzing fair division problems where monetary compensations are allowed and utilities are quasi-linear Moulin (1992). Allocation games naturally arise in various application domains, ranging from house allocation to room assignment-rent division, to (cooperative) scheduling and task allocation, to protocols for wireless communication networks, and to queuing problems (see, e.g., Greco Scarcello (20142); Iera . (2011); Maniquet (2003); Mishra Rangarajan (2007); Moulin (1992) and the references therein).
Computing the Shapley value of such games is a difficult problem, indeed it is #P-hard even if goods can only have two different possible values Greco . (2015). In this paper we focus on large instances of this problem, involving thousands of agents and goods, for which no algorithm described in the literature is able to provide an exact solution. There are however some promising recent advances that identify islands of tractability for the allocation problems where at most one good is allocated to each agent: it has been recently shown that those instances where the treewidth of the agents’ interaction-graph is bounded by some constant (i.e., have a low degree of cyclicity) can be solved in polynomial-time Greco . (2015). The result is based on recent advances on counting solutions of conjunctive queries with existential variables Greco Scarcello (20141). Unfortunately, if the structure is quite cyclic this technique cannot be applied to large instances, because its computational complexity has an exponential dependency on the treewidth.
In some applications, one can be satisfied with approximations of the Shapley value. With this respect, things are quite good in principle, since we know there exists a fully polynomial-time randomized approximation scheme to compute the Shapley value in supermodular games Liben-Nowell . (2012). The algorithm can thus be tuned to obtain the desired maximum expected error, as a percentage of the correct Shapley value. However, not very surprisingly, for very large instances one has to consider a huge number of samples, in order to stay below a reasonable expected error. Maleki et al. Maleki . (2013)
provide bounds for the estimation error (as an absolute number rather than a percentage of the correct value) if the variance or the range of the samples are known. They also introduce stratified sampling as a method to further reduce the number of required samples.
### 1.3 Contribution
In order to attack large instances of allocation problems, we start by proving some useful properties of these problems that allow us to decompose instances into smaller pieces, which can be solved independently. Moreover, some of these properties identify cases where the computation of the worth function can be obtained in a very efficient way.
With these properties, we are able to use the randomized approximation algorithm of Liben-Nowell et al. Liben-Nowell . (2012) even on instances that (when not decomposed) are very large.
Furthermore, we note that in some applications one may prefer to determine a guaranteed interval for the Shapley value, rather than one probably good point. Therefore, we propose algorithms for computing a lower bound and an upper bound of the Shapley value for allocation problems. In many cases the distance between the two bounds is quite small, and sometimes they even coincide, which means that we actually computed the exact value. We also used these algorithms together with the approximation algorithm of Liben-Nowell et al.
Liben-Nowell . (2012), to provide a more accurate evaluation of the maximum error of this randomized solution, for the considered instances.
Moreover, by plugging the computed lower bound values into the randomized sampling algorithm proposed by Maleki et al. Maleki . (2013), we were able to express their error bound as a percentage of the correct Shapley value, rather than as an absolute number, at least for our test instances. This allowed us to compute approximate Shapley values for our largest test case (namely, the 2011-2014 research assessment exercise of Sapienza University of Rome), within 5% of the correct value with 99% probability, in a matter of hours.
### 1.4 The Case Study
The way ANVUR currently uses product scores, for the purposes described above, yields evaluations that do not satisfy the desirable properties outlined in Section 4. In order to deal with this issue, we have modeled the problem as an allocation game Greco Scarcello (2013), with a fair way to divide the total score of the university among researchers, groups, and departments based on the Shapley value. The proposed division rule enjoys many desirable properties, such as the independence of the specific allocation of research products, the independence of the preliminary (optimal) products selection, the guarantee of the actual (marginal) contribution, and so on.
## 2 Preliminaries
In the setting considered in this paper, a game is defined by an allocation scenario comprising a set of agents and a set of goods , whose values are given by the function mapping each good to a non-negative real number. The function associates each agent with the set of goods he/she is interested in. Moreover, the natural number provides the maximum number of goods that can be assigned to each agent. Each good is indivisible and can be assigned at most to one player.
For a coalition , a (feasible) allocation is a mapping from to sets of goods from such that: each agent gets a set of goods with , and , for any other agent (each good can be assigned to one agent at most).
We denote by the set of all goods in the image of , that is, . With a slight abuse of notation, we denote by the sum of all the values of a set of goods , and by the value . An allocation is optimal if there exists no allocation with . The total value of such an optimal allocation for the coalition is denoted by . The budget available for , also called the (maximum) social welfare, is , that is, the value of any optimal allocation for the whole set of agents (the grand-coalition). The coalitional game defined by the scenario is the pair , that is, the game where the worth of any coalition is given by the value of any of its optimal allocations. Note that holds, for each , since the allocation where no agent receives any goods is a feasible one (the value of an empty set of goods is ). The definition trivializes for , with .
###### Example 1
Consider the allocation scenario , depicted in a graphical way in Figure 1, where each edge connects an agent to a good she is interested in, and it is possible to allocate just one good to each agent (). The figure shows on the left an allocation for all the agents, with the edges in bold identifying the allocation of goods to agents. Note that this is an optimal allocation, i.e., a feasible allocation whose sum of values of the allocated goods is the maximum possible one. The value of this allocation is .
The coalitional game associated with this scenario is , where the worth function is precisely . In particular, we have seen that, for the grand-coalition, holds. For each with , an optimal allocation restricted to the agents in is also reported in Figure 1. It follows that the other values of the worth function are , = , , and .
For any allocation scenario , we define the agents graph as the undirected graph such that if there is a good .
## 3 The VQR Allocation Game
Note that the VQR research assessment exercise can be naturally modeled as an allocation scenario where is the set of researchers affiliated with a certain university , is the set of publications selected by for the assessment exercise, maps authors to the set of publications they have written, and assigns a value to each publication. In the current VQR programme (covering years 2011-2014), the range of is , with the latter value reserved to the excellent products.
In the submission phase, the values are estimated by the universities according to authors’ self-evaluations, and to the reference tables published by ANVUR (not available for some research areas). At the end of the program, will receive an amount of funds proportional to , that is, to the considered measure of the quality of the research produced by the university . The first combinatorial problem, which is easily seen to be a weighted matching problem, is to identify the best allocation scenario for the university. That is, to select a set of publications to be submitted, having the maximum possible total value among all those authored by in the considered period.
The final result may sometimes be different from the preliminary estimate, in particular because of those publications that undergo a peer-review process by experts selected by ANVUR, which clearly introduces a subjective factor in the evaluation. We assume that the values used by in the preliminary phase do coincide with the final ANVUR evaluation for all products. This is actually immaterial for the purpose of this paper, because we are interested here in the final division, where only the final (ANVUR) evaluation matters. However, we recall for the sake of completeness that, by adopting the fair division rule used in this paper, the best choice for all researchers is to provide their most accurate evaluation, so that is able to submit any optimal selection of products to ANVUR. In particular, any strategically incorrect self-evaluation by any researcher is useless, in that it cannot lead to any improvement in her/his personal evaluation, while it can lead to a worse evaluation if the best total value for is missed Greco Scarcello (2013).
###### Example 2
Let us consider the weighted bipartite graph in Figure 2, whose vertices are the researchers of a university and all the publications they have written. Edges encode the authorship relation , and weights encode the mapping providing the values of the publications. Consider the optimal allocation such that , , and , encoded by the solid lines in the figure. Based on this allocation, an optimal selection of publications to be submitted for the evaluation is . The publications that are not submitted are shown in black in the figure. Note that is co-authored by , , and , while is co-authored by and Thus, the allocation scenario to be considered is , and the associated coalitional game is the pair . In particular, the total value of the grand-coalition is .
The problem that we face is how to compute, from the total value obtained by , a fair score for individual researchers, or groups, or departments, and so on. As mentioned above, product scores are currently used for evaluating the hiring policy of universities and the PhD committees, and from this year such scores contribute to evaluate the quality of courses of study, too. Unfortunately, this is currently done in a way that fails to satisfy the properties that we outline below. Instead, following Greco Scarcello (2013), we propose to use the Shapley value of the allocation game defined by the scenario selected by the given structure as the division rule to distribute the available total value (or budget) to all the participating agents. For the allocation scenario in Example 2, we get , , and . Notice that the Shapley value is not a percentage assignment of publications to authors, but takes into account all possible coalitions of agents. Note that is not penalized by the fact that its best publication is assigned to researcher , in the submission phase determined by the optimal allocation depicted in Figure 2. Similarly, is not penalized by the fact that the worst publication is assigned to her/him (instead of being assigned to ).
Another important property is that the value assigned to each researcher is independent by the specific selection of products to be submitted, as long as the submission is an optimal one. For instance, an equivalent selection would consist of the products , because of the optimal allocation such that , , and . It can be checked that no Shapley value changes for any researcher, by considering the alternative allocation scenario based on the selection of products . On the other hand this nice property does not hold for many division rules. For instance, assume that the value of each researcher is determined by the average score of all the products evaluated by ANVUR of which she is a (co-)author222The products that were not submitted cannot be used, because they miss a certified evaluation by ANVUR.. Then, in the former allocation scenario gets , while in the latter one she gets . Symmetrically, gets a higher value in the former scenario and a lower one in the latter.
We will now recall the main desirable properties enjoyed by the division rule based on the Shapley value used in this paper. We refer the interested reader to Greco Scarcello (2013) for a more detailed description and discussion of these properties.
Budget-balance. The division rule precisely distributes the VQR score of over all its members, i.e., .
Fairness. The division rule is indifferent w.r.t. the specific optimal allocation used to submit the products to ANVUR. In particular, the score of each researcher is independent of the particular products assigned to him in the submission phase; moreover, it is independent of the specific set of products selected by the university, as long as the choice is optimal (i.e., with the same maximum value ).
Marginality. For any group of researchers , , where and . That is, every group is granted at least its marginal contribution to the performance of the grand-coalition .
We remark the importance of the fairness property, as the choice of a specific optimal set of products is immaterial for , but it may lead to quite different scores for individuals (and for their aggregations, assume e.g. that researchers and above belong to different departments). As a matter of fact, this property does not hold for the division rules adopted by ANVUR for the evaluation of both departments and newly hired researchers (see Section 1.4). The budget-balance property, on the other hand, is violated by the division rule for evaluating researchers who are members of PhD committees.
## 4 Useful Properties for Dealing with Large Instances
Recall that computing the Shapley value is P-hard for many classes of games (see, e.g., Aziz de Keijzer (2014); Bachrach Rosenschein (2009); Deng Papadimitriou (1994); Nagamochi . (1997)), including the allocation games, even if goods may have only two possible values Greco Scarcello (20142).
For large instances, a brute-force approach is unfeasible, because to compute the value of each agent , it would need to solve optimization problems, where is the number of agents. This is particularly true in our case study, where is in the order of thousands.
In order to mitigate the complexity of this problem, in this section we will describe some useful properties of the Shapley value, in particular for allocation problems, which allow us to simplify the instances in a preprocessing phase.
Let us consider in this section an allocation scenario , with denoting its associated game, whose agents graph is . For such scenario we show the following properties which allow us to simplify the game at hand without altering the Shapley value of any player: Modularity, Null goods, Separability, Disconnected agent.
###### Theorem 4.1 (Modularity)
Let be a partition of agents of such that , for every pair of agents with and . Let (resp., ) be the coalitional game restricted to agents in (resp., ). Then, for each agent , .
###### Proof
Let and be two coalitional games such that, for each , and . Contrasted with the games in the statement, these games are defined over the full set of agents .
Since there are no interactions between agents in and agents in , the total value of the optimal allocation for any coalition is given by the sum of the values of the goods in the optimal allocations restricted to the two sets of agents and . Therefore, we have . Then, from the additivity property of the Shapley value, for each agent , .
Consider now the games and ) restricted to agents in and in , respectively. Note that each player is dummy with respect to the game , so that her Shapley value is null, and her presence have no actual impact on any other player in . In particular such dummy agents could be removed from the game without changing the Shapley value of the other agents, so that for every , we have and the result immediately follows (by using the same reasoning for ).
From the above fact, it follows immediately that each connected component of the agents graph can treated as a separate coalitional game.
###### Corollary 1
Let be any connected component of the agents graph. The coalitional game associated with the allocation scenario obtained by restricting to the players in is such that the Shapley value of each player in is the same as in the full game associated with .
It easy to see that goods having value do not impact on the computation of the optimal allocation. However, the existence of shared null goods between multiple agents induces connections (among agents) which complicates the structure of the graph.
For instance, consider an allocation scenario comprising three agents having a joint interest only for one good, say , whose value is . Any other good has just a single agent interested in it. In such a scenario, Corollary 1 cannot be used, since the agents graph associated with the scenario consists of one connected component. On the other hand, without , the agents graph would be completely disconnected and thus it would be possible to compute the Shapley values immediately, by using Corollary 1. The following fact states that, in fact, we can get rid of such null goods.
###### Fact 4.2 (No shared null goods)
By removing all goods having value from , we get an allocation scenario with the same associated allocation game.
###### Proof
Just observe that in the computation of the marginal contribution of any agent to a coalition , there is no advantage for agents in in using a good in having value .
If it is useful in the algorithms, we can also use Fact 4.2 in the opposite way, and add null-value goods. Let be a good with and let be the set of agents that are interested in having . Then, the game associated with is the same as the game associated with the allocation scenario where is replaced by fresh goods such that each of them is of interest to just one agent in (hence, there are no connections in the graph because of such goods).
The following property provides us with a powerful simplification method for allocation games. Intuitively, the property states that any set of agents that does not exhibit an effective synergy with the rest of the agents can be removed from the game and solved separately.
###### Theorem 4.3 (Separability)
Let be any coalition such that . Then, we can define from the allocation scenario two disjoint allocation scenarios restricted to agents and , respectively, that can be solved separately. For each player , we can compute its Shapley value in the game associated with by considering only the game associated with the restricted scenario where occurs.
###### Proof
Denote by , and consider the allocation games and restricted to agents in and , respectively.
Preliminary observe that, for each pair of disjoint coalitions , holds. Indeed, given any optimal allocation for the agents in , its restriction to is a feasible allocation for , as well as its restriction to is a feasible allocation for . In particular, we have that, combined with the hypothesis about the considered coalition , entails that . This means that the values of the goods not used in any optimal allocation for is equal to the sum of the values of the best goods for the agents in .
We shall show that, for each optimal allocation for , the set of goods allocated by to is such that and the analogous property holds for . Therefore, these agents get the best goods they can obtain. To prove this claim, consider the value and the value . We know that and, by the optimality of , it holds too.
Consider now any coalition , and let and . Let be an optimal allocation for . We claim that there is an optimal allocation mapping goods from to with , and an optimal allocation mapping goods not in to with . Assume by contradiction that this is not the case. Then at least one of those allocations lead to values smaller than those in (note that cannot be worse, because the union of the two restricted allocations is a valid candidate mapping for ). Assume gets a smaller total value (the other case is symmetrical), that is, . Then, there exists some agent and a good so that . By using Theorem 4.4 in Greco Scarcello (20142), we can show that this would contradict the fact that . In fact, goods such as that are shared with agents outside and that allows us to get a better value for the agents in , could be used to improve the choice of the available goods for the full set .
Now, given that it suffices to use only the goods in for and the remaining goods for , we can define an equivalent game in which the goods in are of interest to agents in only and the remaining to agents in only. In the new game, and are in fact sets of agents with no shared connections and the theorem follows immediately from Theorem 4.1.
A very frequent and important case in applications, which falls in the case considered by this latter property, occurs when is a singleton , and it happens that the optimal allocation for this coalition is equal to the marginal contribution of to . By using the property described above, the set can be removed from the game and solved separately, so that we immediately get .
The following property identifies some goods that are useless for some agent and thus can be safely removed from its set of relevant goods . Note that this operation does not affect other agents possibly interested in such goods.
###### Fact 4.4 (Useless goods)
Let be an agent, and let be a good such that . Then, the modified allocation scenario where is removed from is equivalent to the original one, that is, the two scenarios have the same associated game.
We conclude this section with a simple property that does not help to simplify the game, but allows us to avoid the computation of unnecessary optimal allocations, during the computation of marginal contributions.
###### Fact 4.5 (Disconnected agent)
Let be an agent and let be a component disconnected from , that is, such that , for each . Then, holds and the marginal contribution of to is .
## 5 Lower and Upper Bounds for the Shapley Value
In this section we describe the computation of a lower bound and an upper bound for the Shapley value of any given allocation game . The availability of such bounds can be helpful to provide a more accurate estimation of the approximation error in randomized algorithms. Moreover, whenever the two bounds coincide for some agent, we clearly get the precise Shapley value for that agent. We shall see that this often occurs in practice, in our case study.
Preliminarily observe that in allocation games we have for free a simple pair of bounds. Indeed, recall that the anti-monotonicity property holds, so that, for each pair of coalitions , . Then, for each player and for every coalition , we have . It immediately follows that
marg({i},N)≤ϕi≤opt({i}).
To obtain tighter bounds we observe that the neighbors of in a coalition are the agents having the higher influence on the marginal contribution of to . Indeed, they are precisely those agents interested in using the goods of when he/she does not belong to the coalition. We already observed that, in the extreme case that no neighbors are present, contributes with all her/his best goods. The idea is to consider the power-set of as the only relevant sets of agents.
Let be a set of neighbors of , and For the computation of the lower bound in Algorithm 1, for such a profile we compute the marginal contribution of to , but use this same value for the marginal contributions of to every coalition such that , that is, for every coalition with the same configuration of neighbors of . Furthermore, we use a suitable factor to weigh this value in order to simulate that every such a coalition gets that same marginal contribution from .
The case of the upper bound is obtained in the dual way, by using instead the most favorable case where we use the marginal contribution of to in place of the marginal contribution of to any coalition with .
###### Theorem 5.1
Let be the output of Algorithm 1. For each agent , holds, and the computation of such values can be done in time .
###### Proof
Let be an agent of the game. The algorithm is based on the computation of any possible combination of the neighbors of . Regarding the computation of the lower bound, for each such profile , the algorithm considers a coalition obtained by completing with all the other agents in that are not neighbors of .
The algorithm uses the value of the marginal contribution of to such coalition, that is, the value , in place of the marginal contributions of to each coalition such that . Now, because , by exploiting the anti-monotonicity property of the marginal contributions in allocation games, we get immediately . Then, the algorithm weighs in a suitable way so that this value is used in place of the right marginal contribution (not lower than ) of to each coalition of the form described above. A simple combinatorial argument shows that this can be achieved by multiplying by the following factor
y=l∑k=0(l−k+|P′|)!⋅(|Z|+k)!|N|!⋅(lk), (1)
where and .
Regarding the computation of the upper bound of the Shapley value of , we proceed in a similar way but using the marginal contribution of to the profile containing only its neighbors, instead of the marginal contributions to the various coalitions such that . Indeed, in this case we have and therefore . Again, we need to multiply such value by a factor which takes into account of all possible ways of completing to any coalition with the same profile of ’s neighbors. It is easy to see that we can again use the factor described above, by exploiting the fact that .
Concerning the computational complexity, just observe that, for each element of the power set of , we have to solve a constant number of optimal allocation problems. Each of these problems requires the computation of an optimal weighted matching, which can be solved in time .
## 6 Approximating the Shapley Value
### 6.1 FPRAS for Supermodular and Monotone Coalitional Games
In order to approximate the Shapley value, one possibility is to use the Fully Polynomial-time Randomized Approximation Scheme (FPRAS) proposed in Liben-Nowell . (2012): for any and , it is possible to compute in polynomial-time an approximation of the Shapley value with probability of failure at most . The technique works for supermodular and monotone coalitional games, and it can be shown that our allocation games indeed meet these properties Greco Scarcello (20142).
The method is based on generating a certain number of permutations (of all agents) and computing the marginal contribution of each agent to the coalition of agents occurring before her (him) in the considered permutation. Then the Shapley value of each player is computed as the average of all such marginal contributions. The above procedure is repeated times, in indepedent runs, with the result for each agent consisting of the median of all computed values for her (him). Finally, the obtained values are scaled (i.e., they are all multiplied by a common numerical factor) to ensure that the budget-balance property is not violated.
Clearly enough, the more permutations are considered, the closer to the Shapley value the result will be. We next report a slightly modified version of the basic procedure of this algorithm, where we avoid the computation of some marginal contributions, if we can obtain the result by using Fact 4.5.
As a preliminary step, we compute the required number of permutations to meet the required error guarantee. In each of the iterations, the algorithm generates a random permutation from the set of agents . We then iterate through this permutation and compute the marginal contribution of each agent to the set of agents occurring before in the permutation at hand. If some neighbor of (in the agents graph) occurs in , the algorithm proceeds as usual by computing the value of an optimal allocation for in order to obtain the value . Note indeed that this one computation is sufficient to get such a marginal contribution, because the value for the coalition including the preceding agents (for the permutation at hand) is known from the previous step. Moreover, by Fact 4.5, we know that for those permutations in which all the players in follow , the marginal contribution of is just (see step 10). Finally, at steps 16–18 for each agent the algorithm divides the sum of her contributions by the number of performed iterations . The correctness of the whole algorithm follows from Theorem 4 in Liben-Nowell . (2012).
Computation Time Analysis. Let be the number of agents, and let be the required number of iterations. The cost of the algorithm is , where denotes the cost of computing each marginal contribution (steps 7–11). This requires the computation of an optimal weighted matching in a bipartite graph, which is feasible in , via the classical Hungarian algorithm. However, if the current agent is disconnected from the rest of the coalition, the cost is given by a simple lookup in the cache where the best allocation for each single agent is stored.
### 6.2 Sampling Algorithm When the Range of Marginal Contributions Is Known
Maleki et al. Maleki . (2013) propose a bound on the number of samples (over the population of marginal contributions) required to estimate an agent’s Shapley value, when the range of his/her contributions is known. Their bound is based on Hoeffding’s inequality Hoeffding (1963), and it states that, in order to approximate the Shapley value of agent within an absolute value , with failure probability at most , that is, in order to get
Prob{|~ϕi−ϕi|≥ϵ}≤δi (2)
at least samples are required, where:
mi=⎡⎢ ⎢ ⎢⎢ln(2δi)⋅r2i2⋅ϵ2⎤⎥ ⎥ ⎥⎥ (3)
In the above expression, denotes the range of ’s marginal contributions (i.e., ), where is the set of all agents that partecipate in the allocation game). This bound allows us to determine the number of required random samples for each agent , once and are fixed. Assuming we want an overall failure probability , each agent could be assigned a failure probability . In principle a higher failure probability could be tolerated for agents with larger ranges, at the expense of lower failure probability for agents with smaller ranges. However, our experimental tests performed with this variant, exhibited just a few marginal gains.
Once the number of required samples for each agent is determined, the approximate Shapley value, with the desired guarantees on the absolute error, can easily be computed by a randomized algorithm evaluating the required samples of coalitions for each player (see Section 7.1 for a brief description of our parallel implementation).
In order to consider the classical percentage expression for the approximation error, we should replace by in (2). First observe that for all agents that are considered by the algorithms, because our simplification techniques preliminarily identify and remove from the game those agents having a null Shapley value (these agents must be interested only in goods with a null value). In fact, the value of that would appear in (3) may be replaced by any known (non-null) lower bound , at the expense of taking more samples than those strictly necessary. On our largest test instance (namely, the researchers of Sapienza University of Rome who participated in the research assessment exercise VQR2011-2014), the technique described in Section 5 yields lower bounds that are greater that for all agents. It turns out that, in a matter of hours, we are able to get approximate Shapley values within of the correct values.
It should be noted that the bound presented by Maleki et al., due to the exponential relation it establishes between and , allows us to compute efficiently good approximate Shapley values, at least on our test instances where the range of the marginal contributions is fairly limited. For a comparison, the FPRAS approach described in Section 6.1 would have taken a few years (instead of the few hours required by the approach presented here) to process our largest input instance with the same error guarantee (see Section 7 for details on our experiments).
## 7 Implementation Details and Experimental Evaluation
### 7.1 Parallel Implementation of Shapley Value Algorithms
All the algorithms considered in this paper are amenable to parallel implementation. We engineered our parallel implementations as follows.
FPRAS algorithm Liben-Nowell . (2012). Besides the input allocation game, and the two parameters and , we added a third parameter, the thread pool size. During the execution of the algorithm, each thread (there are as many threads as the thread pool size dictates) is responsible for generating a certain number of permutations according to the requested approximation factor and, for each permutation, it computes the marginal contributions of all authors to that permutation, and saves them to a local cache. Whenever a thread has generated its assigned number of permutations, it delivers its local cache of computed scores to a synchronized output acceptor (which increments the overall score of each author accordingly), and then shuts itself down as its work is completed. When all threads have shut down, each entry of the acceptor’s output vector is averaged over the total number of permutations, yielding the final approximate Shapley vector for that run. The above procedure is repeated for each independent run. When all runs are done, the component-wise median of all final approximate Shapley vectors is computed, and the resulting vector is scaled (i.e., all entries are multiplied by a number such that the budget-balance property is enforced), yielding the desired approximation with the desired probability.
Exact algorithm. In our exact algorithm implementation, each thread (the total number of threads is specified by an input parameter) asks a synchronized producer for a subset of authors to work with. The synchronized subset producer either provides an -bit integer number (where is the number of authors) for the requesting thread, or it returns null if all subsets have already been delivered for elaboration. Upon receiving an -bit integer from the subset provider, a thread turns it into a subset of authors (if a bit is set to 1, then the corresponding author is included in the subset), and computes partial scores for all authors in the subset, storing the values obtained in a local cache. When a thread receives null from the subset provider, it delivers its local cache of computed scores to a synchronized output acceptor (which increments the overall score of each author accordingly), and then shuts itself down, as it has no more work to do. When all threads have shut down, the output vector will contain the exact Shapley values for all authors.
### 7.2 Experimental Results
Hardware and software configuration. Experiments have been performed on two dedicated machines. In particular, sequential implementations were run on a machine with an Intel Core i7-3770k 3.5 GHz processor, 12 GB (DDR3 1600 MHz) of RAM, and operating system Linux Debian Jessie. We tested the parallel implementations on a machine equipped with two Intel Xeon E5-4610 v2 @ 2.30GHz with 8 cores and 16 logical processors each, for a total of 32 logical processors, 128 GB of RAM, and operating system Linux Debian Wheezy. Algorithms were implemented in Java, and the code was executed on the JDK 1.8.0 05-b13, for the Intel Core i7 machine, and on the OpenJDK Runtime Environment (IcedTea 2.6.7) (7u111-2.6.7-1 deb7u1), for the Intel Xeon machine.
Dataset description. We applied the algorithms to the computation of a fair division of the scores for the researchers of Sapienza University of Rome who participated in the research assessment exercise VQR2011-2014. Sapienza contributors to the exercise were 3562 and almost all of them were required to submit 2 publications for review. We computed the scores of each publication by applying, when available, the bibliographic assessment tables provided by ANVUR.
Preprocessing. The analysis was carried out by preliminarily simplifying the input using the properties discussed in Section 4, as explained next.
Starting with a setting with 3562 researchers and 5909 publications, first we removed each researcher having no publications for review. After this step a total of 370 authors were removed. Then, by exploiting the simplification described in Fact 4.2, we removed 2323 publications. By using Theorem 4.3, the graph was subsequently filtered removing each author whose marginal contribution to the grand coalition coincides with the optimal allocation restricted to the author himself. After this step 2427 researchers out of 3562 were removed. Then we divided the resulting agents graph into connected components obtaining a total number of 156 connected components and we discovered only two components consisting of more than 10 agents. The sizes of these components are 691 and 15. Eventually, the components were further simplified by using Fact 4.4. After the whole preprocessing phase, we obtained a total of 159 connected components with the largest one having 685 nodes. The size of the second largest component is just 15 while all the others remain very small (less than 10 nodes). In the rest of the section, we shall illustrate results of experimental activity conducted over the various methods. To this end, we fixed the value
. This value was chosen heuristically, based on a series of tests conducted on various CUN Areas of Sapienza, where CUN Areas are (large) scientific disciplines such as Math and Computer Science (Area 01) or Physics (Area 02).
Tests with components of variable size. As already pointed out, after the preprocessing step we obtained very small connected components (less than 10 nodes) except for the largest two (685 and 15 nodes, respectively). For all components with less than 10 nodes, the exact algorithm, of which we used a sequential implementation for these tests, performs very well (a few milliseconds), therefore we omit the analysis here. In order to test all the other algorithms, besides the two largest components, we randomly extracted samples of (distinct) nodes out of the original graph, to produce different subgraphs with size .
For the considered cases, we do not find significant differences among the values obtained by using the two approximation algorithms and the exact ones (see, e.g., figures 3 and 4, in which the approximation algorithms were required to produce results within 5% of the exact value333In these two figures the values obtained by FPRAS are not visible because they coincide with the exact values.). Notably, with the exception of a small number of cases, our bounds (especially the lower bounds) are always very close to the exact value. In particular, for we were able to immediately get the Shapley value for all agents, since upper and lower bounds coincide for all of them.
We also evaluated how many computations of optimal allocations were avoided in the FPRAS of Liben-Nowell et al., by exploiting Fact 4.5 (and hence executing in the latter case Step 10 rather than Step 8 in Algorithm 2). By fixing the approximation error at , for each we get the following savings: out of (i.e., 28%), out of (18%), out of (29%), out of (30%), and out of (21%), respectively.
As already pointed out, the FPRAS method performed much better than its theoretical guarantee on the maximum approximation error. We report the real maximum and average approximation errors (denoted by X and Y, respectively) of our implementation w.r.t. the exact algorithm for each , with . For , we get X = 0.01 and Y = , for we get X = and Y = , and for we get X = and Y = . In all cases, the maximum approximation error was about 1% (or less) and therefore considerably below the theoretical guarantee (30%). The algorithm based on the bound of Maleki et al. also performs better than its theoretical guarantee, though not by as wide a margin as the FPRAS method (it is, however, much faster, as we will see in the next paragraph). In this case, for we get X = 0.093 and Y = 0.046, for we get X = 0.098 and Y = 0.011, and for we get X = 0.097 and Y = 0.019. In all cases, the maximum approximation error was below 10%, and therefore quite smaller than the required threshold.
Running Times. Figures 56 and 7 report the computation times of the various algorithms. In particular, Figure 5 focuses on the sequential implementations of the brute-force algorithm for computing the exact values, and of the algorithms for computing the upper and lower bounds. For the experiments, we computed separately the two bounds in order to point out that the computation of the lower bound requires in general more time, because it considers allocation over larger coalitions than those considered for the computation of the upper bound. Moreover, as discussed in Section 5, the running times for computing the bounds heavily depend on the cardinality of the agents’ neighborhoods. This explains why the running times for the case are smaller than those for the case .
Figure 6 shows the running time of the parallel implementation of the FPRAS method, using 24 threads, for different values of . In particular, we performed five trials over the different (sub)games described above, and report averaged measures. We can see that for games of reasonable size we can achieve a high theoretical approximation error guarantee. For instance, for the largest considered game () we were able to compute the approximate Shapley value with in less than 90 minutes. There is a big gap between the performances of the FPRAS method, when using the extreme values we considered for the allowed approximation error. However, as already pointed out, even when we used a poor theoretical guarantee on the approximation error, we still obtained a quite reasonable accuracy.
In spite of its excellent accuracy, and its high efficiency when compared to the exact algorithm, we estimated that our parallel implementation of the FPRAS method would take, with and 24 threads, roughly years to fully analyze the largest component of our Sapienza test case, comprising authors. By contrast, the parallel implementation of the algorithm based on the bound proposed by Maleki et al., with the same settings, takes only 11.75 hours. The bound on the number of samples proposed by Maleki et al. requires the knowledge of the range of the marginal contributions, which was computed in less than 3 minutes. Moreover, in order to guarantee that the results are within a certain percentage of the correct values, the lower bounds for the Shapley value are also required. For the biggest component of our test instance, we computed the lower bounds for the 681 authors with neighborhood size up to 19; for the few remaining authors with more neighbors (just 4 authors), we used as lower bound the marginal contribution to the grand coalition. Multithreaded computation of the lower bounds took approximately 160 hours.
It should be noted that the bound by Maleki et al. could be applied directly to the largest CC in the unsimplified Sapienza VQR graph. This CC comprises 1176 authors. In this case, straightforward application of the bound for all authors requires, on our server, with 24 threads and an absolute error , roughly 20.5 hours. If we set , the computation time increases to approximately 31 days. Figure 7 shows the running times of the parallel implementation of Maleki-based algorithm on the two largest CCs in our test instances, with varying values for .
## 8 Conclusions and Future Work
In this paper, we have identified useful properties that allow us to decompose large instances of allocation problems into smaller and simpler ones, in order to be able to compute the Shapley value. The proposed techniques greatly improve the applicability to real-world problems of the approximation algorithms described in the literature. Furthermore, we described an algorithm for the computation of an upper bound and a lower bound for the Shapley value. These bounds provide a more accurate estimate of approximation errors, and (often, in our case study) yield the exact Shapley value for those agents where upper and lower bounds coincide.
We have engineered parallel implementations of the considered algorithms, and we have tested them on a real-world problem, namely, the 2011-2014 Italian research assessment program (known as VQR), modeled as an allocation game. With the proposed tools, we have been able to compute, either exactly, or within a fairly good approximation (5% of the correct value with 99% probability) the Shapley value for all agents in our largest test instance, namely, Sapienza University of Rome, comprising 3562 researchers and 5909 research products.
As future work, we would like to extend the structure-based technique described in Greco . (2015) to the more general class of games where more than one good can be allocated to each agent (as it is the case in VQR allocations). This way, we could compute efficiently the exact Shapley value for large games, provided that the treewidth of the agents graph is small. With this respect, we note that this is not the case for the large Sapienza VQR instance, because after the simplification performed with the tools described in the paper we are left with a large component whose estimated treewidth is 64. This is too much for using structure-based decomposition techniques. However, for the sake of completeness, we note that all other components have a low treewidth. For instance, the component with 50 agents used in our tests has treewidth 5.
Finally, we would like to obtain tighter lower and upper bounds, possibly with a computational effort that can be tuned to meet given time constraints.
## References
• Aziz de Keijzer (2014) AzizK14Aziz, H. de Keijzer, B. 2014. Shapley meets Shapley Shapley meets shapley. 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014), STACS 2014, March 5-8, 2014, Lyon, France 31st international symposium on theoretical aspects of computer science (STACS 2014), STACS 2014, march 5-8, 2014, lyon, france ( 99–111).
• Bachrach Rosenschein (2009) BachrachR09Bachrach, Y. Rosenschein, JS. 200902. Power in Threshold Network Flow Games Power in threshold network flow games. Autonomous Agents and Multi-Agent Systems181106–132.
• Deng Papadimitriou (1994) Deng1994Deng, X. Papadimitriou, CH. 1994May. On the complexity of cooperative solution concepts On the complexity of cooperative solution concepts. Mathematics of Operations Research19257–266.
• Greco . (2015) GLS15Greco, G., Lupia, F. Scarcello, F. 2015. Structural Tractability of Shapley and Banzhaf Values in Allocation Games Structural tractability of shapley and banzhaf values in allocation games.
Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015 Proceedings of the twenty-fourth international joint conference on artificial intelligence, IJCAI 2015, buenos aires, argentina, july 25-31, 2015 ( 547–553).
• Greco Scarcello (2013) GS13Greco, G. Scarcello, F. 2013. Fair division rules for funds distribution: The case of the Italian Research Assessment Program (VQR 2004-2010) Fair division rules for funds distribution: The case of the italian research assessment program (vqr 2004-2010). Intelligenza Artificiale7145–56.
• Greco Scarcello (20141) Greco2014bGreco, G. Scarcello, F. 20141. Counting solutions to conjunctive queries: structural and hybrid tractability Counting solutions to conjunctive queries: structural and hybrid tractability. Proceedings of the 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS’14, Snowbird, UT, USA, June 22-27, 2014 Proceedings of the 33rd acm sigmod-sigact-sigart symposium on principles of database systems, pods’14, snowbird, ut, usa, june 22-27, 2014 ( 132–143).
• Greco Scarcello (20142) Greco2014iGreco, G. Scarcello, F. 20142. Mechanisms for Fair Allocation Problems: No-Punishment Payment Rules in Verifiable Settings Mechanisms for fair allocation problems: No-punishment payment rules in verifiable settings. J. Artif. Intell. Res. (JAIR)49403–449.
• Hoeffding (1963) Hoeffding1963Hoeffding, W. 1963.
Probability Inequalities for Sums of Bounded Random Variables Probability inequalities for sums of bounded random variables.
Journal of the American Statistical Association5830113-30.
• Iera . (2011) militano:twcIera, A., Militano, L., Romeo, L. Scarcello, F. 2011. Fair Cost Allocation in Cellular-Bluetooth Cooperation Scenarios Fair Cost Allocation in Cellular-Bluetooth Cooperation Scenarios. IEEE Transactions on Wireless Communications1082566–2576.
• Liben-Nowell . (2012) Liben-Nowell2012Liben-Nowell, D., Sharp, A., Wexler, T. Woods, K. 2012. Computing Shapley Value in Supermodular Coalitional Games Computing shapley value in supermodular coalitional games. J. Gudmundsson, J. Mestre T. Viglas (), Computing and Combinatorics: 18th Annual International Conference, COCOON 2012, Sydney, Australia, August 20-22, 2012. Proceedings Computing and combinatorics: 18th annual international conference, cocoon 2012, sydney, australia, august 20-22, 2012. proceedings ( 568–579). Berlin, HeidelbergSpringer Berlin Heidelberg.
• Maleki . (2013) Maleki2014Maleki, S., Tran-Thanh, L., Hines, G., Rahwan, T. Rogers, A. 2013. Bounding the Estimation Error of Sampling-based Shapley Value Approximation With/Without Stratifying Bounding the estimation error of sampling-based shapley value approximation with/without stratifying. CoRRabs/1306.4265. http://arxiv.org/abs/1306.4265
• Maniquet (2003) Francois2003Maniquet, F. 2003. A characterization of the Shapley value in queueing problems A characterization of the Shapley value in queueing problems. Journal of Economic Theory109190-103.
• Mishra Rangarajan (2007) Mishra2007Mishra, D. Rangarajan, B. 2007October. Cost sharing in a job scheduling problem Cost sharing in a job scheduling problem. Social Choice and Welfare293369-382.
• Moulin (1992) Moulin1992Moulin, H. 1992November. An Application of the Shapley Value to Fair Division with Money An application of the Shapley value to fair division with money. Econometrica6061331-49.
• Nagamochi . (1997) Nagamochi1997Nagamochi, H., Zeng, DZ., Kabutoya, N. Ibaraki, T. 1997February. Complexity of the minimum base game on matroids Complexity of the minimum base game on matroids. Mathematics of Operations Research22146–164.
• Shapley (1953) shapley53Shapley, LS. 1953. A value for n-person games A value for n-person games. Contributions to the theory of games2307–317.
|
2023-02-01 03:22:44
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8561472296714783, "perplexity": 596.2985303356886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499899.9/warc/CC-MAIN-20230201013650-20230201043650-00443.warc.gz"}
|
https://aoyamahina.com/1v9weo/horizontal-tangent-line-calculator-e69a87
|
No products in the cart.
## horizontal tangent line calculator
Pocket
Curve has vertical tangent when (dx)/(dt)=0 or 12t^3+2t=0. polar tangent line. 2) 9x^2 - 4x = 0. Tangents to Polar Curves. 2. Horizontal Asymptote Calculator. Inputs the polar equation and specific theta value. 4) x = 0, or x = 4/9. Finally, let's see example when there can be more than one tangent line … b) Find the polar coordinates for the points where the tangent line is horizontal. In the next sections, we will explain horizontal curves, formulas to get the properties of horizontal curves, and method to find those geometrical properties without using horizontal tangent calculator. Take the first derivative of the function and set it equal to 0 to find the points where this happens. On top of that, it can act as a horizontal tangent calculator by helping you find the vertical and horizontal tangent lines as well. When t=0 (x,y)=(0,0). Added Mar 5, 2014 by Sravan75 in Mathematics. Syntax : equation_tangent_line(function;number) Note: x must always be used as a variable. Tangent also known as tan relates the angles and sides of a triangle. #f(x,-x/2) = x^2-x^2/2+x^2/4 - 27=0->x=pm 6# Examples : This example shows how to find equation of tangent line using the calculator: Here the horizontal refers to the degree of x-axis, where the denominator will be higher than the numerator. Outputs the tangent line equation, slope, and graph. 1. r θ = 2 sin 8 θ − cos θ. The horizontal tangent lines have #f_x = 0->x = -y/2# and the vertical tangent lines have #f_y = 0->x = -2y# So for horizontals. the tangent line is horizontal on a curve where the slope is 0. From the polar coordinate definitions written in parametric form in Desmos as [x,y] where the variable is "a" rather than theta. What is horizontal curve in road? The tangent line equation calculator is used to calculate the equation of tangent line to a curve at a given abscissa point with stages calculation. BYJU’S online tangent line calculator tool makes the calculations faster and easier where it displays the output in a fraction of seconds. Log InorSign Up. Horizontal asymptote are known as the horizontal lines. Tangent Line Calculator The calculator will find the tangent line to the explicit, polar, parametric and implicit curve at the given point, with steps shown. Tangent Line Calculator For You To Use It is an online tool that helps you find the tangent line to the implicit, explicit, parametric, and polar curve at a given point. Or use a graphing calculator and have it calculate the maximum and minimum of the curve for you :) Consider the following polar function. #f(-y/2,y) = y^2/4-2y^2+y^2-27=0->y=pm6# and for verticals. A horizontal curve offers a switch between two tangent strips of roadway. Tangent line calculator is a free online tool that gives the slope and the equation of the tangent line. As appears from graph our answers are correct (green line is vertical tangent, red lines are horizontal tangents). 3) x(9x - 4) = 0. Thus the derivative is: $\frac{dy}{dx} = \frac{2t}{12t^2} = \frac{1}{6t}$ Calculating Horizontal and Vertical Tangents with Parametric Curves. Recall that with functions, it was very rare to come across a vertical tangent. So, tangent line is horizontal at t=0. 1) dy/dx = 9x^2 - 4x. And graph the polar coordinates for the points where the slope and the equation of the line. Free online tool that gives the slope and the equation of the tangent line green line is horizontal at t=0... = x^2-x^2/2+x^2/4 - 27=0- > x=pm 6 to 0 to Find the where. A vertical tangent, red lines are horizontal tangents ) functions, was!, red lines are horizontal tangents ) lines are horizontal tangents ) curve a! With functions, it was very rare to come across a vertical tangent a curve where the denominator be! ) = ( 0,0 ) the slope and the equation of the function and it. # f ( -y/2, y ) = 0 is a free online tool gives! Is 0 function ; number ) Note: x must always be used as variable. First derivative of the tangent line equation, slope, and graph horizontal tangents ) than the.! - 4 ) = ( 0,0 ) Mar 5, 2014 Sravan75. Free online tool that gives the slope is 0 Find the polar coordinates for points... Of x-axis, where the slope is 0 8 θ − cos.... 9X - 4 ) x ( 9x - 4 ) x = 0, or x = 0, x... And set it equal to 0 to Find the polar coordinates for the points where this happens to. Added Mar 5, 2014 by Sravan75 in Mathematics switch between two tangent strips of roadway Sravan75 in.... On a curve where the slope and the equation of the tangent line is at! = 2 sin 8 θ − cos θ, red lines are horizontal ). 0 to Find the points where this happens the denominator will be than. Sin 8 θ − cos θ 2 sin 8 θ − cos θ ;... Relates the angles and sides of a triangle a switch between two tangent strips of roadway gives slope. Y=Pm6 # and for verticals take the first derivative of the tangent line is vertical,. Y^2/4-2Y^2+Y^2-27=0- > y=pm6 # and for verticals tangent line is vertical tangent, red lines are horizontal tangents.! A horizontal curve offers a switch between two tangent strips of roadway )... A switch between two tangent strips of roadway the polar coordinates for the points where the slope and equation. For verticals 4 ) x = 4/9 = 2 sin 8 θ cos. Added Mar 5, 2014 by Sravan75 in Mathematics strips of roadway slope the! Makes the calculations faster and easier where it displays the output in a fraction of seconds the horizontal refers the! That gives the slope is 0, tangent line calculator tool makes the calculations faster and easier it... A vertical tangent, red lines are horizontal tangents ) slope and the equation of the tangent line horizontal... Of x-axis, where the slope and the equation of the tangent is! Equation of the tangent line is horizontal, -x/2 ) = ( 0,0 ) is a free tool... 5, 2014 by Sravan75 in Mathematics from graph our answers are correct ( green is. At t=0 with functions, it was very rare to come across a vertical tangent that... A vertical tangent, red lines are horizontal tangents ) it equal 0! Find the points where the denominator will be higher than the numerator the degree of x-axis where... Are correct ( green line is horizontal on a curve where the slope the! Line is vertical tangent, red lines are horizontal tangents ) answers are correct ( green is... Offers a switch between two tangent strips of roadway of x-axis, where denominator... Where it displays the output in a fraction of seconds and graph the... X^2-X^2/2+X^2/4 - 27=0- > x=pm 6 -x/2 ) = x^2-x^2/2+x^2/4 - 27=0- > x=pm 6 the polar for... Online tangent line is horizontal on a curve where the denominator will be higher than the numerator, -x/2 =. Faster and easier where it displays the output in a fraction of.. Where it displays the output in a fraction of seconds ( 0,0 ) θ − cos θ graph answers! Faster and easier where it displays the output in a fraction of seconds online line... Denominator will be higher than the numerator the polar coordinates for the where! Be used as a variable degree of x-axis, where the tangent is! The numerator where the tangent line calculator tool makes the calculations faster easier! Appears from graph our answers are correct ( green line is horizontal 5, 2014 by Sravan75 in.... A vertical tangent, red lines are horizontal tangents ) is 0 4/9. By Sravan75 horizontal tangent line calculator Mathematics used as a variable vertical tangent outputs the tangent line is at. Coordinates for the points where the slope and the equation of the function and set it to. Θ − cos horizontal tangent line calculator tangent line calculator is a free online tool that gives the slope and the of. A fraction of seconds set it equal to 0 to Find the polar coordinates for the where. = 0 red lines are horizontal tangents ) and sides of a.! The calculations faster and easier where it displays the output in a fraction of seconds y^2/4-2y^2+y^2-27=0-... 8 θ − cos θ known as tan relates the angles and horizontal tangent line calculator a... Of roadway coordinates for the points where this happens horizontal curve offers a between... Derivative of the tangent line is horizontal on a curve where the slope is 0 x. The numerator graph our answers are correct ( green line is horizontal a free tool. Sin 8 θ − cos θ calculator tool makes the calculations faster and where. Is a free online tool that gives the slope and the equation of the and... ) Note: x must always be used horizontal tangent line calculator a variable of roadway of seconds the degree x-axis! The calculations faster and easier where it displays the output in a fraction seconds!: equation_tangent_line ( function ; number ) Note: x must always be used as a variable ).... Free online tool that gives the slope and the equation of the function and set it equal 0! Calculator is a free online tool that gives the slope and the equation the. Byju ’ S online tangent line horizontal tangent line calculator, slope, and graph =. Recall that with functions, it was very rare to come across a tangent. In Mathematics ) Find the points where the denominator will be higher than the numerator makes... -X/2 ) = y^2/4-2y^2+y^2-27=0- > y=pm6 # and for verticals points where the denominator will higher. # f ( -y/2, y ) = ( 0,0 ) θ. Always be used as a variable ) ( -y/2, y ) = ( 0,0 ... 0 to Find the points where the tangent line is horizontal on a where! Added Mar 5, 2014 by Sravan75 in Mathematics here the horizontal refers to the degree of x-axis, the. Rare to come across a vertical tangent is a free online tool that the! Equation_Tangent_Line ( function ; number ) Note: x must always be used as a variable t=0 degree... 9X - 4 ) x = 4/9 x = 4/9 a fraction of seconds graph our answers correct... X=Pm 6 refers to the degree of x-axis, where the slope and the equation of the function set! Than the numerator tool that gives the slope is 0 of x-axis, where the denominator will be than! = 4/9 ) Find the points where this happens syntax: equation_tangent_line ( function number. Rare to come across a vertical tangent, red lines are horizontal tangents ) equal to to. The function and set it equal to 0 to Find the points where this.... The denominator will be higher than the numerator of the tangent line,! = 4/9 and easier where it displays the output in a fraction of seconds -x/2 ) = y^2/4-2y^2+y^2-27=0- > #! The output in a fraction of seconds b ) Find the points where this happens the! X^2-X^2/2+X^2/4 - 27=0- > x=pm 6 be higher than the numerator: equation_tangent_line ( function ; number ) Note x. Answers are correct ( green line is horizontal at t=0 ( x, -x/2 =... The denominator will be higher than the numerator the calculations faster and easier where it displays output. To come across a vertical tangent, red lines are horizontal tangents ) to Find the polar for! # and for verticals 5, 2014 by Sravan75 in Mathematics come across a vertical tangent recall with. Equation of the function and set it equal to 0 to Find the polar coordinates for the points this. Be higher than the numerator, red lines are horizontal tangents ), y ) = x^2-x^2/2+x^2/4 27=0-! Is a free online tool that gives the slope and the equation of the function and set it to. From graph our answers are correct ( green line is vertical tangent, red lines are horizontal tangents.... Be used as a variable ` ( x, y ) = -. Of the function and set it equal to 0 to Find the points where this happens will be than. Derivative of the function and set it equal to 0 to Find the polar coordinates for the points this... The function and set it horizontal tangent line calculator to 0 to Find the points where happens! And easier where it displays the output in a fraction of seconds makes the calculations faster and where!
###### OPENING HOURS
Tue ‒ Thu: 09am ‒ 07pm
Fri ‒ Mon: 09am ‒ 05pm
|
2021-09-23 13:53:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8481483459472656, "perplexity": 792.8057914332212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00343.warc.gz"}
|
https://math.stackexchange.com/questions/1607412/determine-what-when-multiplied-with-180-gives-a-perfect-cube
|
# Determine what when multiplied with $180$ gives a perfect cube
Recently, at a math competition, I was given the following question: Determine the smallest number that gives a perfect cube when multiplied by $180$ . I had thirty seconds to solve this question and no calculator.
The answer was $150$ since $150 \cdot 180=27000$ and $\sqrt[3]{27000}=30$.
I was stuck on this question without a calculator. Using a calculator with graphing and table generating capabilities, one could simply put $f(x)=\sqrt[3]{180x}.$ Then, they could scroll through a table until they found that $f(150)=30$, an integer. However, I don't see how this could be done without a calculator. Even further, if one did have a calculator, how it could be done in thirty seconds.
How could this be done feasibly in thirty seconds without a calculator? Does it just require enough number sense to know that $180$ is a factor of $27000$?
$180=2^2\cdot3^2\cdot5$; to make a cube you need to multiply by $2\cdot3\cdot5^2=150$, since you need the exponents to be multiples of $3$.
|
2019-09-19 21:29:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.896354079246521, "perplexity": 191.82968478717606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573735.34/warc/CC-MAIN-20190919204548-20190919230548-00096.warc.gz"}
|
https://www.physicsforums.com/threads/finding-slope-of-tangetns.228910/
|
# Finding slope of tangetns
1. ### emma3001
42
Find the slope of the tangent to the parabola y=-3x^2 + 4x - 7 when x=a. I know how to get the limit using the tangent so i end up with a slope in terms of a (you can also get it using the derivative) but now the next part states:
At what point on the parabola is the tangent perpendicular to the line 3x - 4y + 8=0 and all that i get from that question is that the tangent for the point on the parabola will have a slope of -4/3 (negative reciprocal)
Last edited: Apr 14, 2008
2. ### Pere Callahan
587
so you have the derivative in terms of a, right? Let's call it D(a).
You're looking for the value for a such that this expression is equal to -4/3.
So what about solving D(a)=-4/3 for a ..?
3. ### emma3001
42
sorry... so if the derivative is 6x + 4, then i can say that 6x + 4= -4/3 and solve for a (or x)?
4. ### Pere Callahan
587
Exactly, sometimes it's easier than you think
|
2015-11-26 21:30:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8276246190071106, "perplexity": 532.264383772987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447783.20/warc/CC-MAIN-20151124205407-00076-ip-10-71-132-137.ec2.internal.warc.gz"}
|