url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://cs.stackexchange.com/questions/59520/why-isnt-a-binary-counter-used-for-generating-k-combinations
|
# Why isn't a binary counter used for generating $k$-combinations?
I have been looking at algorithms that allow you to generate $k$-combinations of a given string.
My question is that why isn't the algorithm used to generate power sets used to generate the combinations of a given string? That is, we can just use a binary counter of length $n$ with $k$ ones set to select $k$ elements from a length $n$ string. The method is very simple.
I mean, the power set can represent all $k$-combinations of the input. So why use complex algorithms like the ones described here (e.g., Gray codes)?
For concreteness, suppose you have a string $S$ of length $n = 10$. You want to generate all $k$-combinations of $S$ for some $k$, say $k = 3$. When you use a binary counter (using $n$ bits), you go through $2^n = 2^{10} = 1024$ values, corresponding to all possible subsets. On the other hand, there are only ${10 \choose 3} = 120$ possible 3-combinations choosable from $S$. So wouldn't it be nice if you could only perform 120 steps instead of 1024? I encourage you to play around with larger numbers, and consider how quickly these numbers actually grow.
|
2021-09-18 02:45:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7273449301719666, "perplexity": 207.31792116173528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00167.warc.gz"}
|
https://math.stackexchange.com/questions/360195/if-lim-limits-x-to-infty-fx0-then-lim-limits-x-to-infty-fx
|
# If $\lim\limits_{x \to \infty} f'(x)>0$ then $\lim\limits_{x \to \infty} f(x)= \infty$
I want to show that:
If $\lim\limits_{x \to \infty} f'(x)>0$ then $\lim\limits_{x \to \infty} f(x)= \infty$
My attempt: Since $\lim\limits_{x \to \infty} f'(x)>0$ there is some $\delta >0$ such that $\forall x>y\geq \delta \implies f(x)>f(y)$ i.e $f$ is increasing on $[\delta,\infty)$.
How to proceed?
When $\lim_{x\to \infty} f'(x)>0$ then there is a $x_0$ such that $f'(x)>\varepsilon$ for all $x>x_0$ and some $\varepsilon>0$ with the mean value theorem we have $$f(x)-f(x_0)=f'(\xi) (x-x_0)$$ $x-x_0$ is not bounded and $f'(\xi)$ is strictly bigger zero, hence the righthand side will go to infinity when $x$ goes to infinity.
Intuitively, if $\lim _{x\to \infty }f'(x)>0$, then for all large enough $x$ holds that $f'(x)>c>0$ for some constant $c$. That means that the slope of $f$ is at least $c$, and thus that the functions grows at least as fast as $cx$ does. Since $\lim _{x\to \infty }cx=\infty$, the result follows.
Now, take this intuitive proof and turn it into an actual proof by making all the steps precise. You'll want to use the mean value theorem to get a precise relation between values of $f$ and values of $f'$.
|
2021-08-02 02:20:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9787651896476746, "perplexity": 36.80889122442766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154302.46/warc/CC-MAIN-20210802012641-20210802042641-00601.warc.gz"}
|
https://yufree.cn/metaworkflow/exprimental-designdoe.html
|
# Chapter 2 Exprimental design(DoE)
Before you perform any metabolomic studies, a clean and meaningful experimental design is the best start. You need at least two groups: treated group and control group. Also you could treat this group infomation as the one primary variable or primary variables to be explored for certain research purposes.
The numbers of samples in each group should be carefully calculated. Supposing the metaoblites of certain biological process only have a few metabolites, the first goal of the exprimenal design is to find the differences of each metabolite in different group. For each metabolite, such comparision could be treated as one t-test. You need to perform a Power analysis to get the numbers. For example, we have two groups of samples with 10 samples in each group. Then we set the power at 0.9, which means 1 minus Type II error probability, the standard deviation at 1 and the significance level(Type 1 error probability) at 0.05. Then we get the meanful delta between the two groups should be higher than 1.53367 under this experiment design. Also we could set the delta to get the minimized numbers of the samples in each group. To get those data such as the standard deviation or delta for power analysis, you need to perform pre-experiments.
power.t.test(n=10,sd=1,sig.level = 0.05,power = 0.9)
##
## Two-sample t test power calculation
##
## n = 10
## delta = 1.53367
## sd = 1
## sig.level = 0.05
## power = 0.9
## alternative = two.sided
##
## NOTE: n is number in *each* group
power.t.test(delta = 5,sd=1,sig.level = 0.05,power = 0.9)
##
## Two-sample t test power calculation
##
## n = 2.328877
## delta = 5
## sd = 1
## sig.level = 0.05
## power = 0.9
## alternative = two.sided
##
## NOTE: n is number in *each* group
However, since sometimes we could not perform prelimintery experiment, we could directly compute the power based on false discovery rate control. If the power is lower than certain value, say 0.8, we just exclude this peak as significant features. Other study Blaise et al. (2016) show a method based on simulation to estimate the sample size. They usd BY correction to limit the influnces from correlationship. However, the nature of omics study make the power analysis hard to use one numbers and all the methods are trying to find a balance to represent more peaks with least samples(save money).
If there are other co-factors, a linear model or randomizing would be applied to eliminated their influences. You need to record the values of those co-factors for further data analysis. Common co-factors in metabolomic studies are age, gender, location, etc.
If you need data correction, some background or calibration samples are required. However, control samples could also be used for data correction in certain DoE.
Another important factors are instrumentals. High-resolution mass spectrum is always preferred. As shown in Lukas’s study Najdekr et al. (2016):
the most effective mass resolving powers for profiling analyses of metabolite rich biofluids on the Orbitrap Elite were around 60000–120000 fwhm to retrieve the highest amount of information. The region between 400–800 m/z was influenced the most by resolution.
However, elimination of peaks with high RSD% within group were always omited by most study. Based on pre-experiment, you could get a description of RSD% distribution and set cut-off to use stable peaks for further data analysis. To my knowledge, 50% is suitable considering the batch effects.
## 2.1 Software
• MetSizeR GUI Tool for Estimating Sample Sizes for Metabolomic Experiments.
### References
Blaise, Benjamin J., Gonçalo Correia, Adrienne Tin, J. Hunter Young, Anne-Claire Vergnaud, Matthew Lewis, Jake T. M. Pearce, et al. 2016. “Power Analysis and Sample Size Determination in Metabolic Phenotyping.” Anal. Chem. 88 (10): 5179–88. doi:10.1021/acs.analchem.6b00188.
Najdekr, Lukáš, David Friedecký, Ralf Tautenhahn, Tomáš Pluskal, Junhua Wang, Yingying Huang, and Tomáš Adam. 2016. “Influence of Mass Resolving Power in Orbital Ion-Trap Mass Spectrometry-Based Metabolomics.” Anal. Chem. 88 (23): 11429–35. doi:10.1021/acs.analchem.6b02319.
|
2019-02-18 11:24:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6504257321357727, "perplexity": 3438.041764410776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484928.52/warc/CC-MAIN-20190218094308-20190218120308-00426.warc.gz"}
|
https://www.intechopen.com/chapters/52748
|
Open access peer-reviewed chapter
# Oscillation Criteria for Second‐Order Neutral Damped Differential Equations with Delay Argument
By Said R. Grace and Irena Jadlovská
Submitted: May 4th 2016Reviewed: September 21st 2016Published: March 15th 2017
DOI: 10.5772/65909
## Abstract
The chapter is devoted to study the oscillation of all solutions to second‐order nonlinear neutral damped differential equations with delay argument. New oscillation criteria are obtained by employing a refinement of the generalized Riccati transformations and integral averaging techniques.
### Keywords
• neutral differential equation
• damping
• delay
• second‐order
• generalized Riccati technique
• oscillation
## 1. Introduction
In the chapter, we are mainly concerned with the oscillatory behavior of solutions to second‐order nonlinear neutral damped differential equations with delay argument of the form
(r(t)(z(t))α)+p(t)(z(t))α+q(t)f(x(σ(t)))=0,tt0,E1
where α1is a quotient of positive odd integers and
z(t)=x(t)+a(t)x(τ(t)).E2
Throughout, we suppose that the following hypotheses hold:
1. r,p,qC(,+), where =[t0,)and +=(0,);
2. aC(,), 0a(t)1;
3. τC(,), τ(t)t, τ(t)as t;
4. σC1(,), σ(t)t, σ(t)0, σ(t)as t;
5. fC(,), such that xf(x)>0and f(x)/xβk>0for x=0, where kis a constant and βis the ratio of odd positive integers.
By a solution of Eq. (1), we mean a nontrivial real‐valued function x(t), which has the property z(t)C1([Tx,)), r(t)(z(t))αC1([Tx,)), Txt0, and satisfies Eq. (1) on [Tx,). In the sequel, we will restrict our attention to those solutions x(t)of Eq. (1) that satisfy the condition
sup{|x(t)|:Tt<}>0forTTx.E3
We make the standing hypothesis that Eq. (1) admits such a solution. As is customary, a solution of Eq. (1) is said to be oscillatory if it is neither eventually positive nor eventually negative on [Tx,)and otherwise, it is termed nonoscillatory. The equation itself is called oscillatory if all its solutions are oscillatory.
Remark 1.All the functional inequalities considered in the sequel are assumed to hold eventually, that is, they are satisfied for all tlarge enough.
Oscillation theory was created in 1836 with a paper of Jacques Charles François Sturm published in Journal des Mathematiqués Pures et Appliqueés. His long and detailed memoir [1] was one of the first contributions in Liouville's newly founded journal and initiated a whole new research into the qualitative analysis of differential equations. Heretofore, the theory of differential equations was primarily about finding solutions of a given equation and so was very limited. Contrarily, the main idea of Sturm was to obtain geometric properties of solutions (such as sign changes, zeros, boundaries, and oscillation) directly from the differential equation, without benefit of solutions themselves.
Henceforth, the oscillation theory for ordinary differential equations has undergone a significant development. Nowadays, it is considered as coherent, self‐contained domain in the qualitative theory of differential equations that is turning mainly toward the study of solution properties of functional differential equations (FDEs).
The problem of obtaining sufficient conditions for asymptotic and oscillatory properties of different classes of FDEs has experienced long‐term interest of many researchers. This is caused by the fact that differential equations, especially those with deviating argument, are deemed to be adequate in modeling of the countless processes in all areas of science. For a summary of the most significant efforts and recent findings in the oscillation theory of FDEs and vast bibliography therein, we refer the reader to the excellent monographs [26].
In a neutral delay differential equation the highest‐order derivative of the unknown function appears both with and without delay. The study of qualitative properties of solutions of such equations has, besides its theoretical interest, significant practical importance. This is due to the fact that neutral differential equations arise in various phenomena including problems concerning electric networks containing lossless transmission lines (as in high‐speed computers where such lines are used to interconnect switching circuits), in the study of vibrating masses attached to an elastic bar or in the solution of variational problems with time delays. We refer the reader to the monograph [7] for further applications in science and technology.
So far, most of the results obtained in the literature has centered around the special undampedform of Eq. (1), i.e., when p(t)=0(for example, see Refs. [818]). For instance, in one of the pioneering works on the subject, Grammatikopoulos et al. [8] studied the second‐order neutral differential equation with constant delay of the form
(x(t)+a(t)x(tτ)+q(t)x(tτ)=0E4
and proved that Eq. (4) is oscillatory if
t0q(s)(1a(sτ))ds=.E5
Later on, Grace and Lalli [9] extended the results from [8] to the more general equation
(r(t)(x(t)+a(t)x(tτ))+q(t)f(x(tτ))=0,E6
with
f(x)xk,k>0andt0dsr(s)=E7
and showed that Eq. (6) is oscillatory if there exists a continuously differentiable function ρ(t)such that
t0(ρ(s)q(s)(1a(sτ))(ρ(s))2r(sτ)4kρ(s))ds=.E8
In Ref. [10], Dong has involved to study the oscillation problem for a half‐linear case of Eq. (1) and by defining a sequence of continuous functions has obtained various kinds of better results. Afterward, his approach has been further developed by several authors, see, e.g., [1114]. However, it appears that very little is known regarding the oscillation of Eq. (1) with p(t)0and αβ. Motivated by the results of Ref. [10], this chapter presents some new oscillation criteria, which are applicable on Eq. (1).
On the other hand, Eq. (1) can be considered as a natural generalization of the second‐order delay differential equation of the form
(r(t)(x(t))α)+p(t)(x(t))α+q(t)f(x(σ(t)))=0.E9
Very recently, the authors of [19] studied the oscillation problem of Eq. (9) with p(t)=0and α=β. Their ideas, which are based on careful investigation of classical techniques covering Riccati transformations and integral averages, will be extended to the more general equation (1).
## 2. Main results
For the simplicity and without further mention, we use the following notations:
A(t)=exp(t0tp(s)r(s)ds),Q(t)=kq(t)(1a(σ(t)))β,E10
R(t)=t(A(s)r(s))1αds,Q˜(t)=q(t)(1a(σ(t))R(τ(σ(t)))R(σ(t)))β,E11
P(t)=ϕ(t)ϕ(t)p(t)r(t),q˜(t)=Q(t)+p(t)A(t)r(t)tQ(s)A(s)ds,E12
where ϕ(t)C1(,)is a given function and will be specified later.
The organization of this chapter is as follows. Before stating our main results, we present two lemmas that ensure that any solution x(t)of Eq. (1) satisfies the condition
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,E13
for tsufficiently large. Next, we get our main oscillation results for Eq. (1) by employing the generalized Riccati transformations and integral averaging techniques. We base our arguments on the assumption that the function P(t)is positive or negative.
Lemma 1.Assume that
t0(A(s)r(s))1αds=E14
holds and Eq. (1) has a positive solution x(t)on . Then there exists a T, sufficiently large, such that
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,E15
on [T,).
Proof.Since, x(t)is a positive solution of Eq. (1) on , then, by the assumptions (iii)and (iv), there exists a t1such that x(τ(t))>0and x(σ(t))>0on [t1,). Define the function z(t)as in Eq. (2). Then it is easy to see that z(t)x(t)>0, for tt1, and at the same time, from Eq. (1), we get
(r(t)(z(t))α)+p(t)(z(t))α=q(t)f(x(σ(t)))<0.E16
We assert that r(t)A(t)(z(t))αis decreasing. Clearly, by writing the left‐hand side of Eq. (16) in the form
(r(t)(z(t))α)+p(t)r(t)r(t)(z(t))α<0,E17
we get
(r(t)A(t)(z(t))α)=q(t)A(t)f(x(σ(t)))<0E18
and so the assertion is proved.
Now, we claim that z(t)>0on [t1,). If not, then there exists t2[t1,)such that z(t2)<0. Using the fact that r(t)A(t)(z(t))αis decreasing, we obtain, for tt2,
r(t)A(t)(z(t))α<c:=r(t2)A(t2)(z(t2))α<0.E19
Integrating the above inequality from t2to t, we find that
z(t)<z(t2)+c1αt2t(A(s)r(s))1αdsE20
for tt2.By condition (14), z(t)approaches to as t, which contradicts the fact that z(t)is eventually positive. Therefore, z(t)>0and from Eq. (1), we have that (r(t)(z(t))α)<0. The proof is complete.
Lemma 2.Assume that
t0(A(u)r(u)t0uQ˜(s)Rβ(σ(s))A(s)ds)1αdu=,E21
holds and Eq. (1) has a positive solution x(t)on . Then there exists T, sufficiently large, such that
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,E22
on [T,).
Proof.Similarly to the proof of Lemma 1, we assume that there exists t2such that z(t)<0on [t2,). Taking Eq. (18) into account, we have
z(s)(r(t)A(t)A(s)r(s))1αz(t),E23
for stt2. Integrating the above inequality from tto t, ttt2, we get
z(t)z(t)+(r(t)A(t))1αz(t)tt(r(s)A(s))1αds.E24
Letting t, we have
z(t)R(t)(r(t)A(t))1αz(t),E25
which yields
(z(t)R(t))0E26
and hence we see that z(t)R(t)is nondecreasing. By Eq. (2) and (iii), we have
x(t)=z(t)a(t)x(τ(t))z(t)a(t)z(τ(t))(1a(t)R(τ(t))R(t))z(t),E27
which together with Eq. (1) and the assumption (v)yields
(r(t)(z(t))α)+p(t)(z(t))αkq(t)(1a(σ(t))R(τ(σ(t)))R(σ(t)))βzβ(σ(t))=kQ˜(t)zβ(σ(t)).E28
On the other hand, from Eq. (23), we have
r(t)(z(t))αA(t)r(t2)(z(t2))αA(t2),E29
that is,
r(t)A(t)(z(t))αr(t2)A(t2)(z(t2))α:=γαE30
for some positive constant γ. Setting Eq. (30) into Eq. (25), we obtain
z(t)γR(t)E31
and so, Eq. (28) becomes
(r(t)(z(t))α)+p(t)(z(t))αγ˜Q˜(t)Rβ(σ(t)),E32
where γ˜:=kγβ. Now, if we define the function
U(t)=r(t)(z(t))α>0,E33
then
U(t)+p(t)r(t)U(t)γ˜Q˜(t)Rβ(σ(t)),E34
or, equally
(U(t)A(t))γ˜Q˜(t)Rβ(σ(t))A(t).E35
Integrating the above inequality from t2to t, we get
U(t)γ˜A(t)t2tQ˜(s)Rβ(σ(s))A(s)dsE36
or
r(t)(z(t))αγ˜A(t)t2tQ˜(s)Rβ(σ(s))A(s)ds.E37
It follows from this last inequality that
0<z(t)z(t2)γ˜t2t(A(u)r(u)t2uQ˜(s)Rβ(σ(s))A(s)ds)1αduE38
for tt2.As t, then by condition Eq. (21), z(t)approaches to , which contradicts the fact that z(t)is eventually positive. Therefore, z(t)>0and from Eq. (1), we have (r(t)(z(t))α)<0. The proof is complete.
Lemma 3.Assume that
t0(A(u)r(u)t0uQ˜(s)A(s)ds)1αdu=,E39
holds and Eq. (1) has a positive solution x(t)on . Then there exists T, sufficiently large, such that either
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,E40
on [T,)or limtx(t)=0.
Proof.As in the proof of Lemma 1, we assume that there exists t2such that z(t)<0on [t2,). So, z(t)is decreasing and
limtz(t)=:b0E41
exists. Therefore, there exists t3[t2,)such that
z(σ(t))>z(t)b>0.E42
As in the proof of Lemma 2, we obtain Eq. (27), i.e.,
x(σ(t))(1a(σ(t))R(τ(σ(t)))R(σ(t)))z(σ(t))b(1a(σ(t))R(τ(σ(t)))R(σ(t))),fortt3.E43
Thus,
(r(t)(z(t))α)+p(t)(z(t))αb˜q(t)(1a(σ(t))R(τ(σ(t)))R(σ(t)))β=b˜Q˜(t),E44
where b˜:=kbβ.
Define the function U(t)as in Eq. (103). Then Eq. (44) becomes
(U(t)A(t))b˜Q˜(t)A(t).E45
Integrating the above inequality twice from t3to t, one gets
0<z(t)z(t3)b˜t3t(A(u)r(u)t3uQ˜(s)A(s)ds)1αdu,E46
for tt3.As t, then by condition (39), z(t)approaches to , which contradicts the fact that z(t)is eventually positive. Thus, b=0and from 0x(t)z(t),we see that limtx(t)=0. The proof is complete.
Using results of Lemmas 1 and 2, we can obtain the following oscillation criteria for Eq. (1).
Theorem 1.Let conditions (i)(v)and one of the conditions (14) or (21) hold. Furthermore, assume that there exists a positive continuously differentiable function ϕ(t)such that, for all sufficiently large, T, T1T,
P(t)0E47
on [T,)and
limsupt{ϕ(t)A(t)tQ(s)A(s)ds+T1t[ϕ(s)Q(s)αα(α+1)α+1ϕ(s)r(σ(s))(P(s))α+1(βσ(s)ψ(s))α]ds}=,E48
where
ψ(t)={c1,c1issomepositiveconstantifβ>α1,ifβ=αc2(Ttr1α(s)ds)βαα,c2issomepositiveconstantifβ<α.E49
Then, Eq. (1) is oscillatory.
Proof.Suppose to the contrary that x(t)is a nonoscillatory solution of Eq. (1). Then, without loss of generality, we may assume that there exists Tlarge enough, so that x(t)satisfies the conclusions of Lemma 1 or 2 on [T,)with
x(t)>0,x(τ(t))>0,x(σ(t))>0E50
on [T,). In particular, we have
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,fortT.E51
By Eq. (2) and the assumption (iii), we get
x(t)=z(t)a(t)x(τ(t))z(t)a(t)z(τ(t))(1a(t))z(t),E52
which together with Eq. (1) implies
(r(t)(z(t))α)+p(t)r(t)(z(t))αkq(t)(1a(σ(t)))βzβ(σ(t))=Q(t)zβ(σ(t)).E53
We consider the generalized Riccati substitution
w(t)=ϕ(t)r(t)(z(t))αzβ(σ(t))>0,fortT.E54
As in the proof of Lemma 1, we get Eq. (18), which in view of the assumption (v)yields
(r(t)A(t)(z(t))α)Q(t)A(t)zβ(σ(t)).E55
Integrating Eq. (55) from tto and using the fact that z(t)is increasing, we have
r(t)A(t)(z(t))αtQ(s)A(s)zβ(σ(s))dszβ(σ(t))tQ(s)A(s)ds.E56
So it follows from Eq. (56) and the definition (54) of w(t)that
w(t)=ϕ(t)r(t)(z(t))αzβ(σ(t))ϕ(t)A(t)tQ(s)A(s)ds.E57
By Eq. (53) we can easily prove that
w(t)=(r(t)(z(t))α)ϕ(t)zβ(σ(t))+(ϕ(t)zβ(σ(t)))r(t)(z(t))αϕ(t)zβ(σ(t))(p(t)(z(t))β+Q(t)zβ(σ(t)))+r(t)(z(t))α(ϕ(t)zβ(σ(t))ϕ(t)(zβ(σ(t)))zβ+1(σ(t)))ϕ(t)Q(t)+w(t)(ϕ(t)ϕ(t)p(t)r(t))βϕ(t)r(t)(z(t)βz(σ(t)σ(t)zβ+1(σ(t)).E58
On the other hand, since r(t)(z(t))αis decreasing, we have
z(σ(t))z(t)(r(t)r(σ(t)))1αE59
and thus Eq. (58) becomes
w(t)ϕ(t)Q(t)+P(t)w(t)βϕ(t)σ(t)r1α(σ(t))(w(t)ϕ(t))α+1αzβαα(σ(t)).E60
Now, we consider the following three cases:
Case I: β>α.
In this case, since z(t)>0for tT, then there exists T1Tsuch that z(σ(t))cfor tT1. This implies that
zβαα(σ(t))cβαα:=c1E61
Case II: β=α.
In this case, we see that zβαα(σ(t))=1.
Case III: β<α.
Since r(t)(z(t))αis decreasing, there exists a constant dsuch that
r(t)(z(t))αdE62
for tT. Integrating the above inequality from Tto t, we have
z(t)z(T)+Tt(dr(s))1αds.E63
Hence, there exists T1Tand a constant d1depending on dsuch that
z(t)d1Ttr1α(s)ds,fortT1E64
and thus
zβαα(σ(t))d1βαα(Ttr1α(s)ds)βαα=d2(Ttr1α(s)ds)βααE65
for some positive constant d2.
Using these three cases and the definition of ψ(t), we get
w(t)ϕ(t)Q(t)+P(t)w(t)βσ(t)ψ(t)(ϕ(t)r(σ(t)))1αw1+αα(t)E66
for tT1T. Setting
A:=P(t),E67
B:=βσ(t)ψ(t)(ϕ(t)r(σ(t)))1α,E68
and using the inequality
AuBu1+αααα(α+1)α+1Aα+1Bα,E69
we obtain
w(t)ϕ(t)Q(t)+αα(α+1)α+1ϕ(t)r(σ(t))(P(t))α+1(βσ(t)ψ(t))α.E70
Integrating the above inequality from T1to t, we have
w(t)w(T1)T1t(ϕ(s)Q(s)αα(α+1)α+1ϕ(s)r(σ(s))(P(s))α+1(βσ(s)ψ(s))α)ds.E71
Taking Eq. (57) into account, we get
w(T1)ϕ(t)A(t)tQ(s)A(s)ds+T1t(ϕ(s)Q(s)αα(α+1)α+1ϕ(s)r(σ(s))(P(s))α+1(βσ(s)ψ(s))α)ds.E72
Taking the lim sup on both sides of the above inequality as t, we obtain a contradiction to the condition (48). This completes the proof.
Remark 2.Note that the presence of the term ϕ(t)A(t)tQ(s)A(s)dsin Eq. (57) improves a number of related results in, e.g., [9, 1318, 20].
Setting ϕ(t)=tin Eq. (57), then the following corollary becomes immediate.
Corollary 1.Let conditions (i)(v)and one of the conditions (14) or (21) hold. Assume that, for all sufficiently large, T, T1T,
tp(t)r(t)E73
on [T,)and
limsupt{tA(t)tQ(s)A(s)ds+T1t[sQ(s)αα(α+1)α+1sr(σ(s))(1sp(s)r(s))α+1(βσ(s)ψ(s))α]ds}=,E74
where ψ(t)is as in Theorem 1. Then Eq. (1) is oscillatory.
Corollary 2.Assume that the conditions (39) and (74) hold. Then Eq. (1) is oscillatory or limtx(t)=0.
Next, we present some complementary oscillation results for Eq. (1) by using an integral averaging technique due to Philos. We need the class of functions F. Let
D0={(t,s):t>st0}andD={(t,s):t>st0}E75
The function H(t,s)C(D,)is said to belong to a class Fif
(a)H(t,t)=0for tT, H(t,s)>0for (t,s)D0
(b)H(t,s)has a continuous and nonpositive partial derivative on D0with respect to the second variable such that
s(H(t,s)ϕ(s))H(t,s)ϕ(s)p(s)r(s)=h(t,s)(H(t,s)ϕ(s))αα+1E76
for all (t,s)D0.
Theorem 2.Let conditions (i)(v)and one of the conditions (14) or (21) hold. Furthermore, assume that there exist functions H(t,s), h(t,s)Fsuch that, for all sufficiently large, T, for T1T,
limsupt1H(t,T1)T1t(H(t,s)(ϕ(s)Q(s)+ρ(s)ϕ(s)p(s))αα(α+1)α+1hα+1(t,s)r(σ(s))βα(σ(s)ψ(s))α)ds=E77
where ϕ(t)and ρ(t)are continuously differentiable functions and ψ(t)is as in Theorem 1. Then Eq. (1) is oscillatory.
Proof.Suppose to the contrary that x(t)is a nonoscillatory solution of Eq. (1). Then, without loss of generality, we may assume that there exists Tlarge enough, so that x(t)satisfies the conclusions of Lemma 1 or 2 on [T,)with
x(t)>0,x(τ(t))>0,x(σ(t))>0E78
on [T,). In particular, we have
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,fortT.E79
Define the function w(t)as
w(t)=ϕ(t)r(t)((z(t))αzβ(σ(t))+ρ(t))ϕ(t)r(t)ρ(t),E80
where ρ(t)C1(,). Similarly to the proof of Theorem 1, we obtain the inequality
w(t)ϕ(t)Q(t)+ϕ(t)(r(t)ρ(t))+(ϕ(t)ϕ(t)p(t)r(t))w(t)βσ(t)ψ(t)(ϕ(t)r(σ(t)))1α(w(t)ϕ(t)r(t)ρ(t))1+αα.E81
Multiplying Eq. (81) by H(t,s), integrating with respect to sfrom T1to tfor tT1T, and using (a)and (b), we find that
T1tH(t,s)ϕ(s)(Q(s)(r(s)ρ(s)))dsTtH(t,s)w(s)ds+T1tH(t,s)(ϕ(s)ϕ(s)p(s)r(s))w(s)dsT1tβH(t,s)σ(s)ψ(s)(ϕ(s)r(σ(s)))1α(w(s)ϕ(s)r(s)ρ(s))1+ααds=H(t,s)w(s)|T1t+T1t(sH(t,s)+H(t,s)(ϕ(s)ϕ(s)p(s)r(s)))w(s)dsT1tβH(t,s)σ(s)ψ(s)(ϕ(s)r(σ(s)))1α(w(s)ϕ(s)r(s)ρ(s))1+ααds=H(t,T1)w(T1)+T1th(t,s)ϕ(s)(H(t,s)ϕ(s))αα+1w(s)dsT1tβH(t,s)σ(s)ψ(s)(ϕ(s)r(σ(s)))1α(w(s)ϕ(s)r(s)ρ(s))1+ααdsE82
Setting
A:=h(t,s)ϕ(s)[H(t,s)ϕ(s)]αα+1,B:=βH(t,s)σ(s)ψ(s)(ϕ(s)r(σ(s)))1αE83
and
C:=ϕ(s)r(s)ρ(s)E84
and using the inequality
AuB(uC)1+ααAC+αα(α+1)α+1Aα+1Bα,E85
we obtain
T1tH(t,s)ϕ(s)(Q(s)(r(s)ρ(s)))dsH(t,T1)w(T1)+T1th(t,s)r(s)ρ(s)[H(t,s)ϕ(s)]αα+1ds+T1tαα(α+1)α+1hα+1(t,s)r(σ(s))βα(σ(s)ψ(s))αdsE86
Thus,
H(t,T1)w(T1)T1tH(t,s)ϕ(s)(Q(s)(r(s)ρ(s)))ds+T1th(t,s)r(s)ρ(s)[H(t,s)ϕ(s)]αα+1dsT1tαα(α+1)α+1hα+1(t,s)r(σ(s))βα(σ(s)ψ(s))αds.E87
That is,
H(t,T1)w(T1)T1tH(t,s)ϕ(s)(Q(s)(r(s)ρ(s)))ds+T1tr(s)ρ(s)(s(H(t,s)ϕ(s))H(t,s)ϕ(s)p(s)r(s))dsT1tαα(α+1)α+1hα+1(t,s)r(σ(s))βα(σ(s)ψ(s))αds=T1tH(t,s)(ϕ(s)Q(s)+ρ(s)ϕ(s)p(s))dsH(t,s)ϕ(s)r(s)ρ(s)|T1tT1tαα(α+1)α+1hα+1(t,s)r(σ(s))βα(σ(s)ψ(s))αdsE88
It follows that
T1tH(t,s)(ϕ(s)Q(s)+ρ(s)ϕ(s)p(s))dsT1tαα(α+1)α+1hα+1(t,s)r(σ(s))βα(σ(s)ψ(s))αdsH(t,T1)(w(T1)ϕ(T1)r(T1)ρ(T1)),E89
which is a contradiction to Eq. (77). The proof is complete.
Remark 3.Authors in [15, 20] studied a partial case of Eq. (1) by employing the generalized Riccati substitution (80). Note that the function ρ(t)used in the generalized Riccati substitution (80) finally becomes unimportant. Thus, we can put ρ(t)=0and obtain similar results to those from [15, 20].
In the next part, we provide several oscillation results for Eq. (1) under the assumption that the function P(t)is nonpositive. These results generalize those from [10] for Eq. (1) in such sense that αβand p(t)0.
Theorem 3.Let conditions (i)(v)and one of the conditions (14) or (21) hold. Furthermore, assume that there exists a continuously differentiable function ϕ(t)such that, for all sufficiently large, T, T1T,
P(t)0E90
on [T,)and
limsupt[ϕ(t)A(t)tQ(s)A(s)ds+T1tϕ(s)(Q(s)A(s)P(s)sQ(u)A(u)du)ds]=.E91
Then Eq. (1) is oscillatory.
Proof.Suppose to the contrary that x(t)is a nonoscillatory solution of Eq. (1). Then, without loss of generality, we may assume that there exists Tlarge enough, so that x(t)satisfies the conclusions of Lemma 1 or 2 on [T,)with
x(t)>0,x(τ(t))>0,x(σ(t))>0E92
on [T,). In particular, we have
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,fortT.E93
Proceeding as in the proof of Theorem 1, we obtain the inequality (66), i.e.,
w(t)ϕ(t)Q(t)+P(t)w(t)βσ(t)ψ(t)(ϕ(t)r(σ(t)))1αw1+αα(t)E94
for tT1T. Using Eq. (90), and setting Eq. (57) in Eq. (94), we get
w(t)ϕ(t)Q(t)+ϕ(t)A(t)P(t)tQ(s)A(s)dsβσ(t)ψ(t)(ϕ(t)r(σ(t)))1αw1+αα(t)ϕ(t)Q(t)+ϕ(t)A(t)P(t)tQ(s)A(s)ds,E95
that is,
w(t)+ϕ(t)Q(t)ϕ(t)A(t)P(t)tQ(s)A(s)ds0.E96
Integrating the above inequality from T1to t, we have
w(T1)w(t)+T1t(ϕ(s)Q(s)ϕ(s)A(s)P(s)sQ(u)A(u)du)dsϕ(t)A(t)tQ(s)A(s)ds+T1t(ϕ(s)Q(s)ϕ(s)A(s)P(s)sQ(u)A(u)du)dsE97
Taking the lim sup on both sides of the above inequality as t, we obtain a contradiction to condition Eq. (91). This completes the proof.
Setting ϕ(t)=1, we have the following consequence.
Corollary 3.Let conditions (i)(v)and one of the conditions (14) or (21) hold. Assume that
limsupt[A(t)tQ(s)A(s)ds+T1tq˜(s)ds]=,E98
for all sufficiently large T, for T1T. Then Eq. (1) is oscillatory.
Define a sequence of functions {yn(t)}n=0as
y0(t)=tq˜(s)ds,tTE99
yn(t)=tβσ(s)ψ(s)r1α(σ(s))(yn1(s))1+ααds+y0(t),tT,n=1,2,3,,E100
for Tt0sufficiently large.
By induction, we can see that ynyn+1, n=1,2,3,.
Lemma 4.Let conditions (i)(v)and one of the conditions (14) or (21) hold. Assume that x(t)is a positive solution of Eq. (1) on . Then there exists T, sufficiently large, such that
w(t)yn(t),E101
where w(t)and yn(t)are defined as Eqs. (54) and (100), respectively. Furthermore, there exists a positive function y(t)on [T1,), T1T, such that
limnyn(t)=y(t)E102
and
y(t)=tβσ(s)ψ(s)r1α(σ(s))(y(s))1+ααds+y0(t).E103
Proof.Similarly to the proof of Theorem 3, we obtain Eq. (95). Setting ϕ(t)=1in Eq. (95), we get
w(t)+Q(t)+p(t)A(t)r(t)tQ(s)A(s)ds+βσ(t)ψ(t)r1α(σ(t))w1+αα(t)0E104
for tT1T. Integrating Eq. (104) from tto t, we get
w(t)w(t)+ttq˜(s)ds+ttβσ(s)ψ(s)r1α(σ(s))w1+αα(s)ds0E105
or
w(t)w(t)+ttβσ(s)ψ(s)r1α(σ(s))w1+αα(s)ds0.E106
We assert that
tβσ(s)ψ(s)r1α(σ(s))w1+αα(s)ds<.E107
If not, then
w(t)w(t)ttβσ(s)ψ(s)r1α(σ(s))w1+αα(s)dsE108
as t, which contradicts to the positivity of w(t)and thus the assertion is proved. By Eq. (104), we see that w(t)is decreasing that means
limtw(t)=k,k0.E109
By virtue of Eq. (107), we have k=0. Thus, letting tin Eq. (105), we get
w(t)tq˜(s)ds+tβσ(s)ψ(s)r1α(σ(s))w1+αα(s)ds=y0(t)+tβσ(s)ψ(s)r1α(σ(s))w1+αα(s)ds,E110
that is,
w(t)tq˜(s)ds=y0(t).E111
Moreover, by induction, we have that
w(t)yn(t),fortT1,n=1,2,3,.E112
Thus, since the sequence {yn(t)}n=0is monotone increasing and bounded above, it converges to y(t). Letting nand using Lebesgue monotone convergence theorem in Eq. (100), we get Eq. (103). The proof is complete.
Theorem 4.Let conditions (i)(v)and one of the conditions (14) or (21) hold. If
liminft(1y0(t)tβσ(s)ψ(s)r1α(σ(s))(y0(s))1+ααds)>α(α+1)1+αα,E113
where ψ(t)is as in Theorem 1, then Eq. (1) is oscillatory.
Proof.Suppose to the contrary that x(t)is a nonoscillatory solution of Eq. (1). Then, without loss of generality, we may assume that there exists Tlarge enough, so that x(t)satisfies the conclusions of Lemma 1 or 2 on [T,)with
x(t)>0,x(τ(t))>0,x(σ(t))>0E114
on [T,). In particular, we have
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,fortT.E115
By Eq. (113), there exists a constant γ>α(α+1)1+ααsuch that
liminft1y0(t)tβσ(s)ψ(s)r1α(σ(s))(y0(s))1+ααds>γ.E116
Proceeding as in the proof of Lemma 4, we obtain Eq. (110) and from that, we have
w(t)y0(t)1+1y0(t)tβσ(s)ψ(s)r1α(σ(s))(y0(s))1+αα(w(s)y0(s))1+ααdsE117
Let
λ=inftt1w(t)y0(t).E118
Then it is easy to see that λ1and
λ1+λ1+ααγ,E119
which contradicts the admissible value of λand γ, and thus completes the proof.
Theorem 5.Let conditions (i)(v), one of the conditions (14) or (21) hold, and yn(t)be defined as in Eq. (100). If there exists some yn(t)such that, for Tsufficiently large,
limsuptyn(t)(Tσ(t)r1α(s)ds)α>1ψ(t),E120
where ψ(t)is as in Theorem 1, then Eq. (1) is oscillatory.
Proof.Suppose to the contrary that x(t)is a nonoscillatory solution of Eq. (1). Then, without loss of generality, we may assume that there exists Tlarge enough, so that x(t)satisfies the conclusions of Lemma 1 or 2 on [T,)with
x(t)>0,x(τ(t))>0,x(σ(t))>0E121
on [T,). In particular, we have
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,fortT.E122
Proceeding as in the proof of Theorem 3 and using defining w(t)as in Eq. (54), for T1T, we get
1w(t)=zβ(σ(t))r(t)(z(t))αψ(t)r(t)(z(σ(t))z(t))α=ψ(t)r(t)(z(T1)+T1σ(t)r1α(s)r1α(s)z(s)dsz(t))αψ(t)(T1σ(t)r1α(s)ds)αE123
Thus,
w(t)(Tσ(t)r1α(s)ds)α1ψ(t)(Tσ(t)r1α(s)dsT1σ(t)r1α(s)ds)αE124
And therefore,
limsuptw(t)(Tσ(t)r1α(s)ds)α1ψ(t),E125
which contradicts Eq. (120). The proof is complete.
Theorem 6.Let conditions (i)(v), one of the conditions (14) or (21) hold, and yn(t)be defined as in Eq. (100). If there exists some yn(t)such that
T1q˜(t)exp(T1tβσ(s)ψ(s)r1α(σ(s))yn1α(s)ds)dt=E126
or
T1βσ(t)ψ(t)yn1α(t)y0(t)r1α(σ(t))exp(T1tβσ(s)ψ(s)r1α(σ(s))yn1α(s)ds)dt=,E127
for Tsufficiently large and T1T, where ψ(t)is as in Theorem 1, then Eq. (1) is oscillatory.
Proof.Suppose to the contrary that x(t)is a nonoscillatory solution of Eq. (1). Then, without loss of generality, we may assume that there exists Tlarge enough, so that x(t)satisfies the conclusions of Lemma 1 or 2 on [T,)with
x(t)>0,x(τ(t))>0,x(σ(t))>0E128
on [T,). In particular, we have
z(t)>0,z(t)>0,(r(t)(z(t))α)<0,fortT.E129
From Eq. (103), we have
y(t)=βσ(t)ψ(t)r1α(σ(t))(y(t))1+ααq˜(t),E130
for all tT1T. Since y(t)yn(t), Eq. (130) yields
y(t)βσ(t)ψ(t)r1α(σ(t))yn1α(t)y(t)q˜(t).E131
Multiplying the above inequality by the integration factor
exp(T1tβσ(s)ψ(s)r1α(σ(s))yn1α(s)ds),E132
one gets
y(t)exp(T1tβσ(s)ψ(s)r1α(σ(s))yn1α(s)ds)×(y(t1)T1tq˜(s)exp(T1sβσ(u)ψ(u)r1α(σ(u))yn1α(u)du)ds),E133
from which we have that
T1tq˜(s)exp(T1sβσ(u)ψ(u)r1α(σ(u))yn1α(u)du)dsy(T1)<.E134
This is a contradiction with Eq. (126).
Now denote
u(t)=tβσ(s)ψ(s)r1α(σ(s))(y(s))1+ααdsE135
Taking the derivative of u(t), one gets
u(t)=βσ(t)ψ(t)r1α(σ(t))(y(t))1+ααβσ(t)ψ(t)r1α(σ(t))yn1α(t)y(t)=βσ(t)ψ(t)r1α(σ(t))yn1α(t)(u(t)+y0(t))E136
Proceeding in a similar manner to that above, we conclude that
T1βσ(t)ψ(t)r1α(σ(t))yn1α(t)y0(t)exp(T1tβσ(s)ψ(s)r1α(σ(s))yn1α(s)ds)dt<,E137
which contradicts to Eq. (127). The proof is complete.
chapter PDF
Citations in RIS format
Citations in bibtex format
## More
© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter Copy to clipboard
Said R. Grace and Irena Jadlovská (March 15th 2017). Oscillation Criteria for Second‐Order Neutral Damped Differential Equations with Delay Argument, Dynamical Systems - Analytical and Computational Techniques, Mahmut Reyhanoglu, IntechOpen, DOI: 10.5772/65909. Available from:
### More statistics for editors and authors
Login to your personal dashboard for more detailed statistics on your publications.
### Related Content
Next chapter
#### Preservation of Synchronization Using a Tracy‐Singh Product in the Transformation on Their Linear Matrix
By Guillermo Fernadez‐Anaya, Luis Alberto Quezada‐Téllez, Jorge Antonio López‐Rentería, Oscar A. Rosas‐Jaimes, Rodrigo Muñoz‐ Vega, Guillermo Manuel Mallen‐Fullerton and José Job Flores‐ Godoy
First chapter
#### Appell-Gibbs Approach in Dynamics of Non-Holonomic Systems
By Jiří Náprstek and Cyril Fischer
We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists. Our readership spans scientists, professors, researchers, librarians, and students, as well as business professionals. We share our knowledge and peer-reveiwed research papers with libraries, scientific and engineering societies, and also work with corporate R&D departments and government entities.
|
2022-01-22 14:48:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234246611595154, "perplexity": 2436.422566171546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303864.86/warc/CC-MAIN-20220122134127-20220122164127-00021.warc.gz"}
|
https://math.stackexchange.com/questions/4251630/can-the-zero-vector-be-1-1
|
# Can the zero vector be (1,1)?
Given the operation in $$\mathbb{R}^2$$:
$$(x_1,y_1) + (x_2,y_2) = (x_1x_2,y_1y_2)$$
I would like to find whether this is a vector space in $$\mathbb{R}$$. Looking at the Additive Zero Axiom, we get:
$$(x_1,y_1) + \boldsymbol{0} = (x_1(0),y_1(0)) = \boldsymbol{0}$$
To satisfy the Additive Zero Axiom, $$(x_1,y_1) + \boldsymbol{0} = (x_1,y_1)$$ must be true. For this to be true, $$\boldsymbol{0}$$ would have to be $$(1,1)$$
Is this possible, or would we be able to say this is not a vector space?
• By far nothing's gone wrong. Go ahead and check the other axioms Sep 16 at 3:13
• Welcome to the site! If the issue has been resolved, accepting and/or upvoting answers is the best way to say "thanks!": it scores points, signals resolution, and prevents bumping and automatic deletion. 2 days ago
Is not a vector space, for instance let us try find the zero element in $$\mathbb{R}^2$$ with the given operation.
Let $$P=(x,y)\in \mathbb{R}^2$$
$$(x,y)+(e_1,e_2)=(x,y)$$ implies $$(xe_1,ye_2)=(x,y)$$ and then $$e_1=1$$ and $$e_2=1$$ it is the zero element must be $$(1,1)$$.
But in this case $$(0,0)$$ isn´t invertible since $$(0,0)+(a,b)=(1,1)$$ implies $$(0,0)=(1,1)$$ which is a contradiction.
• Thank you for your help! This now makes complete sense. Sep 16 at 2:27
• You´re welcome! Sep 16 at 2:28
The additive identity is indeed $$(1,1)$$.
Let's check for inverse of $$(0,0)$$.
For any $$x, y \in \mathbb{R}$$,
$$(0, 0) + (x, y)= (0,0) \ne (1,1).$$ Hence it can't be a vector space.
• Thank you very much for your reply! This makes sense now, (0,0) does not have an additive inverse with (1,1) being the additive identity Sep 16 at 2:26
|
2021-10-20 03:05:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7647790908813477, "perplexity": 386.1631015547027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00040.warc.gz"}
|
http://math.soimeme.org/~arunram/Notes/gradedramodules.xhtml
|
Graded <math> <msub> <mi>R</mi> <mi>α</mi> </msub> </math>-modules
## Graded ${R}_{\alpha }$-modules
A $ℤ$-graded vector space is a vector space with a decomposition $V= ⊕ i∈ℤ V i andgdim V = ∑ i∈ℤ q i dim V$ is the graded dimension of $V$. If $M$ is a $ℤ$-graded ${R}_{\alpha }$-module then, as graded vector spaces, $M= ⊕ u∈ Γ α eu Mandgdim M = ∑ u∈ Γ α gdim e u M f u$ is the graded character of $M$.
Let $\alpha ,\beta \in {Q}^{+}$ and let $k$ and $l$ be the lengths of the words in ${\Gamma }^{\alpha }$ and ${\Gamma }^{\beta }$, respectively. Then $Γ α+β = ⨆ σ∈ S k+l / S k × S l and R α ⊗ R β ↦ R α+β e u ⊗ e v ↦ e uv x i e u ⊗ e v ↦ x i e uv τ i e u ⊗ e v ↦ τ i e uv e u ⊗ x j e v ↦ x j+k e uv e u ⊗ τ j e v ↦ τ j+k e uv$ defines an injection (of nonunital algebras).
[KL1, Prop 2.16] As a right ${R}_{\alpha }\otimes {R}_{\beta }$-module, ${R}_{\alpha +\beta }$ has basis $τ σ 1 αβ | σ∈ S k+l / S k × S l ,where 1 αβ = ∑ u∈ Γ α ,v∈ Γ β e uv ,$ and, for each minimal length representative $\sigma$ of a coset in ${S}_{k+l}/\left({S}_{k}×{S}_{l}\right),$ we fix a reduced word $σ= s i 1 … s i l and set τ σ = τ i 1 … τ i l .$
Let ${R}_{\alpha }$-mod be the category of $ℤ$-graded ${R}_{\alpha }$-modules. For $M\in {R}_{\alpha }$-mod and $N\in {R}_{\beta }$ define $M∘N= Ind R α ⊗ R β R α+β M⊗N .$
Let $K\left({R}_{\alpha }\text{-mod}\right)$ be the Grothendieck group of $ℤ$-graded $R$-modules and define a $product on ⨁ α∈ Q + K R α -mod by M . N = M∘N .$
The following theorem says that the categories ${R}_{\alpha }$-mod form a categorification of ${U}_{q}{𝔫}^{-}.$ It is the main theorem of this theory.
[KL, Prop 3.4] The graded character map $gch: ⨁ α∈ Q + K R α -mod → U q 𝔫 - M ↦ gch M$ is an algebra ismorphism.
1. If $M\in {R}_{\alpha }$-mod then $\mathrm{gch}\left(M\right)\in {U}_{q}{𝔫}^{-}.$
2. If $M\in {R}_{\alpha }$-mod and $N\in {R}_{\beta }$-mod then $gch Ind R α ⊗ R β R α+β M⊗N =gch M ⊔⊔gch N .$
Proof
1. (sketch only) $gch M = ∑ u∈ Γ α gdim e u M f u = ∑ u∈ Γ α gdim Hom M R α e u f u ,$ and so by Roquier's complex [Ro, Lemma 3.13], $\mathrm{gch}\left(M\right)$ satisfies Serre relations. Thus, by Proposition ?, $\mathrm{gch}\left(M\right)\in {U}_{q}{𝔫}^{-}.$
2. (second proof) (sketch) By understanding the intertwiners thoroughly, analyse the ${R}_{\alpha }$-modules when $\alpha =-⟨{\alpha }_{i},{\alpha }_{j}^{\vee }⟩{\alpha }_{1}+{\alpha }_{2}.$ $1112↔11121↔11211↔12111↔21111$
3. This follows from the fact that $\left\{{\tau }_{\sigma }{1}_{\alpha \beta }\phantom{\rule{.5em}{0ex}}|\phantom{\rule{.5em}{0ex}}\sigma \in {S}_{k+l}/{S}_{k}×{S}_{l}\right\}$ is a basis of ${R}_{\alpha +\beta }$ as a right ${R}_{\alpha }\otimes {R}_{\beta }$-module.
|
2019-01-22 14:25:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 49, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779934287071228, "perplexity": 339.9899336410937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583857913.57/warc/CC-MAIN-20190122140606-20190122162606-00106.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=Gabriel%20Zapata
|
• ### Angular and polarization trails from effective interactions of Majorana neutrinos at the LHeC(1802.07620)
May 4, 2018 hep-ph
We study the possibility of the LHeC facility to disentangle different new physics contributions to the production of heavy sterile Majorana neutrinos in the lepton number violating channel $e^{-}p\rightarrow l_{j}^{+} + 3 jets$ ($l_j\equiv e ,\mu$). This is done investigating the angular and polarization trails of effective operators with distinct Dirac-Lorentz structure contributing to the Majorana neutrino production, which parameterize new physics from a higher energy scale. We study an asymmetry in the angular distribution of the final anti-lepton and the initial electron polarization effect on the number of signal events produced by the vectorial and scalar effective interactions, finding both analyses could well separate their contributions.
• ### Preservation of Trees by semidirect Products(1801.00057)
Dec. 29, 2017 math.GR
We show that the semidirect product of a group $C$ by $A*_D B$ is isomorphic to the free product of $A\rtimes C$ and $B\rtimes C$ amalgamated at $D\rtimes C$, where $A$, $B$ and $C$ are arbitrary groups. Moreover, we apply this theorem to prove that any group $G$ that acts without inversion on a tree $T$ that possesses a segment $\Gamma$ for its quotient graph, such that, if the stabilizers of the vertex set $\{\,P,Q\,\}$ and edge $y$ of a lift of $\, \Gamma$ in $T$ are of the form $G_{P}\!\rtimes H$, $G_{Q}\!\rtimes H$ and $G_{y}\! \rtimes H$, then $G$ is isomorphic to the semidirect product of $H$ by $(\,G_P \,*_{G_y} \,G_Q \,)$. Using our results we conclude with a non-standard verification of the isomorphism between $GL_2(\mathbb{Z})$ and the free product of the dihedral groups $D_4$ and $D_6$ amalgamated at their Klein-four group.
• ### Effects of Majorana Physics on the UHE $\nu_{\tau}$ Flux Traversing the Earth(1609.07661)
Dec. 16, 2016 hep-ph
We study the effects produced by sterile Majorana neutrinos on the $\nu_{\tau}$ flux traversing the Earth, considering the interaction between the Majorana neutrinos and the standard matter as modeled by an effective theory. The surviving tau-neutrino flux is calculated using transport equations including Majorana neutrino production and decay. We compare our results with the pure Standard Model interactions, computing the surviving flux for different values of the effective lagrangian couplings, considering the detected flux by IceCube for an operation time of ten years, and Majorana neutrinos with mass $m_N \thicksim m_{\tau}$.
• ### Using decision problems in public key cryptography(math/0703656)
March 22, 2007 cs.CR, math.GR
There are several public key establishment protocols as well as complete public key cryptosystems based on allegedly hard problems from combinatorial (semi)group theory known by now. Most of these problems are search problems, i.e., they are of the following nature: given a property P and the information that there are objects with the property P, find at least one particular object with the property P. So far, no cryptographic protocol based on a search problem in a non-commutative (semi)group has been recognized as secure enough to be a viable alternative to established protocols (such as RSA) based on commutative (semi)groups, although most of these protocols are more efficient than RSA is. In this paper, we suggest to use decision problems from combinatorial group theory as the core of a public key establishment protocol or a public key cryptosystem. By using a popular decision problem, the word problem, we design a cryptosystem with the following features: (1) Bob transmits to Alice an encrypted binary sequence which Alice decrypts correctly with probability "very close" to 1; (2) the adversary, Eve, who is granted arbitrarily high (but fixed) computational speed, cannot positively identify (at least, in theory), by using a "brute force attack", the "1" or "0" bits in Bob's binary sequence. In other words: no matter what computational speed we grant Eve at the outset, there is no guarantee that her "brute force attack" program will give a conclusive answer (or an answer which is correct with overwhelming probability) about any bit in Bob's sequence.
• ### Combinatorial group theory and public key cryptography(math/0410068)
Oct. 4, 2004 cs.CR, math.GR
After some excitement generated by recently suggested public key exchange protocols due to Anshel-Anshel-Goldfeld and Ko-Lee et al., it is a prevalent opinion now that the conjugacy search problem is unlikely to provide sufficient level of security if a braid group is used as the platform. In this paper we address the following questions: (1) whether choosing a different group, or a class of groups, can remedy the situation; (2) whether some other "hard" problem from combinatorial group theory can be used, instead of the conjugacy search problem, in a public key exchange protocol. Another question that we address here, although somewhat vague, is likely to become a focus of the future research in public key cryptography based on symbolic computation: (3) whether one can efficiently disguise an element of a given group (or a semigroup) by using defining relations.
|
2020-01-23 08:18:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7461864352226257, "perplexity": 533.4952910235718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250609478.50/warc/CC-MAIN-20200123071220-20200123100220-00086.warc.gz"}
|
http://daetools.com/docs/pyDAE_user_guide.html
|
# 6. pyDAE User Guide¶
## 6.1. Importing DAE Tools modules¶
pyDAE modules can be imported in the following way:
from daetools.pyDAE import *
This will set the python sys.path for importing the platform dependent c++ extension modules (i.e. .../daetools/pyDAE/Windows_win32_py34 and .../daetools/solvers/Windows_win32_py34 in Windows, .../daetools/pyDAE/Linux_x86_64_py34 and .../daetools/solvers/Linux_x86_64_py34 in GNU/Linux), import all symbols from all pyDAE modules: pyCore, pyActivity, pyDataReporting, pyIDAS, pyUnits and import some platfom independent modules: logs, variable_types and dae_simulator.
Alternatively, only the daetools module can be imported and classes from the pyDAE extension modules accessed using fully qualified names. For instance:
import daetools
model = daetools.pyDAE.pyCore.daeModel("name")
Once the pyDAE module is imported, the other modules (such as third party linear solvers, optimisation solvers etc.) can be imported in the following way:
# Import Trilinos LA solvers (Amesos, AztecOO):
from daetools.solvers.trilinos import pyTrilinos
# Import SuperLU linear solver:
from daetools.solvers.superlu import pySuperLU
# Import SuperLU_MT linear solver:
from daetools.solvers.superlu_mt import pySuperLU_MT
# Import IPOPT NLP solver:
from daetools.solvers.ipopt import pyIPOPT
# Import BONMIN MINLP solver:
from daetools.solvers.bonmin import pyBONMIN
# Import NLOPT set of optimisation solvers:
from daetools.solvers.nlopt import pyNLOPT
Since domains, parameters and variables in DAE Tools have a numerical value in terms of a unit of measurement (quantity) the modules containing definitions of units and variable types must be imported. They can be imported in the following way:
from daetools.pyDAE.variable_types import length_t, area_t, volume_t
from daetools.pyDAE.pyUnits import m, kg, s, K, Pa, J, W
The complete list of units and variable types can be found in Variable types and Module pyUnits modules.
## 6.2. Developing models¶
In DAE Tools models are developed by deriving a new class from the base pyCore.daeModel class. An empty model definition is presented below:
class myModel(daeModel):
def __init__(self, name, parent = None, description = ""):
daeModel.__init__(self, name, parent, description)
# Declaration/instantiation of domains, parameters, variables, ports, etc:
...
def DeclareEquations(self):
# Declaration of equations, state transition networks etc.:
...
The process consists of the following steps:
1. Calling the base class constructor:
daeModel.__init__(self, name, parent, description)
2. Declaring the model structure (domains, parameters, variables, ports, components etc.) in the pyCore.daeModel.__init__() function:
One of the fundamental ideas in DAE Tools is separation of the model specification from the activities that can be carried out on that model: this way several simulation scenarios can be developed based on a single model definition. Thus, all objects are defined in two stages:
Therefore, parameters, domains and variables are only declared here, while their initialisation (setting the parameter value, setting up the domain, assigning or setting an initial condition etc.) is postponed and will be done in the simulation class
All objects must be declared as data members of the model since the base pyCore.daeModel class keeps only week references and does not own them:
def __init__(self, name, parent = None, description = ""):
self.domain = daeDomain(...)
self.parameter = daeParameter(...)
self.variable = daeVariable(...)
... etc.
and not:
def __init__(self, name, parent = None, description = ""):
domain = daeDomain(...)
parameter = daeParameter(...)
variable = daeVariable(...)
... etc.
because at the exit from the pyCore.daeModel.__init__() function the objects will go out of scope and get destroyed. However, the underlying c++ model object still holds references to them which will eventually result in the segmentation fault.
3. Specification of the model functionality (equations, state transition networks, and OnEvent and OnCondition actions) in the pyCore.daeModel.DeclareEquations() function.
Initialisation of the simulation object is done in several phases. At the point when this function is called by the framework the model parameters, domains, variables etc. are fully initialised. Therefore, it is safe to obtain the values of parameters or domain points and use them to create equations at the run-time.
A simplest DAE Tools model with a description of all steps/tasks necessary to develop a model can be found in the What’s the time? (AKA: Hello world!) tutorial (whats_the_time.py).
### 6.2.1. Parameters¶
Parameters are time invariant quantities that do not change during a simulation. Usually a good choice what should be a parameter is a physical constant, number of discretisation points in a domain etc.
There are two types of parameters in DAE Tools:
• Ordinary
• Distributed
The process of defining parameters is again carried out in two phases:
#### 6.2.1.1. Declaring parameters¶
Parameters are declared in the pyCore.daeModel.__init__() function. An ordinary parameter can be declared in the following way:
self.myParam = daeParameter("myParam", units, parentModel, "description")
Parameters can be distributed on domains. A distributed parameter can be declared in the following way:
self.myParam = daeParameter("myParam", units, parentModel, "description")
self.myParam.DistributeOnDomain(myDomain)
# Or simply:
self.myParam = daeParameter("myParam", units, parentModel, "description", [myDomain])
#### 6.2.1.2. Initialising parameters¶
Parameters are initialised in the pyActivity.daeSimulation.SetUpParametersAndDomains() function. To set a value of an ordinary parameter the following can be used:
myParam.SetValue(value)
where value can be a floating point value or the quantity object, while to set a value of distributed parameters (one-dimensional for example):
for i in range(myDomain.NumberOfPoints):
myParam.SetValue(i, value)
where the value can be either a float (i.e. 1.34) or the pyUnits.quantity object (i.e. 1.34 * W/(m*K)). If the simple floats are used it is assumed that they represent values with the same units as in the parameter definition.
Nota bene
DAE Tools (as it is the case in C/C++ and Python) use zero-based arrays in which the initial element of a sequence is assigned the index 0, rather than 1.
In addition, all values can be set at once using:
myParam.SetValues(values)
where values is a numpy array of floats/quantity objects.
#### 6.2.1.3. Using parameters¶
The most commonly used functions are:
Notate bene
The functions pyCore.daeParameter.__call__() and pyCore.daeParameter.array() return pyCore.adouble and pyCore.adouble_array objects, respectively and does not contain values. They are only used to specify equations’ residual expressions which are stored in the pyCore.adouble.Node() / pyCore.adouble_array.Node() attributes.
Other functions (such as pyCore.daeParameter.npyValues and pyCore.daeParameter.GetValue) can be used to access the values data during the simulation.
All above stands for similar functions in pyCore.daeDomain and pyCore.daeVariable classes.
1. To get a value of the ordinary parameter the pyCore.daeParameter.__call__() function (operator ()) can be used. For instance, if the variable myVar has to be equal to the sum of the parameter myParam and 15:
$myVar = myParam + 15$
in DAE Tools it is specified in the following acausal way:
# Notation:
# - eq is a daeEquation object created using the model.CreateEquation(...) function
# - myParam is an ordinary daeParameter object (not distributed)
# - myVar is an ordinary daeVariable (not distributed)
eq.Residual = myVar() - (myParam() + 15)
2. To get a value of a distributed parameter the pyCore.daeParameter.__call__() function (operator ()) can be used again. For instance, if the distributed variable myVar has to be equal to the sum of the parameter myParam and 15 at each point of the domain myDomain:
$myVar(i) = myParam(i) + 15; \forall i \in [0, n_d - 1]$
in DAE Tools it is specified in the following acausal way:
# Notation:
# - myDomain is daeDomain object
# - eq is a daeEquation object distributed on the myDomain
# - i is daeDistributedEquationDomainInfo object (used to iterate through the domain points)
# - myParam is daeParameter object distributed on the myDomain
# - myVar is daeVariable object distributed on the myDomain
i = eq.DistributeOnDomain(myDomain, eClosedClosed)
eq.Residual = myVar(i) - (myParam(i) + 15)
This code translates into a set of n algebraic equations.
Obviously, a parameter can be distributed on more than one domain. In the case of two domains:
$myVar(d_1,d_2) = myParam(d_1,d_2) + 15; \forall d_1 \in [0, n_{d1} - 1], \forall d_2 \in [0, n_{d2} - 1]$
the following can be used:
# Notation:
# - myDomain1, myDomain2 are daeDomain objects
# - eq is a daeEquation object distributed on the domains myDomain1 and myDomain2
# - i1, i2 are daeDistributedEquationDomainInfo objects (used to iterate through the domain points)
# - myParam is daeParameter object distributed on the myDomain1 and myDomain2
# - myVar is daeVariable object distributed on the myDomaina and myDomain2
i1 = eq.DistributeOnDomain(myDomain1, eClosedClosed)
i2 = eq.DistributeOnDomain(myDomain2, eClosedClosed)
eq.Residual = myVar(i1,i2) - (myParam(i1,i2) + 15)
3. To get an array of parameter values the function pyCore.daeParameter.array() can be used, which returns the pyCore.adouble_array object. The ordinary mathematical functions can be used with the pyCore.adouble_array objects: pyCore.Sqrt(), pyCore.Sin(), pyCore.Cos(), pyCore.Min(), pyCore.Max(), pyCore.Log(), pyCore.Log10(), etc. In addition, some additional functions are available such as pyCore.Sum() and pyCore.Product().
For instance, if the variable myVar has to be equal to the sum of values of the parameter myParam for all points in the domain myDomain, the function pyCore.Sum() can be used.
The pyCore.daeParameter.array() function accepts the following arguments:
• plain integer (to select a single index from a domain)
• python list (to select a list of indexes from a domain)
• python slice (to select a portion of indexes from a domain: startIndex, endIindex, step)
• character * (to select all points from a domain)
• empty python list [] (to select all points from a domain)
Basically all arguments listed above are internally used to create the pyCore.daeIndexRange object. pyCore.daeIndexRange constructor has three variants:
1. The first one accepts a single argument: pyCore.daeDomain object. In this case the returned pyCore.adouble_array object will contain the parameter values at all points in the specified domain.
2. The second one accepts two arguments: pyCore.daeDomain object and a list of integer that represent indexes within the specified domain. In this case the returned pyCore.adouble_array object will contain the parameter values at the selected points in the specified domain.
3. The third one accepts four arguments: pyCore.daeDomain object, and three integers: startIndex, endIndex and step (which is basically a slice, that is a portion of a list of indexes: start through end-1, by the increment step). More info about slices can be found in the Python documentation. In this case the returned pyCore.adouble_array object will contain the parameter values at the points in the specified domain defined by the slice object.
Suppose that the variable myVar has to be equal to the sum of values in the array values that holds values from the parameter myParam at the specified indexes in the domains myDomain1 and myDomain2:
$myVar = \sum values$
There are several different scenarios for creating the array values from the parameter myParam distributed on two domains:
# Notation:
# - myDomain1, myDomain2 are daeDomain objects
# - n1, n2 are the number of points in the myDomain1 and myDomain2 domains
# - eq1, eq2 are daeEquation objects
# - mySum is daeVariable object
# - myParam is daeParameter object distributed on myDomain1 and myDomain2 domains
# - values is the adouble_array object
# Case 1. An array contains the following values from myParam:
# - the first point in the domain myDomain1
# - all points from the domain myDomain2
# All expressions below are equivalent:
values = myParam.array(0, '*')
values = myParam.array(0, [])
eq1.Residual = mySum() - Sum(values)
# Case 2. An array contains the following values from myParam:
# - the first three points in the domain myDomain1
# - all even points from the domain myDomain2
values = myParam.array([0,1,2], slice(0, myDomain2.NumberOfPoints, 2))
eq2.Residual = mySum() - Sum(values)
The case 1. translates into:
$mySum = myParam(0,0) + myParam(0,1) + ... + myParam(0,n_2 - 1)$
where n2 is the number of points in the domain myDomain2.
The case 2. translates into:
$\begin{split}mySum = & myParam(0,0) + myParam(0,2) + myParam(0,4) + ... + myParam(0, n_2 - 1) + \\ & myParam(1,0) + myParam(1,2) + myParam(1,4) + ... + myParam(1, n_2 - 1) + \\ & myParam(2,0) + myParam(2,2) + myParam(2,4) + ... + myParam(2, n_2 - 1)\end{split}$
### 6.2.2. Variable types¶
Variable types are used in DAE Tools to describe variables and they contain the following information:
• Name: string
• Units: pyUnits.unit object
• LowerBound: float
• UpperBound: float
• InitialGuess: float
• AbsoluteTolerance: float
Declaration of variable types is commonly done outside of the model definition (in the module scope).
#### 6.2.2.1. Declaring variable types¶
A variable type can be declared in the following way:
# Temperature type with units Kelvin, limits 100-1000K, the default value 273K and the absolute tolerance 1E-5
typeTemperature = daeVariableType("Temperature", K, 100, 1000, 273, 1E-5)
### 6.2.3. Distribution domains¶
There are two types of domains in DAE Tools:
• Simple arrays
• Distributed domains (used to distribute variables, parameters, and equations in space)
Distributed domains can form uniform grids (the default) or non-uniform grids (user-specified). In DAE Tools many objects can be distributed on domains: parameters, variables, equations, even models and ports. Distributing a model on a domain (that is in space) can be useful for modelling of complex multi-scale systems where each point in the domain have a corresponding model instance. In addition, domain points values can be obtained as a numpy one-dimensional array; this way DAE Tools can be easily used in conjunction with other scientific python libraries: NumPy, SciPy and many other.
Again, the domains are defined in two phases:
• Declaring a domain in the model
• Initialising it in the simulation
#### 6.2.3.1. Declaring domains¶
Domains are declared in the pyCore.daeModel.__init__() function:
self.myDomain = daeDomain("myDomain", parentModel, units, "description")
#### 6.2.3.2. Initialising domains¶
Domains are initialised in the pyActivity.daeSimulation.SetUpParametersAndDomains() function. To set up a domain as a simple array the function pyCore.daeDomain.CreateArray() can be used:
# Array of N elements
myDomain.CreateArray(N)
while to set up a domain distributed on a structured grid the function pyCore.daeDomain.CreateStructuredGrid():
# Uniform structured grid with N elements and bounds [lowerBound, upperBound]
myDomain.CreateStructuredGrid(N, lowerBound, upperBound)
where the lower and upper bounds can be simple floats or quantity objects. If the simple floats are used it is assumed that they represent values with the same units as in the domain definition. Typically, it is better to use quantities to avoid mistakes with wrong units:
# Uniform structured grid with 10 elements and bounds [0,1] in centimeters:
myDomain.CreateStructuredGrid(10, 0.0 * cm, 1.0 * cm)
Nota bene
Domains with N elements consists of N+1 points.
It is also possible to create an unstructured grid (for use in Finite Element models). However, creation and setup of such domains is an implementation detail of corresponding modules (i.e. pyDealII).
In certain situations it is not desired to have a uniform distribution of the points within the given interval, defined by the lower and upper bounds. In these cases, a non-uniform structured grid can be specified using the attribute pyCore.daeDomain.Points which contains the list of the points and that can be manipulated by the user:
# First create a structured grid domain
myDomain.CreateStructuredGrid(10, 0.0, 1.0)
# The original 11 points are: [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
# If the system is stiff at the beginning of the domain more points can be placed there
myDomain.Points = [0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.60, 1.00]
The effect of uniform and non-uniform grids is given in Fig. 6.1 (a simple heat conduction problem from the tutorial_3 has been served as a basis for comparison). Here, there are three cases:
• Black line: the analytic solution
• Blue line (10 intervals): uniform grid - a very rough prediction
• Red line (10 intervals): non-uniform grid - more points at the beginning of the domain
Fig. 6.1 Effect of uniform and non-uniform grids on numerical solution (zoomed to the first 5 points)
It can be clearly observed that in this problem the more precise results are obtained by using denser grid at the beginning of the interval.
#### 6.2.3.3. Using domains¶
The most commonly used functions are:
Nota bene
The functions pyCore.daeDomain.__call__(), pyCore.daeDomain.__getitem__() and pyCore.daeDomain.array() can only be used to build equations’ residual expressions. On the other hand, the attributes pyCore.daeDomain.Points and pyCore.daeDomain.npyPoints can be used to access the domain points at any point.
The arguments of the pyCore.daeDomain.array() function are the same as explained in Using parameters.
1. To get a point at the specified index within the domain the pyCore.daeDomain.__getitem__() function (operator []) can be used. For instance, if the variable myVar has to be equal to the sixth point in the domain myDomain:
$myVar = myDomain[5]$
the following can be used:
# Notation:
# - eq is a daeEquation object
# - myDomain is daeDomain object
# - myVar is daeVariable object
eq.Residual = myVar() - myDomain[5]
### 6.2.4. Variables¶
There are various types of variables in DAE Tools. They can be:
• Ordinary
• Distributed
and:
• Algebraic
• Differential
• Constant (that is their value is assigned by fixing the number of degrees of freedom - DOF)
Again, variables are defined in two phases:
• Declaring a variable in the model
• Initialising it, if required (by assigning its value or setting an initial condition) in the simulation
#### 6.2.4.1. Declaring variables¶
Variables are declared in the pyCore.daeModel.__init__() function. An ordinary variable can be declared in the following way:
self.myVar = daeVariable("myVar", variableType, parentModel, "description")
Variables can also be distributed on domains. A distributed variable can be declared in the following way:
self.myVar = daeVariable("myVar", variableType, parentModel, "description")
self.myVar.DistributeOnDomain(myDomain)
# Or simply:
self.myVar = daeVariable("myVar", variableType, parentModel, "description", [myDomain])
#### 6.2.4.2. Initialising variables¶
Variables are initialised in the pyActivity.daeSimulation.SetUpVariables() function:
• To assign the variable value/fix the degrees of freedom the following can be used:
myVar.AssignValue(value)
or, if the variable is distributed:
for i in range(myDomain.NumberOfPoints):
myVar.AssignValue(i, value)
# or using a numpy array of values
myVar.AssignValues(values)
where value can be either a float (i.e. 1.34) or the pyUnits.quantity object (i.e. 1.34 * W/(m*K)), and values is a numpy array of floats or pyUnits.quantity objects. If the simple floats are used it is assumed that they represent values with the same units as in the variable type definition.
• To set an initial condition use the following:
myVar.SetInitialCondition(value)
or, if the variable is distributed:
for i in range(myDomain.NumberOfPoints):
myVar.SetInitialCondition(i, value)
# or using a numpy array of values
myVar.SetInitialConditions(values)
where the value can again be either a float or the pyUnits.quantity object, and values is a numpy array of floats or pyUnits.quantity objects. If the simple floats are used it is assumed that they represent values with the same units as in the variable type definition.
• To set an absolute tolerance the following can be used:
myVar.SetAbsoluteTolerances(1E-5)
• To set an initial guess use the following:
myVar.SetInitialGuess(value)
or, if the variable is distributed:
for i in range(0, myDomain.NumberOfPoints):
myVar.SetInitialGuess(i, value)
# or using a numpy array of values
myVar.SetInitialGuesses(values)
where the value can again be either a float or the pyUnits.quantity object and values is a numpy array of floats or pyUnits.quantity objects.
#### 6.2.4.3. Using variables¶
The most commonly used functions are:
Nota bene
The functions pyCore.daeVariable.__call__(), pyCore.dt(), pyCore.d(), pyCore.d2(), pyCore.daeVariable.array(), pyCore.dt_array(), pyCore.d_array() and pyCore.d2_array() can only be used to build equations’ residual expressions. On the other hand, the functions pyCore.daeVariable.GetValue, pyCore.daeVariable.SetValue and pyCore.daeVariable.npyValues can be used to access the variable data at any point.
All above mentioned functions are called in the same way as explained in Using parameters. More information will be given here on getting time and partial derivatives.
1. To get a time derivative of the ordinary variable the function pyCore.dt() can be used. For instance, if a time derivative of the variable myVar has to be equal to some constant, let’s say 1.0:
${ d(myVar) \over {d}{t} } = 1$
the following can be used:
# Notation:
# - eq is a daeEquation object
# - myVar is an ordinary daeVariable
eq.Residual = dt(myVar()) - 1.0
2. To get a time derivative of a distributed variable the pyCore.dt() function can be used again. For instance, if a time derivative of the distributed variable myVar has to be equal to some constant at each point of the domain myDomain:
${\partial myVar(i) \over \partial t} = 1; \forall i \in [0, n]$
the following can be used:
# Notation:
# - myDomain is daeDomain object
# - n is the number of points in the myDomain
# - eq is a daeEquation object distributed on the myDomain
# - d is daeDEDI object (used to iterate through the domain points)
# - myVar is daeVariable object distributed on the myDomain
d = eq.DistributeOnDomain(myDomain, eClosedClosed)
eq.Residual = dt(myVar(d)) - 1.0
This code translates into a set of n equations.
Obviously, a variable can be distributed on more than one domain. To write a similar equation for a two-dimensional variable:
${d(myVar(d_1, d_2)) \over dt} = 1; \forall d_1 \in [0, n_1], \forall d_2 \in [0, n_2]$
the following can be used:
# Notation:
# - myDomain1, myDomain2 are daeDomain objects
# - n1 is the number of points in the myDomain1
# - n2 is the number of points in the myDomain2
# - eq is a daeEquation object distributed on the domains myDomain1 and myDomain2
# - d is daeDEDI object (used to iterate through the domain points)
# - myVar is daeVariable object distributed on the myDomaina and myDomain2
d1 = eq.DistributeOnDomain(myDomain1, eClosedClosed)
d2 = eq.DistributeOnDomain(myDomain2, eClosedClosed)
eq.Residual = dt(myVar(d1,d2)) - 1.0
This code translates into a set of n1 * n2 equations.
3. To get a partial derivative of a distributed variable the functions pyCore.d() and pyCore.d2() can be used. For instance, if a partial derivative of the distributed variable myVar has to be equal to 1.0 at each point of the domain myDomain:
${\partial myVar(d) \over \partial myDomain} = 1.0; \forall d \in [0, n]$
we can write:
# Notation:
# - myDomain is daeDomain object
# - n is the number of points in the myDomain
# - eq is a daeEquation object distributed on the myDomain
# - d is daeDEDI object (used to iterate through the domain points)
# - myVar is daeVariable object distributed on the myDomain
d = eq.DistributeOnDomain(myDomain, eClosedClosed)
eq.Residual = d(myVar(d), myDomain, discretizationMethod=eCFDM, options={}) - 1.0
# since the defaults are eCFDM and an empty options dictionary the above is equivalent to:
eq.Residual = d(myVar(d), myDomain) - 1.0
Again, this code translates into a set of n equations.
The default discretisation method is center finite difference method (eCFDM) and the default discretisation order is 2 and can be specified in the options dictionary: options["DiscretizationOrder"] = integer. At the moment, only the finite difference discretisation methods are supported by default (but the finite volume and finite elements implementations exist through the third party libraries):
• Center finite difference method (eCFDM)
• Backward finite difference method (eBFDM)
• Forward finite difference method (eFFDM)
### 6.2.5. Ports¶
Ports can be used to provide an interface of a model, that is the model inputs and outputs. The models with ports can be used to create flowsheets where models are inter-connected by (the same type of) ports. Ports can be defined as inlet or outlet depending on whether they represent model inputs or model outputs. Like models, ports can contain domains, parameters and variables.
In DAE Tools ports are defined by deriving a new class from the base pyCore.daePort class. An empty port definition is presented below:
class myPort(daePort):
def __init__(self, name, parent = None, description = ""):
daePort.__init__(self, name, type, parent, description)
# Declaration/instantiation of domains, parameters and variables
...
The process consists of the following steps:
1. Calling the base class constructor:
daePort.__init__(self, name, type, parent, description)
2. Declaring domains, parameters and variables in the pyCore.daePort.__init__() function
The same rules apply as described in the Developing models section.
Two ports can be connected by using the pyCore.daeModel.ConnectPorts() function.
#### 6.2.5.1. Instantiating ports¶
Ports are instantiated in the pyCore.daeModel.__init__() function:
self.myPort = daePort("myPort", eInletPort, parentModel, "description")
### 6.2.6. Event ports¶
Event ports are also used to connect two models; however, they allow sending of discrete messages (events) between models. Events can be triggered manually or when a specified condition is satisfied. The main difference between event and ordinary ports is that the former allow a discrete communication between models while latter allow a continuous exchange of information. Messages contain a floating point value that can be used by a recipient. Upon a reception of an event certain actions can be executed. The actions are specified in the pyCore.daeModel.ON_EVENT() function.
Two event ports can be connected by using the pyCore.daeModel.ConnectEventPorts() function. A single outlet event port can be connected to unlimited number of inlet event ports.
#### 6.2.6.1. Instantiating event ports¶
Event ports are instantiated in the pyCore.daeModel.__init__() function:
self.myEventPort = daeEventPort("myEventPort", eOutletPort, parentModel, "description")
### 6.2.7. Equations¶
There are four types of equations in DAE Tools:
• Ordinary or distributed
• Continuous or discontinuous
Distributed equations are equations which are distributed on one or more domains and valid on the selected points within those domains. Equations can be distributed on a whole domain, on a portion of it or even on a single point (useful for specifying boundary conditions).
#### 6.2.7.1. Declaring equations¶
Equations are declared in the pyCore.daeModel.DeclareEquations() function. To declare an ordinary equation the pyCore.daeModel.CreateEquation() function can be used:
eq = model.CreateEquation("MyEquation", "description")
while to declare a distributed equation:
eq = model.CreateEquation("MyEquation")
d = eq.DistributeOnDomain(myDomain, eClosedClosed)
Equations can be distributed on a whole domain or on a portion of it. Currently there are 7 options:
• Distribute on a closed (whole) domain - analogous to: $$x \in [x_0, x_n]$$
• Distribute on a left open domain - analogous to: $$x \in (x_0, x_n]$$
• Distribute on a right open domain - analogous to: $$x \in [x_0, x_n)$$
• Distribute on a domain open on both sides - analogous to: $$x \in (x_0, x_n)$$
• Distribute on the lower bound - only one point: $$x \in \{ x_0 \}$$
• Distribute on the upper bound - only one point: $$x \in \{ x_n \}$$
• Custom array of points within a domain: $$x \in \{ x_0, x_3, x_7, x_8 \}$$
where $$x_0$$ stands for the LowerBound and $$x_n$$ stands for the UpperBound of the domain.
An overview of various bounds is given in the table below. Assume that we have an equation which is distributed on two domains: x and y. The table shows various options while distributing an equation. Green squares represent portions of a domain included in the distributed equation, while white squares represent excluded portions.
x = eClosedClosed; y = eClosedClosed $$x \in [x_0, x_n], y \in [y_0, y_n]$$ x = eOpenOpen; y = eOpenOpen $$x \in ( x_0, x_n ), y \in ( y_0, y_n )$$ x = eClosedClosed; y = eOpenOpen $$x \in [x_0, x_n], y \in ( y_0, y_n )$$ x = eClosedClosed; y = eOpenClosed $$x \in [x_0, x_n], y \in ( y_0, y_n ]$$ x = eLowerBound; y = eClosedOpen $$x = x_0, y \in [ y_0, y_n )$$ x = eLowerBound; y = eClosedClosed $$x = x_0, y \in [y_0, y_n]$$ x = eUpperBound; y = eClosedClosed $$x = x_n, y \in [y_0, y_n]$$ x = eLowerBound; y = eUpperBound $$x = x_0, y = y_n$$
#### 6.2.7.2. Defining equations (equation residual expressions)¶
Equations in DAE Tools are given in implicit (acausal) form and specified as residual expressions. For instance, to define a residual expression of an ordinary equation:
${\partial V_{14} \over \partial t} + {V_1 \over V_{14} + 2.5} + sin(3.14 \cdot V_3) = 0$
the following can be used:
# Notation:
# - V1, V3, V14 are ordinary variables
eq.Residal = dt(V14()) + V1() / (V14() + 2.5) + sin(3.14 * V3())
To define a residual expression of a distributed equation:
${\partial V_{14}(x,y)) \over \partial t} + {V_1 \over V_{14}(x,y) + 2.5} + sin(3.14 \cdot V_3(x,y)) = 0; \forall x \in [0, nx], \forall y \in (0, ny)$
the following can be used:
# Notation:
# - V1 is an ordinary variable
# - V3 and V14 are variables distributed on domains x and y
eq = model.CreateEquation("MyEquation")
dx = eq.DistributeOnDomain(x, eClosedClosed)
dy = eq.DistributeOnDomain(y, eOpenOpen)
eq.Residal = dt(V14(dx,dy)) + V1() / ( V14(dx,dy) + 2.5) + sin(3.14 * V3(dx,dy) )
where dx and dy are pyCore.daeDEDI (which is short for daeDistributedEquationDomainInfo) objects. These objects are used internally by the framework to iterate over the domain points when generating a set of equations from a distributed equation. If a pyCore.daeDEDI object is used as an argument of the operator (), dt, d, d2, array, dt_array, d_array, or d2_array functions, it represents a current index in the domain which is being iterated. Hence, the equation above is equivalent to writing:
# Notation:
# - V1 is an ordinary variable
# - V3 and V14 are variables distributed on domains x and y
for dx in range(0, x.NumberOfPoints): # x: [x0, xn]
for dy in range(1, y.NumberOfPoints-1): # y: (y0, yn)
eq = model.CreateEquation("MyEquation_%d_%d" % (dx, dy) )
eq.Residal = dt(V14(dx,dy)) + V1() / ( V14(dx,dy) + 2.5) + sin(3.14 * V3(dx,dy) )
The second way can be used for writing equations that are different for different points within domains.
pyCore.daeDEDI class has the pyCore.daeDEDI.__call__ (operator ()) function defined which returns the current index as the adouble object. In addition, the class provides operators + and - which can be used to return the current index offset by the specified integer. For instance, to define the equation below:
$V_1(x) = V_2(x) + V_2(x+1); \forall x \in [0, nx)$
the following can be used:
# Notation:
# - V1 and V2 are variables distributed on the x domain
eq = model.CreateEquation("MyEquation")
dx = eq.DistributeOnDomain(x, eClosedOpen)
eq.Residal = V1(dx) - ( V2(dx) + V2(dx+1) )
#### 6.2.7.3. Supported mathematical operations and functions¶
DAE Tools support five basic mathematical operations (+, -, *, /, **) and the following standard mathematical functions: sqrt, pow, log, log10, exp, min, max, floor, ceil, abs, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh, atan2, erf).
To define conditions the following comparison operators: < (less than), <= (less than or equal), == (equal), != (not equal), > (greater), >= (greater than or equal) and the following logical operators: & (logical AND), | (logical OR), ~ (logical NOT) can be used.
Nota bene
Since it is not allowed to overload Python’s operators and, or and not they cannot be used to define logical conditions; therefore, the custom operators &, | and ~ are defined and should be used instead.
#### 6.2.7.4. Interoperability with NumPy¶
The adouble class is designed with numpy library in mind. It redefines all standard mathematical functions available in numpy (i.e. sqrt, exp, log, log10, sin, cos etc.) so that the numpy functions also operate on the adouble objects. Therefore, the adouble class can be used as an ordinary data type in numpy. In addition, numpy and DAE Tools mathematical functions are interchangeable. In the example given below, the Exp() and numpy.exp() function calls produce identical results:
# Notation:
# - Var is an ordinary variable
# - result is an ordinary variable
eq = self.CreateEquation("...")
eq.Residual = result() - numpy.exp( Var() )
# The above is identical to:
eq.Residual = result() - Exp( Var() )
Often, it is desired to apply numpy/scipy numerical functions on arrays of adouble objects. In those cases the functions such as array(), d_array(), dt_array(), Array() etc. are NOT applicable since they return adouble_array objects. However, numpy arrays can be created and populated with adouble objects and numpy functions applied on them. In addition, an adouble_array object can be created from resulting numpy arrays of adouble objects, if necessary.
For instance, to define the equation below:
$sum = \sum\limits_{i=0}^{N_x-1} \left( V_1(i) + 2 \cdot V_2(i)^2 \right)$
the following code can be used:
# Notation:
# - x is a continuous domain
# - V1 is a variable distributed on the x domain
# - V2 is a variable distributed on the x domain
# - sum is an ordinary variable
# - ndarr_V1 is one dimensional numpy array with dtype=object
# - ndarr_V2 is one dimensional numpy array with dtype=object
# - Nx is the number of points in the domain x
# 1.Create empty numpy arrays as a container for daetools adouble objects
ndarr_V1 = numpy.empty(Nx, dtype=object)
ndarr_V2 = numpy.empty(Nx, dtype=object)
# 2. Fill the created numpy arrays with adouble objects
ndarr_V1[:] = [V1(x) for x in range(Nx)]
ndarr_V2[:] = [V2(x) for x in range(Nx)]
# Now, ndarr_V1 and ndarr_V2 represent arrays of Nx adouble objects each:
# ndarr_V1 := [V1(0), V1(1), V1(2), ..., V1(Nx-1)]
# ndarr_V2 := [V2(0), V2(1), V2(2), ..., V2(Nx-1)]
# 3. Create an equation using the common numpy/scipy functions/operators
eq = self.CreateEquation("sum")
eq.Residual = sum() - numpy.sum(ndarr_V1 + 2*ndarr_V2**2)
# If adouble_aray is needed after operations on a numpy array, the following two functions can be used:
# Both return an adouble_array object.
#### 6.2.7.5. Details on autodifferentiation support¶
Fig. 6.2 Equation evaluation tree in DAE Tools
The equation F in Fig. 6.2 is a result of the following DAE Tools equation:
eq = model.CreateEquation("F", "F description")
eq.Residal = dt(V14()) + V1() / (V14() + 2.5) + Sin(3.14 * V3())
As it has been described in the previous sections, domains, parameters, and variables contain functions that return adouble/adouble_array objects used to construct the evaluation trees. These functions include functions to get a value of a domain/parameter/variable (operator ()), to get a time or a partial derivative of a variable (functions dt(), d(), or d2()) or functions to obtain an array of values, time or partial derivatives (array(), dt_array(), d_array(), and d2_array()).
Another useful feature of DAE Tools equations is that they can be exported into MathML or Latex format and easily visualised.
#### 6.2.7.6. Defining boundary conditions¶
Assume that a simple heat transfer needs to be modelled: heat conduction through a very thin rectangular plate. At one side (at y = 0) we have a constant temperature (500 K) while at the opposite end we have a constant flux (1E6 W/m2). The problem can be described by a single distributed equation:
# Notation:
# - T is a variable distributed on x and y domains
# - rho, k, and cp are parameters
eq = model.CreateEquation("MyEquation")
dx = eq.DistributeOnDomain(x, eClosedClosed)
dy = eq.DistributeOnDomain(y, eOpenOpen)
eq.Residual = rho() * cp() * dt(T(dx,dy)) - k() * ( d2(T(dx,dy), x) + d2(T(dx,dy), y) )
The equation is defined on the y domain open on both ends; thus, the additional equations (boundary conditions at y = 0 and y = ny points) need to be specified to make the system well posed:
$\begin{split}T(x,y) &= 500; \forall x \in [0, nx], y = 0 \\ -k \cdot {\partial T(x,y) \over \partial y} &= 1E6; \forall x \in [0, nx], y = ny\end{split}$
To do so, the following equations can be used:
# "Bottom edge" boundary conditions:
bceq = model.CreateEquation("Bottom_BC")
dx = bceq.DistributeOnDomain(x, eClosedClosed)
dy = bceq.DistributeOnDomain(y, eLowerBound)
bceq.Residal = T(dx,dy) - Constant(500 * K) # Constant temperature (500 K)
# "Top edge" boundary conditions:
bceq = model.CreateEquation("Top_BC")
dx = bceq.DistributeOnDomain(x, eClosedClosed)
dy = bceq.DistributeOnDomain(y, eUpperBound)
bceq.Residal = - k() * d(T(dx,dy), y) - Constant(1E6 * W/m**2) # Constant flux (1E6 W/m2)
### 6.2.8. PDE on unstructured grids using the Finite Elements Method¶
DAE Tools support numerical simulation of partial differential equations on unstructured grids using the Finite Elements method. Currently, DAE Tools use the deal.II library for low-level tasks such as mesh loading, assembly of the system stiffness and mass matrices and the system load vector, management of boundary conditions, quadrature formulas, management of degrees of freedom, and the generation of the results. After an initial assembly phase the matrices are used to generate DAE Tools equations which are solved together with the rest of the system. All details about the mesh, basis functions, quadrature rules, refinement etc. are handled by the deal.II library and can be to some extent configured by the modeller.
The unique feature of this approach is a capability to use DAE Tools variables to specify boundary conditions, time varying coefficients and non-linear terms, and evaluate quantities such as surface/volume integrals. This way, the finite element model is fully integrated with the rest of the model and multiple FE systems can be created and coupled together. In addition, non-linear finite element systems are automatically supported and the equation discontinuities can be handled as usual in DAE Tools.
DAE Tools provide four main classes to support the deal.II library:
1. dealiiFiniteElementDOF_nD
In deal.II it represents a degree of freedom distributed on a finite element domain. In DAE Tools it represents a variable distributed on a finite element domain.
2. dealiiFiniteElementSystem_nD (implements daeFiniteElementObject)
It is a wrapper around deal.II FESystem<dim> class and handles all finite element related details. It uses information about the mesh, quadrature and face quadrature formulas, degrees of freedom and the FE weak formulation to assemble the system’s mass matrix (Mij), stiffness matrix (Aij) and the load vector (Fi).
3. dealiiFiniteElementWeakForm_nD
Contains weak form expressions for the contribution of FE cells to the system/stiffness matrices, the load vector, boundary conditions and (optionally) surface/volume integrals (as an output).
4. daeFiniteElementModel
daeModel-derived class that use system matrices/vectors from the dealiiFiniteElementSystem_nD object to generate a system of equations in the following form:
$\left[ M_{ij} \right] \left\{ {dx_j}\over{dt} \right\} + \left[ A_{ij} \right] \left\{ x_j \right\} = \left\{ F_i \right\}$
This system is in a general case a DAE system, although it can also be a linear or a non-linear system (if the mass matrix is zero).
A typical use-case scenario consists of the following steps:
1. The starting point is a definition of the dealiiFiniteElementSystem_nD class (where nD can be 1D, 2D or 3D). That includes specification of:
• Mesh file in one of the formats supported by deal.II (GridIn)
• Degrees of freedom (as a python list of dealiiFiniteElementDOF_nD objects). Every dof has a name which will be also used to declare DAE Tools variable with the same name, description, finite element space FE (deal.II FiniteElement<dim> instance) and the multiplicity
• Quadrature formulas for elements and their faces
2. Creation of daeFiniteElementModel object (similarly to the ordinary DAE Tools model) with the finite element system object as the last argument.
3. Definition of the weak form of the problem using the functions provided by DAE Tools (for 1D, 2D and 3D):
• phi(): corresponds to shape_value in deal.II
• dphi(): corresponds to shape_grad in deal.II
• d2phi(): corresponds to shape_hessian in deal.II
• phi_vector(): corresponds to shape_value of vector dofs in deal.II
• dphi_vector(): corresponds to shape_grad of vector dofs in deal.II
• d2phi_vector(): corresponds to shape_hessian of vector dofs in deal.II
• div_phi(): corresponds to divergence in deal.II
• JxW(): corresponds to the mapped quadrature weight in deal.II
• xyz(): returns the point for the specified quadrature point in deal.II
• normal(): corresponds to the normal_vector in deal.II
• function_value(): wraps Function<dim> object that returns a value
• dof(): returns daetools variable at the given index (adouble object)
• dof_approximation(): returns FE approximation of a quantity as a daetools variable (adouble object)
• dof_hessian_approximation(): returns FE hessian approximation of a quantity as a daetools variable (adouble object)
• vector_dof_approximation(): returns FE approximation of a vector quantity as a daetools variable (adouble object)
• vector_dof_gradient_approximation(): returns FE approximation of a vector quantity as a daetools variable (adouble object)
• adouble(): wraps any daetools expression to be used in matrix assembly
• tensor1(): wraps deal.II Tensor<rank=1>
• tensor2(): wraps deal.II Tensor<rank=2>
• tensor2(): wraps deal.II Tensor<rank=3>
More information about the finite element method in DAE Tools can be found in the API reference and in Finite Element Tutorials (in particular the Tutorial deal.II 1 source code).
### 6.2.9. State Transition Networks¶
Discontinuous equations are equations that take different forms subject to certain conditions. For example, to model a flow through a pipe one can observe three different flow regimes:
• Laminar: if Reynolds number is less than 2,100
• Transient: if Reynolds number is greater than 2,100 and less than 10,000
• Turbulent: if Reynolds number is greater than 10,000
From any of these three states the system can go to any other state. This type of discontinuities is called a reversible discontinuity and can be described using IF(), ELSE_IF(), ELSE() and END_IF() functions:
IF(Re() <= 2100) # (Laminar flow)
#... (equations go here)
ELSE_IF(Re() > 2100 & Re() < 10000) # (Transient flow)
#... (equations go here)
ELSE() # (Turbulent flow)
#... (equations go here)
END_IF()
The comparison operators operate on pyCore.adouble objects and Float values. Units consistency is strictly checked and expressions including Float values are allowed only if a variable or parameter is dimensionless. The following expressions are valid:
# Notation:
# - T is a variable with units: K
# - m is a variable with units: kg
# - p is a dimensionless parameter
# T < 0.5 K
T() < Constant(0.5 * K)
# (T >= 300 K) or (m < 1 kg)
(T() >= Constant(300 * K)) | (m < Constant(0.5 * kg))
# p <= 25.3 (use of the Constant function not necessary)
p() <= 25.3
Reversible discontinuities can be symmetrical and non-symmetrical. The above example is symmetrical. However, to model a CPU and its power dissipation one can observe three operating modes with the following state transitions:
• Normal mode
• switch to Power saving mode if CPU load is below 5%
• switch to Fried mode if the temperature is above 110 degrees
• Power saving mode
• switch to Normal mode if CPU load is above 5%
• switch to Fried mode if the temperature is above 110 degrees
• Fried mode
• Damn, no escape from here... go to the nearest shop and buy a new one! Or, donate some money to DAE Tools project :-)
What can be seen is that from the Normal mode the system can either go to the Power saving mode or to the Fried mode. The same stands for the Power saving mode: the system can either go to the Normal mode or to the Fried mode. However, once the temperature exceeds 110 degrees the CPU dies (let’s say it is heavily overclocked) and there is no return. This type of discontinuities is called an irreversible discontinuity and can be described using STN(), STATE(), END_STN() functions:
STN("CPU")
STATE("Normal")
#... (equations go here)
ON_CONDITION( CPULoad() < 0.05, switchToStates = [ ("CPU", "PowerSaving") ] )
ON_CONDITION( T() > Constant(110*K), switchToStates = [ ("CPU", "Fried") ] )
STATE("PowerSaving")
#... (equations go here)
ON_CONDITION( CPULoad() >= 0.05, switchToStates = [ ("CPU", "Normal") ] )
ON_CONDITION( T() > Constant(110*K), switchToStates = [ ("CPU", "Fried") ] )
STATE("Fried")
#... (equations go here)
END_STN()
The function pyCore.daeModel.ON_CONDITION() is used to define actions to be performed when the specified condition is satisfied. In addition, the function pyCore.daeModel.ON_EVENT() can be used to define actions to be performed when an event is trigerred on a specified event port. Details on how to use pyCore.daeModel.ON_CONDITION() and pyCore.daeModel.ON_EVENT() functions can be found in the OnCondition actions and OnEvent actions sections, respectively.
### 6.2.10. OnCondition actions¶
The function ON_CONDITION() can be used to define actions to be performed when a specified condition is satisfied. The available actions include:
• Changing the active state in specified State Transition Networks (argument switchToStates)
• Re-assigning or re-ininitialising specified variables (argument setVariableValues)
• Triggering an event on the specified event ports (argument triggerEvents)
• Executing user-defined actions (argument userDefinedActions)
Nota bene
OnCondition actions can be added to models or to states in State Transition Networks (pyCore.daeSTN or pyCore.daeIF):
• When added to a model they will be active throughout the simulation
• When added to a state they will be active only when that state is active
Nota bene
switchToStates, setVariableValues, triggerEvents and userDefinedActions are empty by default. The user has to specify at least one action.
For instance, to execute some actions when the temperature becomes greater than 340 K the following can be used:
def DeclareEquations(self):
...
self.ON_CONDITION( T() > Constant(340*K), switchToStates = [ ('STN', 'State'), ... ],
setVariableValues = [ (variable, newValue), ... ],
triggerEvents = [ (eventPort, eventMessage), ... ],
userDefinedActions = [ userDefinedAction, ... ] )
where the first argument of the ON_CONDITION() function is a condition specifying when the actions will be executed and:
• switchToStates is a list of tuples (string ‘STN Name’, string ‘State name to become active’)
• setVariableValues is a list of tuples (daeVariable object, adouble object)
• triggerEvents is a list of tuples (daeEventPort object, adouble object)
• userDefinedActions is a list of user defined objects derived from the base daeAction class
For more details on how to use ON_CONDITION() function have a look on tutorial_13.
### 6.2.11. OnEvent actions¶
The function ON_EVENT() can be used to define actions to be performed when an event is triggered on the specified event port. The available actions are the same as in the ON_CONDITION() function.
Nota bene
OnEvent actions can be added to models or to states in State Transition Networks (pyCore.daeSTN or pyCore.daeIF):
• When added to a model they will be active throughout the simulation
• When added to a state they will be active only when that state is active
Nota bene
switchToStates, setVariableValues, triggerEvents and userDefinedActions are empty by default. The user has to specify at least one action.
For instance, to execute some actions when an event is triggered on an event port the following can be used:
def DeclareEquations(self):
...
self.ON_EVENT( eventPort, switchToStates = [ ('STN', 'State'), ... ],
setVariableValues = [ (variable, newValue), ... ],
triggerEvents = [ (eventPort, eventMessage), ... ],
userDefinedActions = [ userDefinedAction, ... ] )
where the first argument of the ON_EVENT() function is the daeEventPort object to be monitored for events, while the rest of the arguments is the same as in the ON_CONDITION() function.
For more details on how to use ON_EVENT() function have a look on tutorial_13.
## 6.3. Configuration files¶
Various options used by DAE Tools objects are located in the daetools/daetools.cfg config file (in JSON format). The configuration file can be obtained using the global function daeGetConfig():
cfg = daeGetConfig()
which returns the daeConfig object. The configuration file is first searched in the HOME directory, the application folder and finally in the default location. It also can be specified manually using the function daeSetConfigFile(). However, this has to be done before the DAE Tools objects are created. The current configuration file name can be retrieved using the ConfigFileName attribute. The options can also be programmatically changed using the Get/Set functions i.e. GetBoolean()/SetBoolean():
cfg = daeGetConfig()
checkUnitsConsistency = cfg.GetBoolean("daetools.core.checkUnitsConsistency")
cfg.SetBoolean("daetools.core.checkUnitsConsistency", True)
Supported data types are: Boolean, Integer, Float and String. The whole configuration file with all options can be printed using:
cfg = daeGetConfig()
print(cfg)
The sample configuration file is given below:
{
"daetools":
{
"core":
{
"checkForInfiniteNumbers": false,
"eventTolerance": 1E-7,
"pythonIndent": " ",
"checkUnitsConsistency": true,
"resetLAMatrixAfterDiscontinuity": true,
"printInfo": false,
"deepCopyClonedNodes": true
},
"activity":
{
"timeHorizon": 100.0,
"reportingInterval": 1.0,
"objFunctionAbsoluteTolerance": 1E-8,
"constraintsAbsoluteTolerance": 1E-8,
"measuredVariableAbsoluteTolerance": 1E-8
},
"datareporting":
{
},
"logging":
{
"tcpipLogPort": 51000
},
"minlpsolver":
{
"printInfo": false
},
"IDAS":
{
"relativeTolerance": 1E-5,
"nextTimeAfterReinitialization": 1E-7,
"printInfo": false,
"numberOfSTNRebuildsDuringInitialization": 1000,
"SensitivitySolutionMethod": "Staggered",
"SensErrCon": false,
"maxNonlinIters": 3,
"sensRelativeTolerance": 1E-5,
"sensAbsoluteTolerance": 1E-5,
"MaxOrd": 5,
"MaxNumSteps": 1000,
"InitStep": 0.0,
"MaxStep": 0.0,
"MaxErrTestFails": 10,
"MaxNonlinIters": 4,
"MaxConvFails": 10,
"NonlinConvCoef": 0.33,
"SuppressAlg": false,
"NoInactiveRootWarn": false,
"NonlinConvCoefIC": 0.0033,
"MaxNumStepsIC": 5,
"MaxNumJacsIC": 4,
"MaxNumItersIC": 10,
"LineSearchOffIC": false
},
"superlu":
{
"factorizationMethod": "SamePattern_SameRowPerm",
"useUserSuppliedWorkSpace": false,
"workspaceSizeMultiplier": 3.0,
"workspaceMemoryIncrement": 1.5
},
"BONMIN":
{
"IPOPT":
{
"print_level": 0,
"tol":1E-5,
"linear_solver": "mumps",
"hessianApproximation": "limited-memory",
}
},
"NLOPT":
{
"printInfo": true,
"xtol_rel": 1E-6,
"xtol_abs": 1E-6,
"ftol_rel": 1E-6,
"ftol_abs": 1E-6,
"constr_tol": 1E-6
}
}
}
## 6.4. Units and quantities¶
There are three classes in the framework: base_unit, unit and quantity. The base_unit class handles seven SI base dimensions: length, mass, time, electric current, temperature, amount of substance, and luminous intensity (m, kg, s, A, K, mol, cd). The unit class operates on base units defined using the base seven dimensions. The quantity class defines a numerical value in terms of a unit of measurement (it contains the value and its units).
There is a large pool of base units and units defined (all base and derived SI units) in the pyUnits module:
• m
• s
• cd
• A
• mol
• kg
• g
• t
• K
• sr
• min
• hour
• day
• l
• dl
• ml
• N
• J
• W
• V
• C
• F
• Ohm
• T
• H
• S
• Wb
• Pa
• P
• St
• Bq
• Gy
• Sv
• lx
• lm
• kat
• knot
• bar
• b
• Ci
• R
• rd
• rem
and all above with 20 SI prefixes:
• yotta = 1E+24 (symbol Y)
• zetta = 1E+21 (symbol Z)
• exa = 1E+18 (symbol E)
• peta = 1E+15 (symbol P)
• tera = 1E+12 (symbol T)
• giga = 1E+9 (symbol G)
• mega = 1E+6 (symbol M)
• kilo = 1E+3 (symbol k)
• hecto = 1E+2 (symbol h)
• deka = 1E+1 (symbol da)
• deci = 1E-1 (symbol d)
• centi = 1E-2 (symbol c)
• milli = 1E-3 (symbol m)
• micro = 1E-6 (symbol u)
• nano = 1E-9 (symbol n)
• pico = 1E-12 (symbol p)
• femto = 1E-15 (symbol f)
• atto = 1E-18 (symbol a)
• zepto = 1E-21 (symbol z)
• yocto = 1E-24 (symbol y)
for instance: kmol (kilo mol), MW (mega Watt), ug (micro gram) etc.
New units can be defined in the following way:
rho = unit({"kg":1, "dm":-3})
The constructor accepts a dictionary of base_unit : exponent items as its argument. The above defines a new density unit $$\frac{kg}{dm^3}$$.
The unit class defines mathematical operators *, / and ** to allow creation of derived units. Thus, the density unit can be also defined in the following way:
mass = unit({"kg" : 1})
volume = unit({"dm" : 3})
rho = mass / volume
Quantities are created by multiplying a value with desired units:
heat = 1.5 * J
The quantity class defines all mathematical operators (+, -, *, / and **) and mathematical functions.
heat = 1.5 * J
time = 12 * s
power = heat / time
Units-consistency of equations and logical conditions is strictly enforced (although it can be switched off, if required). For instance, the operation below is not allowed:
power = heat + time
since their units are not consistent (J + s).
## 6.5. Data Reporters¶
There is a large number of available data reporters in DAE Tools:
The best starting point in creating custom data reporters is daeDataReporterLocal class. It internally does all the processing and offers to users the Process property (daeDataReceiverProcess instance) which contains all domains, parameters and variables in the simulation.
The following functions have to be implemented (overloaded):
• Connect(): Connects the data reporter. In the case when the local data reporter is used it may contain a file name, for instance.
• Disconnect(): Disconnects the data reporter.
• IsConnected(): Checks if the data reporter is connected or not.
All functions must return True if successful or False otherwise.
An empty custom data reporter is presented below:
class MyDataReporter(daeDataReporterLocal):
def __init__(self):
daeDataReporterLocal.__init__(self)
def Connect(self, ConnectString, ProcessName):
...
return True
def Disconnect(self):
...
return True
def IsConnected(self):
...
return True
To write the results into a file the daeDataReporterFile base class can be used. It writes the data into a file in the WriteDataToFile() function called in the Disconnect() function. The only function that needs to be overloaded is WriteDataToFile() while the base class handles all other operations.
daeDataReceiverProcess class contains the following properties that can be used to obtain the results data from a data reporter:
The example below shows how to save the results to the Matlab .mat file:
class MyDataReporter(daeDataReporterFile):
def __init__(self):
daeDataReporterFile.__init__(self)
def WriteDataToFile(self):
mdict = {}
for var in self.Process.Variables:
mdict[var.Name] = var.Values
import scipy.io
scipy.io.savemat(self.ConnectString,
mdict,
appendmat=False,
format='5',
long_field_names=False,
do_compression=False,
oned_as='row')
The filename is provided as the first argument (connectString) of the Connect() function.
|
2017-03-28 13:55:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41025102138519287, "perplexity": 4626.223930335256}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189771.94/warc/CC-MAIN-20170322212949-00508-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://quantum-computing.ibm.com/services/resources/docs/resources/manage/systems/dynamic-circuits/Bitflip-Repetition-Code
|
# Bit-flip repetition code¶
To enable real-time quantum error correction (QEC), we require the capability to dynamically control quantum program flow during execution so that quantum gates may be conditioned on measurement results. In this tutorial, we will run the bit-flip code, which is a very simple form of QEC. We will demonstrate a dynamic quantum circuit that can protect an encoded qubit from a single bit-flip error, and then evaluate the performance of the bit-flip code.
from typing import List, Optional
from qiskit import transpile, QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit.result import marginal_counts
import warnings
warnings.filterwarnings("ignore")
Make sure to update the hub/group/provider/backend_name below.
from qiskit_ibm_provider import IBMProvider
provider = IBMProvider()
hub = "<hub>"
group = "<group>"
project = "<project>"
hgp = f"{hub}/{group}/{project}"
backend_name = "<backend name>"
backend = provider.get_backend(backend_name, instance=hgp)
print(f"Using {backend.name}")
Using ibm_peekskill
shots: int = 1000 # Number of shots to run each circuit for
## Run basic quantum error correction¶
Note: This tutorial draws heavily from the excellent tutorials 1 and 2.
A few more excellent references for an introduction to quantum error correction are 3, 4, 5, and 6.
### The basics of error correction¶
The basic ideas behind error correction are the same for quantum information as for classical information. We can therefore begin by considering a very straightforward example: speaking on the phone. If someone asks you a question to which the answer is ‘yes’ or ‘no’, the way you give your response will depend on two factors:
• How important is it that you are understood correctly?
• How good is your connection?
Both of these can be parameterized with probabilities. For the first, we can use , the maximum acceptable probability of being misunderstood. If you are being asked to confirm a preference for ice cream flavors, and don’t mind too much if you get vanilla rather than chocolate, might be quite high. If you are being asked a question on which someone’s life depends, however, will be much lower.
For the second we can use , the probability that your answer is garbled by a bad connection. For simplicity, let’s imagine a case where a garbled ‘yes’ doesn’t simply sound like nonsense, but sounds like a ‘no’. Similarly, a ‘no’ is transformed into ‘yes’. Then is the probability that you are completely misunderstood.
A good connection or a relatively unimportant question will result in . In this case it is fine to simply answer in the most direct way possible: you just say ‘yes’ or ‘no’.
If, however, your connection is poor and your answer is important, then . A single ‘yes’ or ‘no’ is not enough in this case. The probability of being misunderstood would be too high. Instead we must encode our answer in a more complex structure, allowing the receiver to decode our meaning despite the possibility that the message is disrupted. The simplest method is the one that many would do without thinking: simply repeat the answer many times; for example, say ‘yes, yes, yes’ instead of ‘yes’ or ‘no, no, no’ instead of ‘no’.
If the receiver hears ‘yes, yes, yes’ in this case, they will of course conclude that the sender meant ‘yes’. If they hear ‘no, yes, yes’, ‘yes, no, yes’ or ‘yes, yes, no’, they will probably conclude the same thing, since there is more positivity than negativity in the answer. To be misunderstood in this case, at least two of the replies need to be garbled. The probability for this, , will be less than . When encoded in this way, the message therefore becomes more likely to be understood. The code cell below shows an example of this.
p1 = 0.01
p3 = 3 * p1**2 * (1-p1) + p1**3 # probability of 2 or 3 errors
print('Probability of a single reply being garbled: {}'.format(p1))
print('Probability of a majority of the three replies being garbled: {:.4f}'.format(p3))
Probability of a single reply being garbled: 0.01
Probability of a majority of the three replies being garbled: 0.0003
Say , that our acceptance probability is just at the threshold of a single garbled reply. With the majority voting technique above, , and this technique solves our problem. If it had not, then we can simply add more repetitions. The fact that above comes from the fact that we need at least two replies to be garbled to flip the majority, and so even the most likely possibilities have a probability of . For five repetitions we’d need at least three replies to be garbled to flip the majority, which happens with probability . The value for in this case would then be even lower. Indeed, as we increase the number of repetitions, will decrease exponentially. No matter how bad the connection, or how certain we need to be of our message getting through correctly, we can achieve it by just repeating our answer enough times.
Though this is a simple example, it contains all the aspects of error correction.
• There is some information to be sent or stored: in this case, a ‘yes’ or ‘no’.
• The information is encoded in a larger system to protect it against noise: in this case, by repeating the message.
• The information is finally decoded, mitigating the effects of noise: in this case, by trusting the majority of the transmitted messages.
This same encoding scheme can also be used for binary, by simply substituting 0 and 1 for ‘yes’ and ‘no’. It can therefore also be easily generalized to qubits by using the states and . In each case it is known as the repetition code. Many other forms of encoding are also possible in both the classical and quantum cases, which outperform the repetition code in many ways; however, its status as the simplest encoding lends it to certain applications.
## Quantum error correction¶
While repetition conceptually underlies the codes in this notebook, implementing the repetition code naively will not allow us to store quantum information. This is because the very act of measuring our data qubits will destroy the encoded state. For example, consider that we encoded the state as . If we measure all of the qubits according to the Born rule, we will obtain the outcome states with probability and .
If we then decode our encoded state we will obtain
Importantly this is not the state we encoded . This is because our measurement operation does not commute with the encoding of the state.
This might appear to imply that quantum error correcting codes are not possible. However, it turns out we can exploit additional ancilla qubits and entanglement to measure what are known as stabilizers that do not transform our encoded quantum information, while still informing us of some classes of errors that may have occurred.
## Stabilizer codes¶
A quantum stabilizer code encodes logical qubits into physical qubits. The coding rate is defined as the ratio of . The stabilizer is an Abelian subgroup of the Pauli group (importantly it does not matter in which order we apply our stabilizers). The +1 eigenspace of the stabilizer operators make up the codespace of the code. It will have dimension and can therefore encode qubits.
Stabilizer codes critically focus on correcting a discrete error set with support from the Pauli group . Assume the set of possible errors are . For example, in a bit-flip code with three qubits encoding the quantum state, we will have .
As both and are subsets of the Pauli group , if an error is applied to a state it will either commute or anticommute with one of the generators of the stabilizer group . If the error anticommutes with a stabilizer element, it will be both detectable and correctable, as it will change the sign of the stabilizer measurement.
In general we can measure each element of the generator of the stabilizer (the minimal representation) , which produces a syndrome bitstring where we assign 0 for a generator that commutes (+1 eigenspace) and 1 for a generator that anticommutes (-1 eigenspace). This is known as measuring the error syndrome.
If commutes with : $$If anti-commutes with :$$
Measuring the stabilizer generators will not modify our encoded qubit, since by definition they commute with the codespace; however, errors will modify our encoded qubit.
## Correct errors¶
We now know how to detect an error (by measuring the stabilizers and observing their eigenvalues).
The next step is to decode and correct the error.
The condition for recovery is either: (1) or (2) there exists that anticommutes with . For more details, see here.
While the above is a mathematical introduction to stabilizer codes, we believe it is better to learn the practicalities of implementing them with some hands-on practice.
### Stabilizer codes in practice¶
In general there is a common structure to most experiments with stabilizer codes. Someday we will write a logical program and have the hardware determine how to encode and decode syndromes to correct errors in the program. Today, we are just starting to explore the practical implementation of QEC. We therefore manually encode our logical state in physical qubits with circuits that we write at the physical qubit level. The typical flow to such an experiment is as follows:
1. Initialize our input physical state we wish to protect.
2. Encode our state in our stabilizer’s codespace as .
3. Apply an error channel. These may be simulated Krauss maps, probabilistically applied gates (as we demonstrate below), or simply the passive error channel in the device of interest. .
4. Measure the syndrome by measuring the generators of our stabilizer .
5. Apply a decoding sequence to our stabilizer and apply the correction sequence for the error (if possible) that we have decoded, . It is in computing where dynamic circuit capabilities are important.
6. Loop to 3, if we are running multiple iterations of the stabilizer sequence.
7. Decode our encoded state to determine whether the encoded state was corrupted.
8. Measure the final data qubit to observe the state we protected and determine how well the code performed.
## Execute the bit-flip code on hardware¶
The bit-flip code is among the simplest examples of a stabilizer code. It can protect our state against a single bit-flip (X) error on any of the encoding qubits. If we consider the action of bit-flip error which maps and on any of our qubits, we have . The code requires five qubits: three are used to encode the protected state, and the remaining two are used as stabilizer measurement ancillas. Nonetheless, the ancillas are not counted, so this means our coding rate for this state is .
# Setup a base quantum circuit for our experiments
qreg_data = QuantumRegister(3)
qreg_measure = QuantumRegister(2)
creg_data = ClassicalRegister(3)
creg_syndrome = ClassicalRegister(2)
state_data = qreg_data[0]
ancillas_data = qreg_data[1:]
def build_qc() -> QuantumCircuit:
return QuantumCircuit(qreg_data, qreg_measure, creg_data, creg_syndrome)
To protect a quantum state, we must first prepare it. In general we can prepare the state $$. qc_init = build_qc() qc_init.x(qreg_data[0]) qc_init.barrier(qreg_data) qc_init.draw(output="mpl") ### Encode our logical state¶ To protect our qubit we must encode it in the codespace. For the case of the bit-flip code, this is very similar to the repetition code, where we implement repetition using the entangling CX gate rather than a classically conditioned bit-flip, as we would do in the classical case. The encoding circuit below will map . The codespace of our bit-flip code is therefore . The stabilizers for the bit-flip code is . Operationally, this means that a single bit-flip error applied to the qubits will modify the observed state of a stabilizer measurement, but leave it within the codespace. It is also straightforward to show that any two non-trivial stabilizer elements can generate the full stabilizer. For example, take the generator set . We see this as and . completing our stabilizer. This means that we must only measure our two generators and to detect any correctable error. It is easy to see the prepared state is a +1 eigenstate of our stabilizers . This is because the stabilizer measures the parity of the two target qubits. The circuit below maps$$
def encode_bit_flip(qc, state, ancillas):
control = state
for ancilla in ancillas:
qc.cx(control, ancilla)
qc.barrier(state, *ancillas)
return qc
qc_encode_bit = build_qc()
encode_bit_flip(qc_encode_bit, state_data, ancillas_data)
qc_encode_bit.draw(output="mpl")
### Prepare a decoding circuit¶
To readout our final state we must map it back from the codespace to a single qubit. For our code this is simply .
This will be used to map our state out of the codespace: $$def decode_bit_flip(qc, state, ancillas): inv = qc_encode_bit.inverse() return qc.compose(inv) qc_decode_bit = build_qc() qc_decode_bit = decode_bit_flip(qc_decode_bit, state_data, ancillas_data) qc_decode_bit.draw(output="mpl") ### A circuit that prepares our encoded state |\tilde{\Psi}_1\rangle ¶ Below we see how we can combine the state preparation and encoding steps to encode our state in the bit-flip code. qc_encoded_state_bit = qc_init.compose(qc_encode_bit) qc_encoded_state_bit.draw(output="mpl") ### Measure the syndome¶ If, however, we consider the action of bit-flip error , which maps and on any of our qubits, we have . The circuit below measures the stabilizer onto creg_measure[0] and onto creg_measure[1]. This effectively measures the parity of the two respective qubits in each stabilizer. • If we observe IZZ= -1 an error ocurred on qubit 0 or 1 • If we observe ZIZ= -1 an error ocurred on qubit 0 or 2 One important detail to note in the circuit below is that we reset our ancilla qubits after measuring the stabilizer. This is done so that we may reuse them for repeated stabilizer measurements. Also note we are making use of the fact that we have already observed the state of the qubit and are writing the conditional reset protocol directly to avoid another round of qubit measurement if we used the reset instruction. The circuit below will measure our stabilizers, generating a syndrome. We assume it is applied after some error channel that takes .$$
def measure_syndrome_bit(qc, qreg_data, qreg_measure, creg_measure):
qc.cx(qreg_data[0], qreg_measure[0])
qc.cx(qreg_data[1], qreg_measure[0])
qc.cx(qreg_data[0], qreg_measure[1])
qc.cx(qreg_data[2], qreg_measure[1])
qc.barrier(*qreg_data, *qreg_measure)
qc.measure(qreg_measure, creg_measure)
qc.x(qreg_measure[0]).c_if(creg_measure[0], 1)
qc.x(qreg_measure[1]).c_if(creg_measure[1], 1)
qc.barrier(*qreg_data, *qreg_measure)
return qc
qc_syndrome_bit = measure_syndrome_bit(build_qc(), qreg_data, qreg_measure, creg_syndrome)
qc_syndrome_bit.draw(output="mpl")
We can do conditional measurements in Qiskit either with the old-style control flow circuit.x(0).c_if(<condition>), or the new-style control-flow with circuit.if_test(<condition>); for more details, see here. In this tutorial, we are using the old-style control flow because of a temporary limitation with dynamic circuit scheduling support.
### Visualize our syndrome measurement¶
qc_measure_syndrome_bit = qc_encoded_state_bit.compose(qc_syndrome_bit)
qc_measure_syndrome_bit.draw(output="mpl")
## Decode and apply our correction sequence¶
Collectively measuring the stabilizers provides enough information to identify where a single X-flip error occurred.
• If we measure IZZ=-1 and ZIZ=1 the error ocurred on qubit 1
• If we measure IZZ=1 and ZIZ=-1 the error ocurred on qubit 2
• If we measure IZZ=-1 and ZIZ=-1 the error ocurred on qubit 0
The circuit below corrects our state in the case of a single bit-flip error:
def apply_correction_bit(qc, qreg_data, creg_syndrome):
qc.x(qreg_data[0]).c_if(creg_syndrome, 3)
qc.x(qreg_data[1]).c_if(creg_syndrome, 1)
qc.x(qreg_data[2]).c_if(creg_syndrome, 2)
qc.barrier(qreg_data)
return qc
qc_correction_bit = apply_correction_bit(build_qc(), qreg_data, creg_syndrome)
qc_correction_bit.draw(output="mpl")
def apply_final_readout(qc, qreg_data, creg_data):
qc.barrier(qreg_data)
qc.measure(qreg_data, creg_data)
return qc
qc_final_measure.draw(output="mpl")
## A complete cycle of the bit-flip code¶
We complete putting our building blocks together in the circuit below.
bit_code_circuit = qc_measure_syndrome_bit.compose(qc_correction_bit).compose(qc_final_measure)
bit_code_circuit.draw(output="mpl")
## Create a routine to perform multiple cycles of parity checks¶
The routine below will allow us to easily compose multiple cycles of our code interspersed by the error channels provided in qc_channels.
def build_error_correction_sequence(
qc_base: QuantumCircuit,
qc_init: Optional[QuantumCircuit],
qc_encode: QuantumCircuit,
qc_channels: List[QuantumCircuit],
qc_syndrome: QuantumCircuit,
qc_correct: QuantumCircuit,
qc_decode: Optional[QuantumCircuit] = None,
qc_final: Optional[QuantumCircuit] = None,
name=None,
) -> QuantumCircuit:
"""Build a typical error correction circuit"""
qc = qc_base
if qc_init:
qc = qc.compose(
qc_init
)
qc = qc.compose(
qc_encode
)
if name is not None:
qc.name = name
if not qc_channels:
qc_channels = [QuantumCircuit(*qc.qregs)]
for qc_channel in qc_channels:
qc = qc.compose(
qc_channel
).compose(
qc_syndrome
).compose(
qc_correct
)
if qc_decode:
qc = qc.compose(qc_decode)
if qc_final:
qc = qc.compose(qc_final)
return qc
For example, we can use this to replicate our bit-flip code from above:
bit_code_circuit = build_error_correction_sequence(
build_qc(),
qc_init,
qc_encode_bit,
[],
qc_syndrome_bit,
qc_correction_bit,
None,
qc_final_measure,
)
bit_code_circuit.draw(output="mpl")
## Run on hardware¶
We will now choose a qubit layout on the device and transpile our circuit to a realizable circuit on the device. We will see that due to limited connectivity we will have to perform qubit routing. As not all qubits are of equal quality, it is important to choose good sets of qubits in the hardware.
# feel free to adjust the initial layout
print(initial_layout := [2, 1, 3, 4, 5])
[2, 1, 3, 4, 5]
transpiled_bit_code_circuit = transpile(bit_code_circuit, backend, initial_layout=initial_layout)
transpiled_bit_code_circuit.draw(output="mpl")
## Execute the circuit in hardware¶
Below we execute the circuit in the hardware and then decode the execution results.
job_bit_flip = backend.run(transpiled_bit_code_circuit, shots=shots, dynamic=True)
result_bit_flip = job_bit_flip.result()
def decode_result(data_counts, syndrome_counts, verbose=True, indent=0):
shots = sum(data_counts.values())
success_trials = data_counts.get('000', 0) + data_counts.get('111', 0)
failed_trials = shots-success_trials
error_correction_events = shots-syndrome_counts.get('00', 0)
if verbose:
print(f"{' ' * indent}Bit flip errors were detected/corrected on {error_correction_events}/{shots} trials")
print(f"{' ' * indent}A final parity error was detected on {failed_trials}/{shots} trials")
return error_correction_events, failed_trials
data_indices = list(range(len(qreg_data)))
syndrome_indices = list(range(data_indices[-1]+1, len(qreg_data) + len(qreg_measure) ))
marginalized_data_result = marginal_counts(result_bit_flip, data_indices)
marginalized_syndrome_result = marginal_counts(result_bit_flip, syndrome_indices)
print(f'Completed bit code experiment data measurement counts {marginalized_data_result.get_counts(0)}')
print(f'Completed bit code experiment syndrome measurement counts {marginalized_syndrome_result.get_counts(0)}')
decode_result(marginalized_data_result.get_counts(0), marginalized_syndrome_result.get_counts(0));
Completed bit code experiment data measurement counts {'000': 5, '001': 5, '010': 2, '011': 31, '101': 37, '111': 900, '110': 17, '100': 3}
Completed bit code experiment syndrome measurement counts {'00': 931, '10': 29, '11': 9, '01': 31}
Bit flip errors were detected/corrected on 69/1000 trials
A final parity error was detected on 95/1000 trials
## Emulate a random error source¶
Here we will use some more control flow to insert a random bit-flip error by using an ancilla qubit as a source of random bit-flips, and then look at the performance of our code.
from qiskit.circuit.library import IGate, XGate, ZGate
qreg_error_ancilla = QuantumRegister(1)
creg_error_ancilla = ClassicalRegister(1)
def build_random_error_channel(gate, ancilla, creg_ancilla, error_qubit):
"""Build an error channel that randomly applies a single-qubit gate based on an ancilla qubit measurement result"""
qc = build_qc()
qc.barrier(ancilla, error_qubit.register)
# 50-50 chance of applying a bit-flip
qc.h(ancilla)
qc.measure(ancilla, creg_ancilla)
qc.append(gate, [error_qubit]).c_if(creg_ancilla, 1)
qc.barrier(ancilla, error_qubit.register)
return qc
qc_id_error_channel = build_random_error_channel(IGate(), qreg_error_ancilla, creg_error_ancilla, qreg_data[0])
print("Identity error channel")
print(qc_id_error_channel.draw(idle_wires=False, output='text', fold=-1, cregbundle=False))
qc_bit_flip_error_channel = build_random_error_channel(XGate(), qreg_error_ancilla, creg_error_ancilla, qreg_data[0])
print("Bit flip error channel")
print(qc_bit_flip_error_channel.draw(idle_wires=False, output='text', fold=-1, cregbundle=False))
qc_phase_flip_error_channel = build_random_error_channel(ZGate(), qreg_error_ancilla, creg_error_ancilla, qreg_data[0])
print("Phase flip error channel")
print(qc_phase_flip_error_channel.draw(idle_wires=False, output='text', fold=-1, cregbundle=False))
Identity error channel
░ ┌───┐ ░
q0_0: ─░─────────┤ I ├─░─
░ ┌───┐┌─┐└─╥─┘ ░
q9: ─░─┤ H ├┤M├──╫───░─
░ └───┘└╥┘ ║ ░
c2: ═════════╩═══■═════
0x1
Bit flip error channel
░ ┌───┐ ░
q0_0: ─░─────────┤ X ├─░─
░ ┌───┐┌─┐└─╥─┘ ░
q9: ─░─┤ H ├┤M├──╫───░─
░ └───┘└╥┘ ║ ░
c2: ═════════╩═══■═════
0x1
Phase flip error channel
░ ┌───┐ ░
q0_0: ─░─────────┤ Z ├─░─
░ ┌───┐┌─┐└─╥─┘ ░
q9: ─░─┤ H ├┤M├──╫───░─
░ └───┘└╥┘ ║ ░
c2: ═════════╩═══■═════
0x1
## Inject the error into our error-correction sequence¶
In order to emulate a source of error, we add the error channel in between the encoding phase where we encode our qubit in the codespace, and the syndrome measurement phase where we measure the syndromes. In other words, we inject the error into our encoded logical state.
def build_error_channel_base():
qc = build_qc()
return qc
qc_id_error_bit_flip_code = build_error_correction_sequence(
build_error_channel_base(),
qc_init,
qc_encode_bit,
[qc_id_error_channel],
qc_syndrome_bit,
qc_correction_bit,
None,
qc_final_measure,
"Identity error channel"
)
qc_bit_flip_error_bit_flip_code = build_error_correction_sequence(
build_error_channel_base(),
qc_init,
qc_encode_bit,
[qc_bit_flip_error_channel],
qc_syndrome_bit,
qc_correction_bit,
None,
qc_final_measure,
"Bit flip error channel"
)
qc_phase_flip_error_bit_flip_code = build_error_correction_sequence(
build_error_channel_base(),
qc_init,
qc_encode_bit,
[qc_phase_flip_error_channel],
qc_syndrome_bit,
qc_correction_bit,
None,
qc_final_measure,
"Phase flip error channel"
)
circuits_error_channels_bit_flip_code = [qc_id_error_bit_flip_code, qc_bit_flip_error_bit_flip_code, qc_phase_flip_error_bit_flip_code]
qc_bit_flip_error_bit_flip_code.draw(output="mpl")
## Execute the circuits¶
# We need to add an extra ancilla qubit to our layout
# It doesn't matter which qubit for the most part as we are using it as a
# source of random information
error_channel_layout = error_channel_layout = initial_layout + list(set(range(circuits_error_channels_bit_flip_code[0].num_qubits)) - set(initial_layout))[:1]
transpiled_circuits_error_channels_bit_flip_code = transpile(circuits_error_channels_bit_flip_code, backend, initial_layout=error_channel_layout)
job_error_channels_bit_flip_code = backend.run(transpiled_circuits_error_channels_bit_flip_code, shots=shots, dynamic=True)
result_error_channels_bit_flip_code = job_error_channels_bit_flip_code.result()
def decode_error_channel_result(qc_init, data_counts, syndrome_counts, verbose=True, indent=0):
shots = sum(data_counts.values())
success_trials = data_counts.get('000', 0) + data_counts.get('111', 0)
failed_trials = shots-success_trials
error_correction_events = shots-syndrome_counts.get('00', 0)
if verbose:
print(f"{' ' * indent}Bit flip errors were detected/corrected on {error_correction_events}/{shots} trials")
print(f"{' ' * indent}A final parity error was detected on {failed_trials}/{shots} trials")
return error_correction_events, failed_trials
## Observe that we correct the majority of the bit-flip errors introduced¶
Note that injecting phase-flip errors has minimal impact on the final error outcome, because our qubit is in input state, which is not sensitive to phase-flip errors. This is not the case for an arbitrary input state.
qc_init_outcome = qc_init.copy()
qc_init_outcome.measure(qreg_data[0], 0)
qreg_indices = list(range(len(qreg_data)))
data_indices = qreg_indices[:1]
syndrome_indices = list(range(qreg_indices[-1]+1, len(qreg_data) + len(qreg_measure) ))
result_decoded_data_qubit_marginal_err_ch = marginal_counts(result_error_channels_bit_flip_code, data_indices)
result_data_marginal_err_ch = marginal_counts(result_error_channels_bit_flip_code, qreg_indices)
result_syndrome_marginal_err_ch = marginal_counts(result_error_channels_bit_flip_code, syndrome_indices)
for i, qc in enumerate(transpiled_circuits_error_channels_bit_flip_code):
print(f"For {qc.name} with bit flip code")
print(f' Completed bit code experiment decoded data qubit measurement counts {result_decoded_data_qubit_marginal_err_ch.get_counts(i)}')
print(f' Completed bit code experiment data qubits measurement counts {result_data_marginal_err_ch.get_counts(i)}')
print(f' Completed bit code experiment syndrome measurement counts {result_syndrome_marginal_err_ch.get_counts(i)}')
decode_error_channel_result(qc_init_outcome, result_data_marginal_err_ch.get_counts(i), result_syndrome_marginal_err_ch.get_counts(i), indent=4);
print("")
For Identity error channel with bit flip code
Completed bit code experiment decoded data qubit measurement counts {'1': 965, '0': 35}
Completed bit code experiment data qubits measurement counts {'111': 883, '101': 37, '110': 27, '011': 45, '010': 3, '100': 2, '000': 3}
Completed bit code experiment syndrome measurement counts {'10': 29, '00': 917, '01': 38, '11': 16}
Bit flip errors were detected/corrected on 83/1000 trials
A final parity error was detected on 114/1000 trials
For Bit flip error channel with bit flip code
Completed bit code experiment decoded data qubit measurement counts {'1': 919, '0': 81}
Completed bit code experiment data qubits measurement counts {'111': 871, '011': 22, '000': 33, '010': 19, '101': 24, '110': 20, '001': 2, '100': 9}
Completed bit code experiment syndrome measurement counts {'00': 471, '11': 447, '01': 34, '10': 48}
Bit flip errors were detected/corrected on 529/1000 trials
A final parity error was detected on 96/1000 trials
For Phase flip error channel with bit flip code
Completed bit code experiment decoded data qubit measurement counts {'1': 965, '0': 35}
Completed bit code experiment data qubits measurement counts {'011': 38, '111': 883, '010': 4, '100': 3, '000': 3, '101': 42, '110': 25, '001': 2}
Completed bit code experiment syndrome measurement counts {'10': 37, '11': 22, '00': 896, '01': 45}
Bit flip errors were detected/corrected on 104/1000 trials
A final parity error was detected on 114/1000 trials
In the next sections of the tutorial, we evaluate the performance of bit-flip code.
## Compare our bit-flip code to a code that applies the identity correction¶
We will now create a decoding sequence that does not correct the error, but rather performs conditional identity operations.
def apply_no_correction_bit(qc, qreg_data, creg_syndrome):
"""Apply an "identity correction"
We need to make sure to still apply the conditional gates so that the comparison to the bit-flip
code is faithful.
"""
qc.id(qreg_data[0]).c_if(creg_syndrome, 3)
qc.id(qreg_data[1]).c_if(creg_syndrome, 1)
qc.id(qreg_data[2]).c_if(creg_syndrome, 2)
qc.barrier(qreg_data)
return qc
qc_no_correction_bit = apply_no_correction_bit(build_qc(), qreg_data, creg_syndrome)
qc_no_correction_bit.draw(output="mpl")
We implement circuits that do not correct the bit-flip errors, using the routine above.
qc_id_error_no_correct = build_error_correction_sequence(
build_error_channel_base(),
qc_init,
qc_encode_bit,
[qc_id_error_channel],
qc_syndrome_bit,
qc_no_correction_bit,
None,
qc_final_measure,
"Identity error channel"
)
qc_bit_flip_error_no_correct = build_error_correction_sequence(
build_error_channel_base(),
qc_init,
qc_encode_bit,
[qc_bit_flip_error_channel],
qc_syndrome_bit,
qc_no_correction_bit,
None,
qc_final_measure,
"Bit flip error channel"
)
qc_phase_flip_error_no_correct = build_error_correction_sequence(
build_error_channel_base(),
qc_init,
qc_encode_bit,
[qc_phase_flip_error_channel],
qc_syndrome_bit,
qc_no_correction_bit,
None,
qc_final_measure,
"Phase flip error channel"
)
circuits_error_channels_no_correct = [qc_id_error_no_correct, qc_bit_flip_error_no_correct, qc_phase_flip_error_no_correct]
qc_id_error_no_correct.draw(output="mpl")
We will now execute the circuits that do not perform a correction.
# We need to add an extra ancilla qubit to our layout
# It doesn't matter which qubit for the most part as we are using it as a
# source of random information
error_channel_layout = error_channel_layout = initial_layout + list(set(range(circuits_error_channels_bit_flip_code[0].num_qubits)) - set(initial_layout))[:1]
transpiled_circuits_error_channels_no_correct = transpile(circuits_error_channels_no_correct, backend, initial_layout=error_channel_layout)
job_error_channels_no_correct = backend.run(transpiled_circuits_error_channels_no_correct, shots=shots, dynamic=True)
result_error_channels_no_correct = job_error_channels_no_correct.result()
The analysis below shows that the results are worse than that of the experiments where we indeed corrected the errors. We can see that the number of parity errors has increased significantly.
qc_init_outcome = qc_init.copy()
qc_init_outcome.measure(qreg_data[0], 0)
qreg_indices = list(range(len(qreg_data)))
data_indices = qreg_indices[:1]
syndrome_indices = list(range(qreg_indices[-1]+1, len(qreg_data) + len(qreg_measure) ))
result_decoded_data_qubit_marginal_no_correct = marginal_counts(result_error_channels_no_correct, data_indices)
result_data_marginal_no_correct = marginal_counts(result_error_channels_no_correct, qreg_indices)
result_syndrome_marginal_no_correct = marginal_counts(result_error_channels_no_correct, syndrome_indices)
for i, qc in enumerate(transpiled_circuits_error_channels_no_correct):
print(f'For {qc.name} with "identity" correction')
print(f' Completed bit code experiment decoded data qubit measurement counts {result_decoded_data_qubit_marginal_no_correct.get_counts(i)}')
print(f' Completed bit code experiment data qubits measurement counts {result_data_marginal_no_correct.get_counts(i)}')
print(f' Completed bit code experiment syndrome measurement counts {result_syndrome_marginal_no_correct.get_counts(i)}')
decode_error_channel_result(qc_init_outcome, result_data_marginal_no_correct.get_counts(i), result_syndrome_marginal_no_correct.get_counts(i), indent=4);
print("")
For Identity error channel with "identity" correction
Completed bit code experiment decoded data qubit measurement counts {'1': 959, '0': 41}
Completed bit code experiment data qubits measurement counts {'111': 895, '101': 43, '110': 33, '011': 21, '100': 2, '010': 4, '000': 2}
Completed bit code experiment syndrome measurement counts {'00': 909, '01': 52, '11': 17, '10': 22}
Bit flip errors were detected/corrected on 91/1000 trials
A final parity error was detected on 103/1000 trials
For Bit flip error channel with "identity" correction
Completed bit code experiment decoded data qubit measurement counts {'1': 476, '0': 524}
Completed bit code experiment data qubits measurement counts {'101': 27, '111': 432, '110': 485, '011': 13, '100': 23, '001': 4, '010': 14, '000': 2}
Completed bit code experiment syndrome measurement counts {'00': 454, '11': 468, '01': 38, '10': 40}
Bit flip errors were detected/corrected on 546/1000 trials
A final parity error was detected on 566/1000 trials
For Phase flip error channel with "identity" correction
Completed bit code experiment decoded data qubit measurement counts {'1': 963, '0': 37}
Completed bit code experiment data qubits measurement counts {'011': 35, '110': 30, '111': 874, '010': 3, '101': 51, '000': 3, '001': 3, '100': 1}
Completed bit code experiment syndrome measurement counts {'10': 34, '11': 21, '00': 900, '01': 45}
Bit flip errors were detected/corrected on 100/1000 trials
A final parity error was detected on 123/1000 trials
## Evaluate the performance of multiple stabilizer cycles¶
We will now evaluate multiple cycles of measuring the stabilizer and applying our correction sequence with the routines we have created above.
We perform experiments on our bit-flip code circuit with/without error correction. In addition, we study equivalent idle quantum circuits in order to compare our performance with a raw physical qubit. Our idle circuits prepare our state, but do not perform encoding/decoding/correcting errors; they instead idle for the same period of time.
We will plot the performance (Hellinger fidelity) of our quantum circuits as a function of the number of stabilizer cycles.
def apply_final_readout_invert(qc, qc_init, qreg_data, creg_data):
"""Apply inverse mapping so that we always try and measure |0> in the computational basis."""
qc = qc.compose(qc_init.inverse())
qc.barrier(qreg_data)
qc.measure(qreg_data, creg_data)
return qc
qc_final_measure_invert = apply_final_readout_invert(build_qc(), qc_init, qreg_data, creg_data)
qc_final_measure_invert.draw(output="mpl")
from collections import defaultdict
from qiskit.transpiler import PassManager
from qiskit_ibm_provider.transpiler.passes.scheduling import DynamicCircuitInstructionDurations, ALAPScheduleAnalysis
def get_circuit_duration_(qc: QuantumCircuit) -> int:
"""Get duration of circuit in hardware cycles."""
durations = DynamicCircuitInstructionDurations.from_backend(backend)
pm = PassManager([ALAPScheduleAnalysis(durations)])
pm.run(qc)
node_start_times = pm.property_set["node_start_time"]
block_durations = defaultdict(int)
for inst, (block, t0) in node_start_times.items():
block_durations[block] = max(block_durations[block], t0+inst.op.duration)
duration = sum(block_durations.values())
return duration
def build_idle_error_correction_sequence(
qc_base: QuantumCircuit,
qc_init: Optional[QuantumCircuit],
qc_encode: QuantumCircuit,
qc_channels: List[QuantumCircuit],
qc_syndrome: QuantumCircuit,
qc_correct: QuantumCircuit,
qc_decode: Optional[QuantumCircuit] = None,
qc_final: Optional[QuantumCircuit] = None,
initial_layout=initial_layout,
name: str = None,
) -> QuantumCircuit:
"""Build a quantum circuit that idles for the period of the input error correction sequence."""
qc = qc_base
if qc_init:
qc = qc.compose(
qc_init
)
if name is not None:
qc.name = name
qc_idle_region = qc_base.copy()
qc_idle_region.compose(
qc_encode
)
if not qc_channels:
qc_channels = [QuantumCircuit(*qc.qregs)]
for qc_channel in qc_channels:
qc_idle_region = qc_idle_region.compose(
qc_channel
).compose(
qc_syndrome
).compose(
qc_correct
)
if qc_decode:
qc_idle_region = qc_idle_region.compose(qc_decode)
qc_idle_transpiled = transpile(qc_idle_region, backend, initial_layout=initial_layout, scheduling_method=None)
idle_duration = get_circuit_duration_(qc_idle_transpiled)
qc_idle = qc_base.copy()
qc_idle.barrier()
for qubit in qc_idle.qubits:
qc_idle.delay(idle_duration, qubit, unit="dt")
qc_idle.barrier()
qc = qc.compose(qc_idle)
if qc_final:
qc = qc.compose(qc_final)
return qc
We now create an “idle” error channel.
# a helper routine to calculate the idle cycles
def convert_cycles(time_in_seconds: float, backend) -> int:
cycles = time_in_seconds / (backend.configuration().dt)
return int(cycles + (16 - (cycles % 16)))
def build_idle_error_channel(time_in_seconds, qreg):
idle_cycles = convert_cycles(time_in_seconds, backend)
qc_idle = build_qc()
qc_idle.delay(idle_cycles, qreg, "dt")
qc_idle.barrier()
return qc_idle
We will now perform a sweep of our bit-flip code circuit with/without error correction, and our equivalent idle circuit for a number of iterations.
num_rounds = 5
# Idle for a specified period in seconds
# This is how we build an idle error channel
idle_period = 5e-6
# Use the circuit we created above
qc_idle = build_idle_error_channel(idle_period, qreg_data)
qcs_corr_bit = []
qcs_no_corr_bit = []
qcs_idle_equiv_bit = []
for round_ in range(num_rounds):
qc_error_channels = [qc_idle] * (round_ + 1)
# bit-flip code with error correction
qcs_corr_bit.append(
build_error_correction_sequence(
build_qc(),
qc_init,
qc_encode_bit,
qc_error_channels,
qc_syndrome_bit,
qc_correction_bit,
qc_decode_bit,
qc_final_measure_invert,
f"With Correction {round_}"
)
)
# bit-flip code with no error correction (i.e., id error correction)
qcs_no_corr_bit.append(
build_error_correction_sequence(
build_qc(),
qc_init,
qc_encode_bit,
qc_error_channels,
qc_syndrome_bit,
qc_no_correction_bit,
qc_decode_bit,
qc_final_measure_invert,
name=f"Without Correction {round_}"
)
)
# equivalent idle circuit with no encoding/decoding/correcting errors
qcs_idle_equiv_bit.append(
build_idle_error_correction_sequence(
build_qc(),
qc_init,
qc_encode_bit,
qc_error_channels,
qc_syndrome_bit,
qc_correction_bit,
qc_decode_bit,
qc_final_measure_invert,
initial_layout=initial_layout,
name=f"Idle {round_}"
)
)
Below we execute and plot one of the circuits we constructed.
qcs_corr_bit[0].draw(output="mpl", idle_wires=False)
transpiled_qcs_corr_bit = transpile(qcs_corr_bit, backend, initial_layout=initial_layout)
job_qcs_corr_bit = backend.run(transpiled_qcs_corr_bit, shots=shots, dynamic=True)
transpiled_qcs_no_corr_bit = transpile(qcs_no_corr_bit, backend, initial_layout=initial_layout)
job_qcs_no_corr_bit = backend.run(transpiled_qcs_no_corr_bit, shots=shots, dynamic=True)
transpiled_qcs_idle_equiv_bit = transpile(qcs_idle_equiv_bit, backend, initial_layout=initial_layout)
job_qcs_idle_equiv_bit = backend.run(transpiled_qcs_idle_equiv_bit, shots=shots, dynamic=True)
result_qcs_corr_bit = job_qcs_corr_bit.result()
result_qcs_no_corr_bit = job_qcs_no_corr_bit.result()
result_qcs_idle_equiv_bit = job_qcs_idle_equiv_bit.result()
We use the ideal simulator below in calculating the fidelities.
from qiskit.providers.aer import Aer
ideal_sim = Aer.get_backend('qasm_simulator')
from qiskit.quantum_info.analysis import hellinger_fidelity
qc_init_outcome = qc_init.copy()
qc_init_outcome = qc_init_outcome.compose(qc_final_measure_invert)
transpiled_ideal = transpile(qc_init_outcome, backend, initial_layout=initial_layout)
result_ideal = ideal_sim.run(transpiled_ideal, shots=shots).result()
# Calculate the fidelity of our experiment given the ideal results obtained from our ideal simulator
def calculate_hellinger_fidelity(result_ideal, result_experiment, data_qubit=0):
result_ideal = marginal_counts(result_ideal, indices=[data_qubit])
result_experiment = marginal_counts(result_experiment, indices=[data_qubit])
counts_ideal = result_ideal.get_counts(0)
hellinger_fidelities = []
for circuit_idx in range(len(result_experiment.results)):
hellinger_fidelities.append(hellinger_fidelity(counts_ideal, result_experiment.get_counts(circuit_idx)))
return hellinger_fidelities
We use the routuine calculate_hellinger_fidelity above to extract the Hellinger fidelity for our sweeps.
fidelities_corr_bit = calculate_hellinger_fidelity(result_ideal, result_qcs_corr_bit)
fidelities_no_corr_bit = calculate_hellinger_fidelity(result_ideal, result_qcs_no_corr_bit)
fidelities_idle_equiv_bit = calculate_hellinger_fidelity(result_ideal, result_qcs_idle_equiv_bit)
Finally, we plot the performance of our error-correcting code as a function of the number of iterations.
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (12, 6)
iters = range(1, num_rounds+1)
plt.plot(iters, fidelities_corr_bit, label="Bit flip code - correction")
plt.plot(iters, fidelities_no_corr_bit, label="Bit flip code - no correction")
plt.plot(iters, fidelities_idle_equiv_bit, label="Idle equivalent circuit")
plt.ylabel("Hellinger Fidelity")
plt.xlabel("Correction cycles")
plt.xticks(iters)
plt.suptitle("Comparing the performance of error-correction strategies for multiple correction cycles")
plt.title(f"Idle period: {idle_period*1e6}us - Qubit layout: {initial_layout}")
plt.legend(loc="upper right")
<matplotlib.legend.Legend at 0x1463c88e0>
## Discussion/summary¶
In this tutorial, we learned how to perform a very simple form of QEC with dynamic circuits. We prepared a qubit in state and showed that with a bit-flip code circuit we can detect/correct errors.
We expect the bit-flip code to perform the best with a input state. Selecting a different input state, for example , will most likely lead to lower performance because of higher sensitivity to various types of error/noise. For example, as mentioned earlier in the notebook, is not sensitive to phase errors, while this is not the case for the state.
The important data to note is the average quality of protecting any input state. Please feel free to re-run this notebook, but instead of initializing with the state, select a different state, and evaluate the performance of protecting different input states.
import qiskit.tools.jupyter
%qiskit_version_table
### Version Information
Qiskit SoftwareVersion
qiskit-terra0.22.2
qiskit-aer0.11.0
qiskit-ignis0.4.0
qiskit-ibmq-provider0.19.2
qiskit0.39.0
System information
Python version3.8.13
Python compilerClang 13.0.0 (clang-1300.0.29.30)
Python builddefault, May 8 2022 17:53:05
OSDarwin
CPUs8
Memory (Gb)32.0
Tue Nov 08 16:20:43 2022 CST
|
2023-02-08 16:59:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.562953770160675, "perplexity": 7098.58643774905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00589.warc.gz"}
|
https://wiki.socr.umich.edu/index.php?title=SOCR_EduMaterials_Activities_ApplicationsActivities_TradingOptions&diff=cur&oldid=7951
|
# Difference between revisions of "SOCR EduMaterials Activities ApplicationsActivities TradingOptions"
## SOCR Applications Activities - Options
### Description
You can access the Trading-Options applet at the SOCR Applications Site, select Financial Applications --> TraidingOptions.
### Definition
An option is a contract between two investors:
• Issuer (or seller), holder of a short position. He sells the option.
• Holder (buyer), holder of a long position. He buys the option.
### Types of options
• Call option: Gives the holder the right to buy an asset by a certain date for a certain price called exercise price with a fee. This fee it is the price of the option or premium.
• Put option: Gives the holder the right to sell an asset by a certain date for a certain price called exercise price with a fee. This fee it is the price of the put or premium. The date specified it is called: the expiration date or maturity date. The price specified it is called the exercise price or the strike price.
There are European options (can be exercised only on the expiration date) and American options (can be exercised at any time up to the expiration date).
### Stock options mechanics
• Options are normally traded in units of 100 shares. The price of the option is on a per share basis. Therefore, if the price of an option is priced at \$0.50, the total premium for that option would be $$\50$$ ($$0.50 \times 100 = \50$$).
• Stock options are on a January, February, or March cycle. Stocks are randomly assigned in one of these three cycles. For example, IBM is on a January cycle (options can be bought on Jan, Apr, Jul, Oct).
Stock options expired on the Saturday immediately following the third Friday of the expiration month.
The call option will only be exercised if the stock price at expiration is larger than the exercise price. In this case the holder of the call will have a positive payoff. The put option will only be exercised if the stock price at expiration is lower than the exercise price. In this case the holder of the put will have a positive payoff. The two figures below shows when the holder or the seller make a positive payoff.
There is an infinite number of combinations that one can make using call and put options. Some of the combinations have special names, like straddles, strips, straps, bull spreads, bear spreads, butterfly spreads, covered call, etc. All these are shown in the SOCR Trading Options applet. Here are some snapshots:
### References
The materials above was partially taken from:
• Modern Portfolio Theory by Edwin J. Elton, Martin J. Gruber, Stephen J. Brown, and William N. Goetzmann, Sixth Edition, Wiley, 2003.
• Options, Futues, and Other Derivatives by John C. Hull, Sixth Edition, Pearson Prentice Hall, 2006.
• SOCR Applications Site
|
2022-12-01 20:11:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4543958604335785, "perplexity": 3191.286606867756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00702.warc.gz"}
|
https://engineer.john-whittington.co.uk/2015/06/simulink-raspberry-pi-driver-blocks/?replytocom=3112
|
Simulink Raspberry Pi Driver Blocks
June 13, 2015
Following on from adding support to wiringPi for the MCP4725 DAC, I wanted to add driver blocks to Simulink such that one could use them to create graphical models for the Raspberry Pi that could interface with the real-world – a workable alternative to expensive real-time targets.
Using the S-Function Builder and some other user created blocks (which didn’t work – see below) as a basis, the process isn’t too difficult thanks to the generic function prototypes wiringPi uses. All the low-level stuff is done in the library as explained in my previous post, the S-Function just has to call mcp4725Setup() at first step, followed by analogWrite() at each proceeding step. See the screenshots below.
‘Build’ creates the target language compile (.tlc) and .c code for the blocks such that Simulink Embedded Coder can incorporate them into the generated code. The process of generating, deploying to the Raspberry Pi and compiling over SSH is all very slick! Slick until compilation stops due to undefined references. The undefineds are the standard wiringPi functions – the Simulink makefile does not include the wiringPi library by default, even though the MATLAB Raspbian image has wiringPi installed. The error occurs with the older example blocks too so it looks like something changed at some point down the line.
Remote Build Makefile Setup
After a long time digging around in the remote build templates I finally found an undocumented command to get it working: xmakefilesetup. Run this in the MATLAB command window with the Raspberry Pi model open. The settings should be as below. Navigate to Linker > Arguments and add ‘lwiringPi’ on the end (this tells the linker to use the wiringPi library and so the wiringPi functions will be linked). If you’re using my ADC/DAC blocks with ADS1115 and MCP4725, you’ll also have to update the wiringPi library with my library by logging on via SSH and following these steps.
I created a model with four driver blocks with example usage that can be copied and pasted into other models – remember to ‘Build’ the s-functions in the model directories and install my version of wiringPi. It can be downloaded below.
rpi-driver-blocks
|
2021-05-18 11:56:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39248889684677124, "perplexity": 3407.605270693773}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00428.warc.gz"}
|
http://mathoverflow.net/revisions/48962/list
|
MathOverflow will be down for maintenance for approximately 3 hours, starting Monday evening (06/24/2013) at approximately 9:00 PM Eastern time (UTC-4).
3 added 84 characters in body
Based upon your examples, you may find it more natural to work with the slightly more general class of Prüfer domains (vs. Bezout domains) - i.e. finitely generated ideals$\:\ne 0$ are invertible (vs. principal). Prüfer domains are non-Noetherian generalizations of Dedekind domains. Their ubiquity stems from a remarkable confluence of interesting characterizations, e.g. CRT, or Gauss's Lemma for content ideals, or for ideals $\rm\ A\cap (B + C) = A\cap B + A\cap C\:,\:$ or $\rm\ (A + B)\ (A \cap B) = A\ B\$ etc. It's been estimated that there are close to 100 such characterizations known, e.g. see my sci.math post for 30.
As a simple example I'll give the natural Prüfer domain proof of a generalization of your example, viz. the ideal-theoretic $\:$ Freshman's Dream $\rm\ \ (A + B)^n = A^n + B^n\:.\:$ This theorem identity is true for both arithmetic of $\:$ GCDs $\:$ and invertible ideals simply because, in both cases, multiplication is cancellative and addition is idempotent, i.e. $\rm\ A + A = A\$ for ideals and $\rm\ (A,A) = A\$ for GCDs. Combining this these properties with the associative, commutative, distributive laws of addition and multiplication we obtain an extremely elementary high-school-level proof of the Freshman's Dream . e.g. - which is best illustrated for $\rm\: n = 2$2\:,\:$viz.$\rm\quad\quad (A + B)^4 \ =\ A^4 + A^3 B + A^2 B^2 + AB^3 + B^4 \rm\quad\quad\phantom{(A + B)^4 }\ =\ A^2\ (A^2 + AB + B^2) + (A^2 + AB + B^2)\ B^2 \rm\quad\quad\phantom{(A + B)^4 }\ =\ (A^2 + B^2)\ \:(A + B)^2 $So$\rm\ {(A + B)^2 }\ =\ \ A^2 + B^2\ $if$\rm\ A+B\ $is cancellative, e.g. if$\rm \rm\ A+B = 1$1\$ or if it's invertible.
The same proof generalizes for all $\rm\:n\:$ since, as above
$\rm\quad\quad (A + B)^{2n}\ =\ A^n\ (A^n + \cdots+ B^n) + (A^n +\cdots+ B^n)\ B^n$
$\rm\quad\quad\phantom{(A + B)^{2n}}\ =\ (A^n + B^n)\ (A + B)^n$
In the GCD case $\rm\ A+B\ := (A,B) = \gcd(A,B)\$ for $\rm\:A,B\:$ in a GCD-domain, i.e. a domain where $\rm\: \gcd(A,B)\:$ exists for all $\rm\:A,B \ne 0,0\:$. Hence Here too the Dream is true since $\rm\:(A,B)\:$ is cancellable, being nonzero in a domain. Note: (Note: one can unify the GCD and ideal cases by employing Divisor TheoryTheory).
In fact this yields yet another characterization: a domain is Prufer iff it satisfies the Freshman's Dream for all finitely generated ideals. See said sci.math post for further discussion.
2 added 4 characters in body; edited body
Based upon your examples, you may find it more natural to work with the slightly more general class of Prüfer domains (vs. Bezout domains) - i.e. finitely generated ideals$\:\ne 0$ are invertible (vs. principal). Prufer Prüfer domains are non-Noetherian generalizations of Dedekind domains. Their ubiquity stems from a remarkable confluence of interesting characterizations, e.g. CRT, or Gauss's Lemma for content ideals, or for ideals $\rm\ A\cap (B + C) = A\cap B + A\cap C\ C\:,\:$ or $\rm\ (A + B)\ (A \cap B) = A\ B\$ etc. It's been estimated that there are close to 100 such characterizations known, e.g. see my sci.math post for 30.
As a simple example I'll give the natural Prufer Prüfer domain proof of a generalization of your example, viz. the ideal-theoretic $\:$ Freshman's Dream $\rm\ (A + B)^n = A^n + B^n\:.\:$ This theorem is true for both arithmetic of GCDs and invertible ideals simply because, in both cases, multiplication is cancellative and addition is idempotent, i.e. $\rm\ A + A = A\$ for ideals and $\rm\ (A,A) = A\$ for GCDs. Combining this with the associative, commutative, distributive laws of addition and multiplication we obtain an extremely elementary high-school-level proof of the Freshman's Dream. e.g. for $\rm\: n = 2$
$\rm\quad\quad (A + B)^4 \ =\ A^4 + A^3 B + A^2 B^2 + AB^3 + B^4$
$\rm\quad\quad\phantom{(A + B)^4 }\ =\ A^2\ (A^2 + AB + B^2) + (A^2 + AB + B^2)\ B^2$
$\rm\quad\quad\phantom{(A + B)^4 }\ =\ (A^2 + B^2)\ \:(A + B)^2$
So $\rm\ {(A + B)^2 }\ =\ \ A^2 + B^2\$ if $\rm\ A+B\$ is cancellative, e.g. if $\rm A+B = 1$
The same proof generalizes for all $\rm\:n\:$ since, as above
$\rm\quad\quad (A + B)^{2n}\ =\ A^n\ (A^n + \cdots+ B^n) + (A^n +\cdots+ B^n)\ B^n$
$\rm\quad\quad\phantom{(A + B)^{2n}}\ =\ (A^n + B^n)\ (A + B)^n$
In the GCD case $\rm\ A+B\ := (A,B) = \gcd(A,B)\$ for $\rm\:A,B\:$ in a GCD-domain, i.e. a domain where $\rm\: \gcd(A,B)\:$ exists for all $\rm\:A,B \ne 0,0\:$. Hence the Dream is true since $\rm\:(A,B)\:$ is cancellable, being nonzero in a domain. Note: one can unify the GCD and ideal cases by employing Divisor Theory.
In fact this yields yet another characterization: a domain is Prufer iff it satisfies the Freshman's Dream for all finitely generated ideals. See said sci.math post for further discussion.
1
Based upon your examples, you may find it more natural to work with the slightly more general class of Prüfer domains (vs. Bezout domains) - i.e. finitely generated ideals$\:\ne 0$ are invertible (vs. principal). Prufer domains are non-Noetherian generalizations of Dedekind domains. Their ubiquity stems from a remarkable confluence of interesting characterizations, e.g. CRT, or Gauss's Lemma for content ideals, or for ideals $\rm\ A\cap (B + C) = A\cap B + A\cap C\$ or $\rm\ (A + B)\ (A \cap B) = A\ B\$ etc. It's been estimated that there are close to 100 such characterizations known, e.g. see my sci.math post for 30.
As a simple example I'll give the natural Prufer domain proof of a generalization of your example, viz. the ideal-theoretic $\:$ Freshman's Dream $\rm\ (A + B)^n = A^n + B^n\:.\:$ This theorem is true for both arithmetic of GCDs and invertible ideals simply because, in both cases, multiplication is cancellative and addition is idempotent, i.e. $\rm\ A + A = A\$ for ideals and $\rm\ (A,A) = A\$ for GCDs. Combining this with the associative, commutative, distributive laws of addition and multiplication we obtain an extremely elementary high-school-level proof of the Freshman's Dream. e.g. for $\rm\: n = 2$
$\rm\quad\quad (A + B)^4 \ =\ A^4 + A^3 B + A^2 B^2 + AB^3 + B^4$
$\rm\quad\quad\phantom{(A + B)^4 }\ =\ A^2\ (A^2 + AB + B^2) + (A^2 + AB + B^2)\ B^2$
$\rm\quad\quad\phantom{(A + B)^4 }\ =\ (A^2 + B^2)\ \:(A + B)^2$
So $\rm\ {(A + B)^2 }\ =\ \ A^2 + B^2\$ if $\rm\ A+B\$ is cancellative, e.g. if $\rm A+B = 1$
The same proof generalizes for all $\rm\:n\:$ since, as above
$\rm\quad\quad (A + B)^{2n}\ =\ A^n\ (A^n + \cdots+ B^n) + (A^n +\cdots+ B^n)\ B^n$
$\rm\quad\quad\phantom{(A + B)^{2n}}\ =\ (A^n + B^n)\ (A + B)^n$
In the GCD case $\rm\ A+B\ := (A,B) = \gcd(A,B)\$ for $\rm\:A,B\:$ in a GCD-domain, i.e. a domain where $\rm\: \gcd(A,B)\:$ exists for all $\rm\:A,B \ne 0,0\:$. Hence the Dream is true since $\rm\:(A,B)\:$ is cancellable, being nonzero in a domain. Note: one can unify the GCD and ideal cases by employing Divisor Theory.
In fact this yields yet another characterization: a domain is Prufer iff it satisfies the Freshman's Dream for all finitely generated ideals. See said sci.math post for further discussion.
|
2013-06-18 22:53:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8886944651603699, "perplexity": 558.4313617605387}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00028-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.pavenafoundation.or.th/sugdvk4b/s1n6aa6.php?page=maximum-order-volume-solution-b6d3f3
|
Price: $20.61 ($2.58 / Fl Oz) FREE Shipping on your first order. Reviewed in the United States on January 1, 2019. Filesystem limitation in Logical Volume Maximum filesystem size that can be used in LVM. For example, wine is about 12% v/v ethanol. – Warm the solution to body temperature. They are marked at 25 ml to 100 ml intervals depending on the overall size of the bag. Details & FREE Returns. Naturally based, green chemistry: moisturize, enhance, protect, restore and repair the hair. Determine the height of the box that will give a maximum volume. 4 people found this helpful. I will get right to the point: WORST hair product I have ever used in my life, and that is saying a LOT because I am a hair product junkie. Zero toxic chemicals. Reviewed in the United States on February 15, 2020, I purchased the product called Loma Maximum Volumizing Solution, but it is in a blue tube and not green as shown in the picture for this product's reviews. I tried my best to come up with an answer but I failed. We do not have any recommendations at this time. We construct uniformly high order accurate discontinuous Galerkin (DG) and weighted essentially non-oscillatory (WENO) finite volume (FV) schemes satisfying a strict maximum principle for scalar conservation laws and passive convection in incompressible flows, and positivity preserving for density and pressure for compressible Euler equations. Free returns are available for the shipping address you chose. Textbook solution for Pre-Algebra Student Edition 1st Edition Glencoe/McGraw-Hill Chapter 12.3 Problem 30STP. Try it on dry hair for a wild look. ft. = 1,100 sq.ft. This product smells absolutely amazing (as do pretty much all of the Loma products). My one complaint with it is that it leaves your hair a little crunchy. Depends on concentration (see conversion table below) E.g. SOLUTION 3 : Let variable x be the length of one edge of the square base and variable y the height of the box. Numerical Solution of Partial Differential Equations III, SYNSPADE 1975, pp. At one end of the bag are two ports of about the same length. a simple and clean approach to healthy hair and a sustainable environment. A company sells its products at unit price of $$100\,\$$ if the lot size does not exceed $$5000$$ units. Dip the tip into the solution to a depth of 1 cm, and slowly release the operating button. Part of Medical Dosage Calculations For Dummies Cheat Sheet . For example, a 70 % (v/v) solution of ethanol can be prepared by dissolving 70 mL of 100% (i.e., 200 proof) ethanol in a total solution volume of 100 mL. Gave me absolutely NO volume and should be called an Anti-Volume Solution instead of a "maximum volumizing solution," which is a bad joke. This is the key to any dilution calculation. Android phones give a warning if the user is increasing volume beyond a certain limit. In order to brush some of the crunch out, you lose a little bit of volume. Then, I found this. The maximum usage: 550 Chairs × 2 sq. Fortunately, calculating any one of these three variables is easy to do when you know the other two variables. The order of degree of tolerance of pH for different dosing routes is oral>intravenous>intramuscular>subcutaneous>intraperitoneal. must be taken into account when approaching the volume limits or determining the volume to be infused IV. Simply put, the number of moles of solute present in the dilute solution must be equal to the number of moles of solute present in the concentrated sample. Note that volume percent is relative to the volume of solution, not the volume of solvent. max = -2 - (4 2 / 4 * (-1)) 4 squared is 16, and 4 times -1 is -4. I have a tube I'm using up and this product really helps to give volume to older thinning blond hair. Step 5: To determine the domain of consideration, let’s examine .Certainly, we need Furthermore, the side length of the square cannot be greater than or equal to half the length of the shorter side, 24 in. BUT Loma doesn't provide any shine whatsoever, hence the 4 stars. so that 2h = 12 - 2r. Worst product I have ever used ! for 1% lidocaine: contains 10 mg of lidocaine per 1 mL; Max volume of 1% lidocaine that can be administered to a 10 kg patient = 45 mg / 10mg/mL = 4.5 mL I spent a lot of time researching the many toxic chemicals in popular mouse products. Your selected delivery location is beyond seller's shipping coverage for this item. “LOMATherapy” consists of Essential Oil-Based Fragrances, Natural Aroma Ingredients & Aromatic Botanical Extracts, NEVER tested on animals and Vegan Friendly (*We utilize a small amount of beeswax in our Fiber Putty and Forming Paste, which we obtain in a way that does not harm the bees. Read more. Mix 4 cups of warm tap water with 1½ teaspoons of table salt. Purchased about a year ago....still have about 1/2 a bottle left. What are the LVM size limits in Red Hat Enterprise Linux server? Amazon sources from top brands to offer you a wide range of high-end products for women and men. When I put just a very small amount through my hair it went flat and limp right away. 16 divided by -4 is -4. There was a problem adding this item to Cart. Paraben, Sodium Chloride, Gluten, and Soy Free. procedures, and special considerations for wild animals. 5, or 20 inches. Find the maximum volume of a rectangular box with three faces in the coordinate planes and a vertex in the first octant on the plane The sum of the length and the girth (perimeter of a cross-section) of a package carried by a delivery service cannot exceed in. Whenever you’re administering intravenous (IV) infusions, you need to know the flow rate, infusion time, and total volume. The total surface area of the brick is 600 cm 2. This commonplace notion of order is described quantitatively by Landau theory. Squares of equal sides x are cut out of each corner then the sides are folded to make the box. We consider all of the possible disjoint cycle structures, as in the example on … Maximum Volumizing Solution builds volume and fullness for thicker, shinier hair. Gave it two tries and threw it in the trash. Total hair bomb and waste of money for me! The plastic bag system collapses as the solution is administered so a vacuum is not created inside the bag. V = L * W … (2017) Second-order accurate finite volume schemes with the discrete maximum principle for solving Richards’ equation on unstructured meshes. The total surface area of the brick is 600 cm 2. Volume 298, Number 2, December 1986 THE PONTRYAGIN MAXIMUM PRINCIPLE FROM DYNAMIC PROGRAMMING AND VISCOSITY SOLUTIONS TO FIRST-ORDER PARTIAL DIFFERENTIAL EQUATIONS EMMANUEL NICHOLAS BARRON1 AND ROBERT JENSEN2 ABSTRACT. It also analyzes reviews to verify trustworthiness. The pipette is checked at the maximum volume (nominal volume) and at the minimum volume or 10% of the maximum volume, whichever is higher. We infuse the purest water with concentrated organic aloe vera powder - this powerful combination creates the #1 ingredient in Loma products. Grisvard, P.: Behavior of the solutions of an elliptic boundary value problem in a polygonal or polyhedral domain. Solution to Problem 1: We first use the formula of the volume of a rectangular box. How to maximize the volume of a box using the first derivative of the volume. Find the dimensions of the rectangular package of largest volume that can be sent. This shopping feature will continue to load items when the Enter key is pressed. Reviewed in the United Kingdom on April 4, 2016. I thought I give this a try again....but it absolutely does NOT result in any Volume. After viewing product detail pages, look here to find an easy way to navigate back to pages you are interested in. Aloe barbadensis leaf juice (aloe vera gel)*, vp/dmapa acrylates copolymer, vp/methacrylamide/vinyl imidazole copolymer, hydroxypropyl guar, gluconolactone, chlorphenesin, citrus tangerina (tangerine) peel oil, citrus aurantium dulcis (orange) peel oil, citrus paradisi (grapefruit) oil, fragrance, glycerin* & water & lavandula angustifolia (lavender) flower/leaf/stem extract* & foeniculum vulgare (fennel) seed extract* & helianthus annuus (sunflower) seed extract*, creatine, hydrolyzed quinoa, panthenol (vitamin b-5), tocopherol (vitamin e)* ricinus communis (castor seed) oil*, helianthus annuus (sunflower) seed oil, carthamus tinctorius (safflower) seed oil, beet juice powder, citric acid. ft. = 970 sq. Reviewed in the United States on August 1, 2017. It is important to realize liquid and gas volumes are not necessarily additive. Some of the names are scary, but they are not nearly as bad for you as what's in other mouses. This could hurt your child. You may be charged a restocking fee up to 50% of item's price for used or damaged returns and up to 100% for materially different item. I recently cut my hair shorter, and was looking for a product to add some volume and I was excited to find this product! The extremum (dig that fancy word for maximum or minimum) you’re looking for doesn’t often occur at an endpoint, but it can — so don’t fail to evaluate the function at the interval’s two endpoints.. You’ve got your answer: a height of 5 inches produces the box with maximum volume (2000 cubic inches). I recently started buying Loma products from my local hair salon, and I am in love with their shampoo and conditioner. Love the LOMA products so I’m hoping I just got a bad one. by Clausius and Helmholtz, associated with disorder. Solution; We have a piece of cardboard that is 50 cm by 20 cm and we are going to cut out the corners and fold up the sides to form a box. ; otherwise, one of the flaps would be completely cut off. Find the value of x that makes the volume maximum. Solution: Volume of the solid sphere = (4/3)πr 3. The warning is that if we keep on increasing the volume beyond that limit, it may hurt our ears. Volume/volume % solutes are also common, and are used when pure solutes in liquid form are used. I've been using Loma firm hold hair spray for years to help build and hold volume and it has worked fairly well, but after reading positive reviews for this product from people with similar hair thought I'd give it a try. It adds a good amount of volume that lasts mostly through the day. To calculate the overall star rating and percentage breakdown by star, we don’t use a simple average. what volume of the saline solution must be administered to the patient in order … LOMA products are Paraben and Gluten free, Sulfate free cleansing and Sodium Chloride free, and all products are good for your hair and skin! The plastic bag system collapses as the solution is administered so a vacuum is not created inside the bag. Sulfate-free Cleansing. In: Hubbard, B. WOW! Your recently viewed items and featured recommendations, Select the department you want to search in. They are marked at 25 ml to 100 ml intervals depending on the overall size of the bag. I would really appreciate if someone can help me with this: Question: To carry a suitcase on an airplace, the length + width + height must be less than or equal to $62$ in. 8.4 mL of saline solution 840 mL of saline solution 7.1 mL of saline solution 140 mL of saline solution. Aloemoist Complex is made of pure organic plant extracts, oils, vitamins and minerals and offers a simple and clean approach to healthy hair and a sustainable environment. At higher volume the company gives a discount of $$5\,\$$ for each additional thousand exceeding the level of $$5000.$$ Determine the order volume at which the company has the largest income. (ed.) Note that volume percent is relative to the volume of solution, not the volume of solvent. I love Loma's products but this one just didn't deliver for my type of hair. :'-(, Reviewed in the United States on March 6, 2017. Top subscription boxes – right to your door, Extended holiday return window till Jan 31, 2021, © 1996-2020, Amazon.com, Inc. or its affiliates. Precalculus (10th Edition) Edit edition. Academic Press, New York (1976) All of our other products are totally vegan, using no animal bi-products.). What is the maximum size of logical volume? I am down to 2 bottles of the old LOMA INTENT Volumnizing Leave in creme (I wish Loma did not discontinue this Intent Volumizing creme - been using for 10+ years) . Problem 94E from Chapter 2.2: Maximum Volume An open box with locking labs is to be made f... Get solutions What volume of the saline solution must be administered to the patient in order to deliver 7.7 g of NaCl? Never change this recipe, and never use plain water in an enema. Please choose a different delivery location or purchase from another seller. I have found that if I don't use the same styling product daily I have better results with volume from products. Maximum volume of lidocaine administered. Determine the height of the box that will give a maximum volume. It is important to realize liquid and gas volumes are not necessarily additive. and h = 6 - r. We wish to MAXIMIZE the total VOLUME of the resulting CYLINDER V = (area of base) (height) . Sulfate-free Cleansing. Though the table indicates it is the maximum volume, the intent is that the dose volume is the maximum volume for ideal or good practice dose volumes. Maximum level of stock = Re-order level + Re-order quantity – (Minimum usage × Minimum lead time) I paid over $20 for this and had no recourse but to throw it in the trash. In order to brush some of the crunch out, you lose a little bit of volume. Disappointed that it has no holding power, Reviewed in the United States on December 23, 2017. Re-order quantity: 496 square feet of wood; Maximum stock level: ? MAXIMUM-PRINCIPLE-SATISFYING AND POSITIVITY-PRESERVING HIGH ORDER SCHEMES Remark: If we insist on the maximum principle interpreted as m un+1 j M; 8j if m un j M; 8j; where un j is either the approximation to the point value u(xj;tn)for a finite difference scheme, or to the cell average 1 x Rx j+1=2 xj 1=2 u(x;tn)dxfor a finite volume or DG scheme, then the scheme can be at most second (2017) Cell-centered nonlinear finite-volume methods for the … ... Keep the pipette vertical when pipetting in order to prevent liquid from running into the pipette body. Find the value of x that makes the volume maximum. For the 2020 holiday season, returnable items shipped between October 1 and December 31 can be returned until January 31, 2021. ... What is the maximum volume of a 0.788 M CaCl2 solution that can be prepared using 85.3 g CaCl2? eval(ez_write_tag([[728,90],'analyzemath_com-medrectangle-3','ezslot_5',320,'0','0']));Solution to Problem 1: A sheet of metal 12 inches by 10 inches is to be used to make a open box. Worked solution to this question on differentiation - maximum volume of a box Figure 4 shows a solid brick in the shape of a cuboid measuring 2x cm by x cm by y cm. 207–274. We wish to MAXIMIZE the total VOLUME of the box V = (length) (width) (height) = (x) (x) (y) = x 2 y. The recommended working range for pH is 4.5-8.0. Then I tried using it as a "dressing tool" instead and the same thing happened. I have fine straight hair that gets bogged down by most styling products. Assuming that height is fixed, show that the maximum volume is$ V = h(31-\frac{1}{2}h)^2 .\$ (a) Show that the volume, V cm 3, of the brick is given by V = 200x - 4x 3 /3 Given that x can vary, We wish to MAXIMIZE the total VOLUME of the box V = (length) (width) (height) = (x) (x) (y) = x 2 y. I REALLY wanted this to work, but sadly had to return. Volume of twenty seven solid sphere = 27×(4/3)πr 3 = 36 π r 3 (i) New solid iron sphere radius = r’ Volume of this new sphere = (4/3)π(r’) 3 (4/3)π(r’) 3 = 36 π r 3 (r’) 3 = 27r 3. r’= 3r. This chemistry video tutorial provides a basic introduction into mass percent and volume percent. Helpful. Worked solution to this question on differentiation - maximum volume of a box Figure 4 shows a solid brick in the shape of a cuboid measuring 2x cm by x cm by y cm. For example, wine is about 12% v/v ethanol. Volume optimization problem with solution. Essential Oil-Based Fragrances, Natural Aroma Ingredients & Aromatic Botanical Extracts, Reviewed in the United States on May 19, 2017. Your question might be answered by sellers, manufacturers, or customers who bought this product. Amazon's Choice for " loma maximum volumizing solution ". Math 401: Solutions to HW #3 (Ch. Please try your search again later. Builds Volume and shine. In order to navigate out of this carousel please use your heading shortcut key to navigate to the next or previous heading. Advances in Water Resources 104 , 114-126. Otherwise, I'm very pleased with the quality and results of this product! Medium to firm hold. Paraben, Sodium Chloride, Gluten, and Soy Free. I will prchase this again when I near the end of the bottle. Although the descriptions says, "Builds volume, fullness and shine for thicker, plumper hair. Solution. A little goes a long way. "160 mL" The most important thing to keep in mind when it comes to diluting solutions is that the number of moles of solute must remain constant at all times. Loma Hair Care Nourishing Shampoo & Conditioner Duo, Loma Hair Care Violet Shampoo Violet Conditioner Duo, 12 Fl Oz each, Loma Hair Care Moisturizing Shampoo, 12 Fl Oz. Loma Maximum Volumizing Solution, 8 Fl Oz. I got this one and it is much more watery and has no hold whatsoever - was expecting more strength since it is supposed to volumize and is characterized a “medium to firm” hold. I could have gotten better results from a cheaper, drugstore-brand hair gel. Reviewed in the United States on February 21, 2020. ... Keep the pipette vertical when pipetting in order to prevent liquid from running into the pipette body. This means there is 12 ml ethanol for every 100 ml of wine. This give my fine, limp, thin hair volume that lasts almost all day. Solution to Problem 1: We first use the formula of the volume of a rectangular box. Overall, 5 mL has been cited for adults as the maximum volume … Radius of new sphere will be 3r (thrice the radius of original sphere) Please make sure that you are posting in the form of a question. Reviewed in the United States on May 5, 2020. This means there is 12 ml ethanol for every 100 ml of wine. Entropy has been historically, e.g. (b) Find the volume of each solid. Entropy-driven order. SOLUTION 3 : Let variable x be the length of one edge of the square base and variable y the height of the box. The order of a permutation is the least common multiple of the cycle lengths in its disjoint cycle form. NOT AT ALL... leaves my hair slimy and difficult to style and after drying my hair for abt 1/2 hour, hair is still SLIMY. ft. The total surface area of the box is given to be 48 = (area of base) + 4 (area of one side) = x 2 + 4 (xy) , so that 4xy = 48 - x 2. or . A sheet of metal 12 inches by 10 inches is to be used to make a open box. 5 # 8) What is the maximum order of any element in A 10? Benefits. Amazon's Choice recommends highly rated and well-priced products. Squares of equal sides x are cut out of each corner then the sides are folded to make the box. (a) Show that the volume, V cm 3, of the brick is given by V = 200x - 4x 3 /3 Given that x can vary, We have step-by-step solutions for your textbooks written by Bartleby experts! Medium to firm holdFor all hair typesBuilds volume, fullness and shine for thicker, plumper hairMay also be used as a dry dressing tool for more defined looksAromatherapy of Cranberry, Orange and TangerineKey Ingredients:VP/DMAPA Acrylates Copolymer — Stringy flexible holding styling resin.Fennel Seed — Extends all hair color vibrancy by killing excess peroxide and ammonia residue leftover after color services.Sunflower Seed — Reduces the fading of all hair colors and helps to repair UV damage.Glycerin — Vegetable based moisturizer that provides thermal protection. Apply desired amount to clean, damp hair from roots to ends and style. *Certified Organic. Graduation marks are on the front of the bag to indicate the volume of solution used. Otherwise, I'm very pleased with the quality and results of this product! There's a problem loading this menu right now. 1.00 L 0.769 L 0.975 L 67.2 L. 0.975 L. Smartphones have a very good feature related to maximum volume of the device. What is Premium Beauty? (c) Determine the dimensions of a rectangular solid (with a square base) of maximum volume if its surface area is 150 square inches. May also be used as a dry dressing tool for more defined looks" it didn't do any of this for me. If you I've tried everything but this one does work, but must be careful - too large an amount will lay hair flatter, weigh it down. The total surface area of the box is given to be 48 = (area of base) + 4 (area of one side) = x 2 + 4 (xy) , so that 4xy = 48 - x 2. or . However, in common speech, order is used to describe organization, structural regularity, or form, like that found in a crystal compared with a gas. Sketch the graph of some function that meets the following conditions : To buy Premium Beauty and Premium Men’s Grooming items, just look for the Premium Beauty badge. The pipette is checked at the maximum volume (nominal volume) and at the minimum volume or 10% of the maximum volume, whichever is higher. Solution; We have a piece of cardboard that is 50 cm by 20 cm and we are going to cut out the corners and fold up the sides to form a box. Medium to firm hold. The dose volume guideline (p. 12) provided the maximum volumes for common routes of administration in common laboratory species. Smells amazing, gives good volume, but a little crunch. Required: Compute maximum stock level. I like Kevin Murphy but L'oreal is good too and much cheaper. There was a problem completing your request. Return this item for free. Maximum Volume (a) Verify that each of the rectangular solids shown in the figure has a surface area of 150 square inches. By Richard Snyder, Barry Schoenborn . Loma Hair Care Nourishing Shampoo, 12 Fl Oz, LOMA Moisturizing Shampoo 33 Ounce (Liter), Loma Hair Care Deep Conditioner, 33.8 Fl Oz. Use Derivatives to solve problems: Area Optimization. Thus, the dimensions of the desired box are 5 inches by 20 inches by 20 inches. Thank goodness their hair spray does. Builds Volume and shine. Maximum Volumizing Solution builds volume and fullness for thicker, shinier hair. Solution Sketch the graph of some function on the interval $$\left[ { - 4,3} \right]$$ that has an absolute maximum at $$x = - 3$$ and an absolute minimum at $$x = 2$$. I am throwing it away even though it's nearly still full. The minimum usage: 485 Chairs × 2 sq. SOLUTION 13 : Let variable r be the length of the base and variable h the height of the rectangle. Application of 1st Order DE in Drainage of a Water Tank Tap Exit Use the law of conservation of mass: The total volume of water leaving the tank during ∆t (∆V exit) = The total volume of water supplied by the tank during ∆t (∆V tank) We have from Equation (3.8e): ∆V … Is there any size limit in creating Logical Volume with LVM? Amazon's Choice recommends highly rated and well-priced products. I do use Nioxcin before that, when I semi dry, then Loma, followed by a good anti-frizz product. Maximum volume of cylinder. It is given that the perimeter of the rectangle is 12 = 2r + 2h. I was hoping this new Loma will be a good replacement. Prime members enjoy FREE Delivery and exclusive access to music, movies, TV shows, original audio series, and Kindle books. Please try again later. This feature is seemingly non-existent in Windows 10. Hair never dried out. a saline solution in intravenous drips for patients who cannot take oral fluids contains 0.92% (w/v) NaCl in water. Dip the tip into the solution to a depth of 1 cm, and slowly release the operating button. I use about a dime to nickel's worth a washing (twice weekly) and that seems to work well. Instead, our system considers things like how recent a review is and if the reviewer bought the item on Amazon. | bartleby Maximum volumes have been proposed across the various IM sites for adult patients 3,12-16 (Table 1). 4 Prepare the solution: – Use saline solution from the store (also called normal saline or 0.9% sodium chloride solution) or make your own. At one end of the bag are two ports of about the same length. Find the value of x that makes the volume maximum. Premium Beauty, Find answers in product info, Q&As, reviews. Not only did it suck the volume and life out of my hair, but it filled it with static electricity. Solution properties such as tonicity, pH, etc. Have used the LOMA fiber putty and that works great on my hair (I spike my hair up and it goes several inches high) and has a “medium” hold. Graduation marks are on the front of the bag to indicate the volume of solution used. We prove the Pontryagin Maximum Principle for the Lagrange
## maximum order volume solution
L'oreal Beach Hair, Reinforcement Learning Introduction Slides, Cloudera Express Edition, Doc Acronym Military, Chicken Breast Diet, College Baseball Showcases 2020, Philips Fidelio X2hr Review, Big Mom One Piece,
|
2021-01-24 06:30:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2930971086025238, "perplexity": 1792.7560797556196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703547333.68/warc/CC-MAIN-20210124044618-20210124074618-00362.warc.gz"}
|
http://mathhelpforum.com/algebra/117510-fractions.html
|
# Math Help - Fractions
1. ## Fractions
Hi.
Can you help me solve this question?
One layer of tinting material on a window cuts out 1/5 of the sun's UV rays.
(a) What fraction would be cut out by using two layers?
(b) How many layers would be required to cut out at least 9/10 of the sun's UV rays?
2. Hello ipokeyou
Originally Posted by ipokeyou
Hi.
Can you help me solve this question?
One layer of tinting material on a window cuts out 1/5 of the sun's UV rays.
(a) What fraction would be cut out by using two layers?
(b) How many layers would be required to cut out at least 9/10 of the sun's UV rays?
(a) If one layer cuts out $\frac15$ of the UV, then it must allow $\frac45$ through. Of this, $\frac45$ will be allowed through by a second layer. So, of the original quantity of UV the fraction allowed through by two layers will be:
$\frac45\times\frac45 = \frac{16}{25}$
Therefore $1 - \frac{16}{25}= \frac{9}{25}$ will have been cut out.
(b) Similarly, if there are $n$ layers, the fraction allowed through will be $\left(\frac45\right)^n$. And the fraction cut out will be
$1-\left(\frac45\right)^n$
So we want the smallest integer value of n for which
$1-\left(\frac45\right)^n\ge\frac{9}{10}$
$\Rightarrow \left(\frac45\right)^n\le\frac{1}{10}$
If you understand how to take logs of both sides, you can solve this inequality that way. Or you can simply use your calculator until you get the answer. Either way, I think it's 11 layers.
|
2015-07-07 16:33:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.552837073802948, "perplexity": 723.3822022670239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099755.63/warc/CC-MAIN-20150627031819-00174-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://gmatclub.com/forum/what-is-the-distance-between-x-and-y-on-the-number-line-129638.html
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 31 Aug 2016, 10:06
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# What is the distance between x and y on the number line?
Author Message
TAGS:
### Hide Tags
Manager
Joined: 27 Oct 2011
Posts: 191
Location: United States
Concentration: Finance, Strategy
GMAT 1: Q V
GPA: 3.7
WE: Account Management (Consumer Products)
Followers: 4
Kudos [?]: 128 [2] , given: 4
What is the distance between x and y on the number line? [#permalink]
### Show Tags
25 Mar 2012, 18:30
2
KUDOS
6
This post was
BOOKMARKED
00:00
Difficulty:
55% (hard)
Question Stats:
57% (02:45) correct 43% (00:53) wrong based on 356 sessions
### HideShow timer Statistics
What is the distance between x and y on the number line?
(1) |x| – |y| = 5
(2) |x| + |y| = 11
[Reveal] Spoiler: OA
_________________
DETERMINED TO BREAK 700!!!
Math Expert
Joined: 02 Sep 2009
Posts: 34527
Followers: 6313
Kudos [?]: 80083 [0], given: 10027
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
26 Mar 2012, 00:22
Expert's post
5
This post was
BOOKMARKED
What is the distance between x and y on the number line?
Question: |x-y|=?
(1) |x| – |y| = 5. Not sufficient: consider x=10, y=5 and x=10, y=-5.
(2) |x| + |y| = 11. Not sufficient: consider x=10, y=1 and x=10, y=-1.
(1)+(2) Solve the system of equation for |x| and |y|: sum two equations to get 2|x|=16 --> |x|=8 --> |y|=3. Still not sufficient to get the single numerical value of |x-y|, for example consider: x=8, y=3 and x=8, y=-3. Not sufficient.
_________________
Manager
Joined: 12 Mar 2012
Posts: 94
Location: India
Concentration: Technology, Strategy
GMAT 1: 710 Q49 V36
GPA: 3.2
WE: Information Technology (Computer Software)
Followers: 9
Kudos [?]: 313 [0], given: 22
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
26 Mar 2012, 01:30
Solving the two equations will give x as 8 and y as 3. But since mod sign is there, x and y can take any value, either positive or negative. Hence, both the statements are insufficient.
Current Student
Joined: 27 Jun 2012
Posts: 418
Concentration: Strategy, Finance
Followers: 71
Kudos [?]: 697 [1] , given: 183
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
18 Dec 2012, 10:16
1
KUDOS
Bunuel can you clarify what can be wrong below approach?
-------------------------------------
By multiplying statements 1 & 2
(1) |x| – |y| = 5
(2) |x| + |y| = 11
$$X^2-Y^2=55$$
i.e. $$(x+y)(x-y)=55 = 11 * 5 = - 11 * - 5$$ (i.e. both factors are either positive or negative)
Hence only two possible solutions for this – i.e. either (x=8 & y=3) OR (x= -8 & y= -3)
In both cases the distance between them is 5.
_________________
Thanks,
Prashant Ponde
Tough 700+ Level RCs: Passage1 | Passage2 | Passage3 | Passage4 | Passage5 | Passage6 | Passage7
VOTE GMAT Practice Tests: Vote Here
PowerScore CR Bible - Official Guide 13 Questions Set Mapped: Click here
Math Expert
Joined: 02 Sep 2009
Posts: 34527
Followers: 6313
Kudos [?]: 80083 [0], given: 10027
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
23 Dec 2012, 07:55
PraPon wrote:
Bunuel can you clarify what can be wrong below approach?
-------------------------------------
By multiplying statements 1 & 2
(1) |x| – |y| = 5
(2) |x| + |y| = 11
$$X^2-Y^2=55$$
i.e. (x+y)(x-y)=55 = 11 * 5 = - 11 * - 5 (i.e. both factors are either positive or negative)
Hence only two possible solutions for this – i.e. either (x=8 & y=3) OR (x= -8 & y= -3)
In both cases the distance between them is 5.
(x+y)(x-y)=55 does not mean that either (x=8 & y=3) OR (x= -8 & y= -3). There are more integer solutions (for example x=+/-28 and y=+/-27) and infinitely many non-integer solutions.
_________________
Current Student
Joined: 27 Jun 2012
Posts: 418
Concentration: Strategy, Finance
Followers: 71
Kudos [?]: 697 [0], given: 183
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
23 Dec 2012, 14:41
_________________
Thanks,
Prashant Ponde
Tough 700+ Level RCs: Passage1 | Passage2 | Passage3 | Passage4 | Passage5 | Passage6 | Passage7
VOTE GMAT Practice Tests: Vote Here
PowerScore CR Bible - Official Guide 13 Questions Set Mapped: Click here
Math Expert
Joined: 02 Sep 2009
Posts: 34527
Followers: 6313
Kudos [?]: 80083 [0], given: 10027
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
17 Jul 2013, 00:22
From 100 hardest questions
Bumping for review and further discussion.
_________________
Senior Manager
Joined: 13 May 2013
Posts: 472
Followers: 3
Kudos [?]: 140 [0], given: 134
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
18 Jul 2013, 12:56
What is the distance between x and y on the number line?
(1) |x| – |y| = 5
|11|-|6|=5
Distance is five
|11|-|-6|=5
Distance is seventeen
INSUFFICIENT
(2) |x| + |y| = 11
|5|+|6| = 11
Distance is one
|5|+|-6| = 11
Distance is negative eleven
INSUFFICIENT
This problem, to me, seems much easier than a 700 level question. a and b provide us with multiple valid values for x and y, none of which entirely (i.e. are the same) Can someone tell me if I am oversimplifying this problem? Thanks!
GMAT Club Legend
Joined: 09 Sep 2013
Posts: 11181
Followers: 512
Kudos [?]: 134 [0], given: 0
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
24 Jun 2015, 10:51
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
SVP
Joined: 17 Jul 2014
Posts: 1639
Location: United States
Schools: Stanford '19
GMAT 1: 550 Q39 V27
GMAT 2: 560 Q42 V26
GMAT 3: 560 Q43 V24
GMAT 4: 650 Q49 V30
GPA: 3.56
WE: General Management (Transportation)
Followers: 12
Kudos [?]: 197 [0], given: 111
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
31 Mar 2016, 18:37
calreg11 wrote:
What is the distance between x and y on the number line?
(1) |x| – |y| = 5
(2) |x| + |y| = 11
i picked 8 and 3
1. x=8, y=3 -> 5 or x=-8, y=3 -> distance is 11. 1 alone is insufficient.
2. x=8, y=3 -> 5 or x=-8, y=3 -> distance is 11. 2 alone is insufficient.
1+2
same info from 1 and 2. C is out, and the answer must be E.
Intern
Joined: 09 Jan 2015
Posts: 8
Followers: 0
Kudos [?]: 0 [0], given: 56
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
06 Apr 2016, 22:57
Bunuel wrote:
What is the distance between x and y on the number line?
Question: |x-y|=?
(1) |x| – |y| = 5. Not sufficient: consider x=10, y=5 and x=10, y=-5.
(2) |x| + |y| = 11. Not sufficient: consider x=10, y=1 and x=10, y=-1.
(1)+(2) Solve the system of equation for |x| and |y|: sum two equations to get 2|x|=16 --> |x|=8 --> |y|=3. Still not sufficient to get the single numerical value of |x-y|, for example consider: x=8, y=3 and x=8, y=-3. Not sufficient.
Hi... I have a question when we solve both the eqs tog we get two values for |y|=3 and -13. Right?
Math Expert
Joined: 02 Sep 2009
Posts: 34527
Followers: 6313
Kudos [?]: 80083 [0], given: 10027
Re: What is the distance between x and y on the number line? [#permalink]
### Show Tags
07 Apr 2016, 00:33
enasni wrote:
Bunuel wrote:
What is the distance between x and y on the number line?
Question: |x-y|=?
(1) |x| – |y| = 5. Not sufficient: consider x=10, y=5 and x=10, y=-5.
(2) |x| + |y| = 11. Not sufficient: consider x=10, y=1 and x=10, y=-1.
(1)+(2) Solve the system of equation for |x| and |y|: sum two equations to get 2|x|=16 --> |x|=8 --> |y|=3. Still not sufficient to get the single numerical value of |x-y|, for example consider: x=8, y=3 and x=8, y=-3. Not sufficient.
Hi... I have a question when we solve both the eqs tog we get two values for |y|=3 and -13. Right?
|y| is an absolute value of y, so it cannot be negative. When we solve for |y|, we get that |y| = 3, so y = 3, or y = -3.
Hope it's clear.
_________________
Re: What is the distance between x and y on the number line? [#permalink] 07 Apr 2016, 00:33
Similar topics Replies Last post
Similar
Topics:
On the number line, what is the distance between the point 2x and the 2 14 Jan 2016, 12:34
95 On the number line, the distance between x and y is greater 14 03 Feb 2012, 14:32
9 On the number line, the distance between x and y is greater 7 10 May 2010, 09:34
On the number line, what is the distance between the point 4 04 Feb 2008, 19:14
29 On the number line, the distance between x and y is greater 8 01 Nov 2006, 23:09
Display posts from previous: Sort by
|
2016-08-31 17:06:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6478880643844604, "perplexity": 4931.682057357103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295966.49/warc/CC-MAIN-20160823195815-00143-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://lgw2.github.io/teaching/csci127-summer-2019/lectures/lecture1/
|
# Data Types, Turtle Drawing, and Functions
Welcome to the first day of class!
Chapters 2, 4, and 6.
## Key ideas
### From chapter 2:
• Simple python types: int(eger), float, str(ing)
• Determing type: type
• Declaring and using variables
• Assignment token: =
• Arithmetic operators: +, -, *, /, //, %, **
• User input: input
### From chapter 4:
• Turtle Module Methods
• Creation: turtle
• Movement: forward, backward, goto
• x, y orientation: right, left
• Pen control: up, down, pensize
• Drawing Color: color, fillcolor, begin_fill, end_fill
• Status: heading, position
• Turtle type: shape, e.g. arrow, classic, turtle or circle
• Turtle Imprints: stamp, dot
• Looping Construct: for
• onclick(), onrelease(), ondrag() from turtle online documentation
### From chapter 6:
• Be able to write a function.
• Be able to call a function.
• Understand function parameters.
• Understand the difference between a fruitful function and a non-fruitful function.
• Understand the difference between a local variable and a global variable.
## Active learning
### Activity 1
Evaluate the following numerical expressions:
5 ** 2
9 * 5
15 / 12
12 / 15
15 // 12
12 // 15
5 % 2
9 % 5
15 % 12
12 % 15
6 % 6
0 % 7
### Activity 2
Download madlib.py and modify it to create your own story. Your modified Mad Lib should use (1) at least one input that is treated as a string, (2) at least one input that is treated as an integer and (3) at least one input that is treated as a floating point number.
### Activity 3
Look at racing-turtles.py and make sure that you understand it fully. Then,
1. Modify the program so that a third racing turtle starts at coordinate (-200, -100).
2. Write down three other improvements to the program.
3. Find someone to discuss your proposed improvements with.
4. Implement at least one of your proposed improvements.
### Activity 4
Look at etch-a-sketch.py and make sure that you understand it fully.
1. Write down three improvements you could make to the program.
2. Find someone to discuss your proposed improvements with.
3. Implement at least one of your proposed improvements.
### Activity 5
Which of the following best reflects the order in which these lines of code are processed in Python?
1 def pow(b, p):
2 y = b ** p
3 return y
4
5 def square(x):
6 a = pow(x, 2)
7 return a
8
9 n = 5
10 result = square(n)
11 print(result)
What does the program print?
### Activity 6
Improve house.py using functions.
|
2021-06-18 20:35:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25164443254470825, "perplexity": 8591.185108077227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00118.warc.gz"}
|
https://math.stackexchange.com/questions/1858794/is-there-symbol-to-denote-a-combination-and-permutation
|
# Is there symbol to denote a combination and permutation?
For example, let's say I wanted to denote any arbitrary, $2$ number combination of the letters, A, B and C. So you can have AB, AC, and BC. Say you wanted a way to represent any given combination, is there a shorthand way to denote this idea?
The reason why you may want to do this, is that whilst every combination is unique, every combination may share a unique property, which wouldn't be seen given other combination sets (i.e combinations derived from other pairs of value, say, $\{C,D, E\}$, or $\{1, 2, 3\}$ etc.) if you wanted to refer to a certain property that every combination posses (relative to a specific set of values), a shorthand for this would I think be convenient :)
And of course, the same can be said for permutations.
As a spinoff of that notation that $n \choose k$ denotes the number of $k$-element subsets of a set of size $n$, we can define $S \choose k$ to denote the set of all $k$-element subsets of a set $S$. So, to say that you're thinking about one of the sets $\{A,B\},\, \{B,C\}$ or $\{A, C\}$, you might write something like "$\Delta \in {{\{A, B, C\}}\choose{2}}$" to signify that the set $\Delta$ is one of the above $2$-element subsets of $\{A, B, C\}$.
The notation is reasonably "natural" in the sense that $$\left\lvert {S \choose k }\right\rvert = {{\lvert S \rvert} \choose {k}}.$$
I personally like this notation, and know it is used to some extent, but I have no idea how popular it is "in the field." As for permutations, I have never seen anything comparable.
And, as a general rule, it never hurts to make a note about what your notation means, if you're not just writing things down for yourself (and honestly, it couldn't hurt then, either!)
• For what it's worth, I use this notation and can reasonably be considered to be "in the field". Jul 14 '16 at 0:58
• @AustinMohr Good to know! I had assumed as much, but for me, I rarely encounter anyone needing to specify that the subsets are a certain size. So I see $2^S$ plenty often, but never anyone needing $S \choose k$. Jul 14 '16 at 1:02
• Cheers pjs36, the only thing I don't get is the | | symbol, at least in this context. Oh, and is subset here the same as some combination? Jul 14 '16 at 1:03
• @user2901512 The notation $|S|$ just means the cardinality, or size, of the set $S$. So $|\{A,B,C\}| = 3 = |\{C, D, E\}|$, for example. Jul 14 '16 at 1:05
• Oh sweet, I never knew/thought of that, that's so cool ^.^ Jul 14 '16 at 1:07
|
2021-10-23 15:01:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9304179549217224, "perplexity": 214.1616929437444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585696.21/warc/CC-MAIN-20211023130922-20211023160922-00389.warc.gz"}
|
http://mathematica.stackexchange.com/tags/experimental-mathematics/new
|
# Tag Info
3
This is a problem known as finding moments of moments. In this case, we seek the covariance (i.e. the $\mu_{1,1}$ central moment) of various sample moments. The modus operandi for solving such problems is to work with power sum notation $s_r$, namely: $$s_r = \sum_{i=1}^n X_i^r$$ In this case, you are interested in the sample mean $= \frac{s_1}{n}$, and ...
Top 50 recent answers are included
|
2015-04-28 18:18:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646066427230835, "perplexity": 364.53642974084585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661916.33/warc/CC-MAIN-20150417045741-00120-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2811262/whats-the-group-with-2-elements-one-for-even-permutations-and-one-for-odd-perm
|
# What's the group with 2 elements: one for even permutations and one for odd permutations (on $n$ points)? How to construct it?
I was thinking about orientations (of a vector space, of a simplex, ...) and how there seem to always be exactly two orientations. Moreover, orientations are defined "up to the parity of the permutation". So, eg., an orientation of a vector space seems to be not a basis, but an equivalence class of bases (where two bases are equivalent iff they differ by a permutation of the same parity; now, since there's two possible parities, there's two possible equivalence classes, hence two possible orientations).
What is the "formal" way to state this fact in terms of groups, quotient groups, and (perhaps) group actions? (Is it even a fact, to begin with?) I suspect that the answer involves the terms: symmetric group, alternating group, quotient group, $\Bbb Z / 2\Bbb Z$, and maybe group action and orbit.
What group is responsible for the definition/construction of orientation? I think it has to be a quotient group of the symmetric group $S_n$ (say, for a vector space of dimension $n$), because you start with a basis and take all possible permutations and then consider the permutations that differ by (say) an even permutation to have the same orientation. (But what's the formal way to state this?)
Group actions seem to be able to induce quotient spaces; eg. Wikipedia says that an orbifold looks (locally) like the the quotient space of Euclidean space under the (linear) action of a finite group. It also says that a manifold with boundary is an orbifold because it's the quotient of its double by an action of $\Bbb Z / 2\Bbb Z$.
And I think that the group special linear group $\text{SL}(2,\Bbb Z)$ and some of its subgroups act on the the upper half (complex) plane $\Bbb H$ and so can induce quotient topological spaces.
What is the proper way to phrase/understand how groups and/or group actions induce quotients on sets / topological spaces / manifolds / stuff? Is it the group action that induces the quotient, or the group itself?
• I believe if the group action is faithful there's not much difference between being induced by the action or the group itself. If the action is not faithful quotients can be induced by the kernel of the action too. When an action isn't faithful in well studied contexts usually people take a quotient of the group to make the action faithful. I believe this is why we consider modular forms to be in $SL(2,\Bbb Z)$ instead of $GL(2,\Bbb Z)$. So the difference may be hidden in well studied contexts. – N8tron Jun 7 '18 at 12:48
Each orientation is an orbit space of the action of the alternating group $A_n$ (the group of even permutations) on your set of vectors or vertices or whatever you're orienting. There are two orbits because $A_n$ has index 2 as a subgroup of the group of permutations $S_n$.
In general, you can consider the group homomorphism $S_n\to\{-1,1\}\cong \mathbb{Z}/2\mathbb{Z}$ asigning to each permutation its sign, whose kernel is precisely $A_n$. Hence, there is a relation between $A_n$ and $\mathbb{Z}/2\mathbb{Z}$.
|
2019-06-26 02:01:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422829151153564, "perplexity": 162.31299417925035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00147.warc.gz"}
|
https://brilliant.org/problems/which-function-is-this/
|
# Which function is this?
Algebra Level 4
Let $$f: \mathbb{R} \rightarrow [\frac{1}{2}, 1 ]$$ and $f\left(x+2\right)= \frac{1}{2} +\sqrt{f\left(x\right)-f\left(x\right)^2}$ Then which among the following is always true?
×
|
2019-04-21 09:22:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.684124231338501, "perplexity": 1277.6142528108637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530505.30/warc/CC-MAIN-20190421080255-20190421102255-00298.warc.gz"}
|
https://www.techwhiff.com/issue/find-the-angular-speed-of-the-minute-hand-and-the-hour--199849
|
# Find the angular speed of the minute hand and the hour hand of the famous clock in london
###### Question:
Find the angular speed of the minute hand and the hour hand of the famous clock in london
### Reesa Mork is a multinational corporation that has good credit ratings. It issues promissory notes to other companies. Based on the given information in the scenario, it appears that Reesa Mork uses _____ as a short-term financing option to other companies.A. commercial paper B. factoring C. line of credit D. trade credit
Reesa Mork is a multinational corporation that has good credit ratings. It issues promissory notes to other companies. Based on the given information in the scenario, it appears that Reesa Mork uses _____ as a short-term financing option to other companies.A. commercial paper B. factoring C. line of...
### Pa help po. thanku!
pa help po. thanku! ...
### What is -2-m=5 and explain thanks
what is -2-m=5 and explain thanks...
### How does depression affect your mental health
How does depression affect your mental health...
### Read the passage and answer the following question(s). Summer Sun Great is the sun, and wide he goes Through empty heaven without repose; And in the blue and glowing days More thick than rain he showers his rays. 5Though closer still the blinds we pull To keep the shady parlour cool, Yet he will find a chink or two To slip his golden fingers through. The dusty attic spider-clad, 10 He, through the keyhole, maketh glad; And through the broken edge of tiles, Into the laddered hay-loft smiles. Me
Read the passage and answer the following question(s). Summer Sun Great is the sun, and wide he goes Through empty heaven without repose; And in the blue and glowing days More thick than rain he showers his rays. 5Though closer still the blinds we pull To keep the shady parlour cool, Yet he will fi...
Kaelyn's mother, Judy, looks after Kaelyn's four-year-old twins so Kaelyn can go to work (she drops off and picks up the twins from Judy's home every day). Since Judy is a relative, Kaelyn made sure, for tax purposes, to pay her mother the going rate for child care ($6,310 for the year). What is the... 1 answer ### A sprinter can run 10 yards in 5⁄3 seconds. How long will it take the same sprinter at the same speed to run 30 yards? A sprinter can run 10 yards in 5⁄3 seconds. How long will it take the same sprinter at the same speed to run 30 yards?... 1 answer ### A volcanic cone has a diameter of 300 meters and a height of 150 meters. What is the volume of the cone? A volcanic cone has a diameter of 300 meters and a height of 150 meters. What is the volume of the cone?... 2 answers ### PLEASE HELP!!Read the second part of "From Blossoms": From laden boughs, from hands, from sweet fellowship in the bins, comes nectar at the roadside, succulent peaches we devour, dusty skin and all, comes the familiar dust of summer, dust we eat. Li-Young Lee, "From Blossoms". Which literary element does the poet use here? A: Repetition of the words"from" and "comes" B: Comparison of the words "summer" and "dust" C: Rhyming the last two lines of the stanza D: Allusion to a biblical story about e PLEASE HELP!!Read the second part of "From Blossoms": From laden boughs, from hands, from sweet fellowship in the bins, comes nectar at the roadside, succulent peaches we devour, dusty skin and all, comes the familiar dust of summer, dust we eat. Li-Young Lee, "From Blossoms". Which literary element... 2 answers ### Which of the following sentences is grammatically correct? Nosotros somos muy ocupados Nosotros somos muy ocupadas Nosotros estamos muy ocupados Nosotros estamos muy ocupado Which of the following sentences is grammatically correct? Nosotros somos muy ocupados Nosotros somos muy ocupadas Nosotros estamos muy ocupados Nosotros estamos muy ocupado... 2 answers ### What was a lasting impact of george washington's presidency? What was a lasting impact of george washington's presidency?... 1 answer ### HELPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP HELPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP... 2 answers ### Large farms in North America where enslaved workers were forced to grow crops were known as Large farms in North America where enslaved workers were forced to grow crops were known as... 1 answer ### Need help. Not very good with matrix math please show work so I could understand! Need help. Not very good with matrix math please show work so I could understand!... 1 answer ### In St. Louis, MO, in August 2000, Richard Miller orally agreed to loan Jeff Miller$35,000.00 in exchange for a security interest in a 1999 Kodiak dump truck.
In St. Louis, MO, in August 2000, Richard Miller orally agreed to loan Jeff Miller \$35,000.00 in exchange for a security interest in a 1999 Kodiak dump truck....
### Which parts of the present-day United States were included in New Netherland?
Which parts of the present-day United States were included in New Netherland?...
|
2022-09-29 12:07:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20966549217700958, "perplexity": 8962.383627652713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00143.warc.gz"}
|
https://xplaind.com/157121/marginal-cost-of-capital
|
# Marginal Cost of Capital
Marginal cost of capital is the weighted average cost of the last dollar of new capital raised by a company. It is the composite rate of return required by shareholders and debt-holders for financing new investments of the company. It is different from the average cost of capital which is based on the cost of equity and debt already issued.
The weighted average cost of capital (WACC), the most common measure of cost of capital used in capital budgeting and business valuation, is the weighted average of the marginal cost of common stock, marginal cost of preferred stock and marginal after-tax cost of debt.
The distinction between average cost of capital and marginal cost of capital is important. The marginal cost of capital rises as the company raises more and more capital. This is because capital is scarce, just like any other factor of production, and must be compensated through a higher required return. The return available on new projects must be compared with the marginal cost of capital and not the average cost of capital and the projects should be accepted only when the expected return is higher than the required return.
Marginal cost of capital increases in steps and not linearly. This is because a company can finance a certain portion of new investments by reinvesting earnings and raising enough debt and/or preferred stock to maintain the target capital structure. The reinvestment of earnings comes without any increase in cost of equity. However, as soon as the expected capital exceeds the combined amount of retained earnings and debt and/or preferred stock raised to maintain the target capital structure, the marginal cost of capital increases.
## Break Point
Break point is the total amount of new investments that can be financed and the new capital that can be raised before a jump in marginal cost of capital is expected. It is the point at which the marginal cost of capital curve breaks out from its flat trend.
The break point can be worked out by dividing the retained earnings for the period by the weight of the retained earnings in the target capital structure. The retained earnings in a period equals the product of net income for the period and the retention rate (also called plow-back rate), which equals 1 minus the dividend payout ratio.
The following equation can be used to calculate the break point:
$$Break\ Point=\frac{NI\ \times(1-DPR)}{W_e}$$
Where NI is the net income for the period, DPR is the dividend payout ratio, i.e. the dividends declared dividend by net income and We is the weight of retained earnings in the target capital structure.
The break points are helpful in creating the marginal cost of capital curve, a graph that plots capital raised on the X-axis and marginal weighted average cost of capital on the Y-axis.
Your company's marginal cost of capital was 10% at the start of 2017. Its net income for the year was $30 million, 30% of which was paid out in dividends. Retained earnings form 45% of the target capital structure of the company. The company's break point equals retained earnings for the period divided by proportion of retained earnings in target capital structure. Retained earnings for the period equals$21,000,000 (i.e. $30,000,000 × (1 – 30%)). $$Break\ point\ =\ \frac{21,000,000}{0.45}\ =\ 46.67\ million$$ The new marginal cost of capital once$46.67 million of capital is raised is 12%.
Using the above data, the marginal cost of capital curve can be graphed as follows:
## Investment Opportunity Schedule
Investment opportunity schedule is the table/graph of cumulative investment opportunities and their expected return. It plots the expected return on the Y-axis and the initial investment required on the X-axis. The investment opportunity schedule is a down-ward sloping curve because investment opportunities are rare and each new opportunity is expected to generate a diminishing return.
Let's say the following is a list of potential investment opportunities available to your company and their expected return:
Project Initial Investment Expected Return
A $25,000,000 25% B$40,000,000 18%
C $25,000,000 12% D$15,000,000 8%
The investment opportunity schedule can be plotted as follows:
## Optimal Capital Budget
A company's optimal capital budget is the point at which its marginal cost of capital equals the incremental expected return. A company should raise new capital as long as the marginal cost of capital is lower than or equal to the available return.
The following chart plots the marginal cost of capital and investment opportunity schedule. The point of intersection of the marginal cost of capital curve and investment opportunity schedule is the optimal capital budget.
|
2018-06-21 18:03:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28323888778686523, "perplexity": 1378.5905917508385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864256.26/warc/CC-MAIN-20180621172638-20180621192638-00406.warc.gz"}
|
https://homerhanumat.github.io/r-notes/making-decisions-conditionals.html
|
## 4.2 Making Decisions: Conditionals
Another type of flow control involves determining which parts of the program to execute, depending upon certain conditions.
### 4.2.1 If Statements
Let’s design a simple guessing-game for the user:
• The computer will pick randomly a whole number between 1 and 4.
• The user will then be asked to guess the number.
• If the user is correct, then the computer will congratulate the user.
number <- sample(1:4, size = 1)
guess <- as.numeric(readline("Guess the number (1-4): "))
if ( guess == number ) {
cat("Congratulations! You are correct.")
}
The sample() function randomly picks a value from the vector that is is given. The size parameter specifies how many numbers to pick. (This time we only want one number.)
Flow control enters the picture with the reserved word if. Immediately after if is a Boolean expression enclosed in parentheses. This expression is often called the condition. If the condition evaluates to TRUE, then the body of the if statement—the code enclosed in the brackets—will be executed. On the other hand if the condition evaluates to FALSE, then R skips past the bracketed code.13
The general form of an if expression is as follows:
if ( condition ) {
## code to run when the condition evaluates to TRUE
}
The code above congratulates the a lucky guesser, but it has nothing at all to say to someone who did not guess correctly. The way to provide an alternative is through the addition of the else reserved-word:
number <- sample(1:4, size = 1)
guess <- as.numeric(readline("Guess the number (1-4): "))
if ( guess == number ) {
cat("Congratulations! You are correct.")
} else {
cat("Sorry, the correct number was ", number, ".\n", sep = "")
cat("Better luck next time!")
}
The general form of an if-else expression is as follows:
if ( condition ) {
# code to run if the condition evaluates to TRUE
} else {
# code to run if condition evaluates to FALSE
}
An if-elsecan be followed by any number of if-else’s, setting up a chain of alternative responses:
number <- sample(1:4, size = 1)
guess <- as.numeric(readline("Guess the number (1-4): "))
if ( guess == number ) {
cat("Congratulations! You are correct.")
} else if ( abs(guess - number) == 1 ){
cat("You were close!\n")
cat("The correct number was ", number, ".\n", sep = "")
} else {
cat("You were way off.\n")
cat("The correct number was ", number, ".\n", sep = "")
}
In general, a chain looks like this:
if ( condition1) {
# code to run if condition1 evaluates to TRUE
} else if ( condition2 ) {
# code to run if condition2 evaluates to TRUE
} else if ( condition3 ) {
# code to run if condition2 evaluates to TRUE
} else if ......
# and so on until
} else if ( conditionN ) {
# code to run if conditionN evaluates to TRUE
}
### 4.2.2 Application: Validating Arguments
Recall the function manyCat():
manyCat <- function(word, n) {
wordWithNewline <- paste(word, "\n", sep = "")
lines <- rep(wordWithNewline, times = n)
cat(lines, sep = "")
}
What would happen if a user were to call it with an unusable argument for the parameter n, a negative number, for instance?
manyCat(word = "Hello", n = -3)
## Error in rep(wordWithNewline, times = n): invalid 'times' argument
For us, it’s clear enough what is wrong. After all, we wrote the function in the previous chapter, and we know that the rep() function in its body requires that the times parameter be set to some positive integer. On the other hand, to someone who is unfamiliar with the body of manyCat() and who has no access to help on how manyCat() is to be used it may not be so obvious what has gone wrong and why.
In the case of complex functions, we cannot expect ordinary users to search through the function’s definition to learn how to fix an error that arises from improper input. Accordingly, it can be good practice to validate user-input. Conditionals allow us to do this.
Here, a possible approach is to attempt to coerce the user’s input for n into an integer, using the as.integer() function:
as.integer(3.6) # will round to nearest integer
## [1] 3
as.integer("4") # will convert string to number 4
## [1] 4
as.integer("4.3") # will convert AND round
## [1] 4
as.integer("two") # cannot convert to integer
## Warning: NAs introduced by coercion
## [1] NA
In the last example, the result is NA, and a cryptic warning was issued. In order to keep the warning from the user, we should wrap any call to as.integer() in the suppressWarnings() function.
Let’s try out a piece of code that checks for validity:
n <- "two" # this will not convert to a number at all
converted <- suppressWarnings(as.integer(n))
!is.na(converted) && converted >= 1
## [1] FALSE
• First we attempted to converted the "two" to an integer.
• Since the result of as.integer("two") is NA, the expression is.na(converted) evaluates to TRUE
• Hence !is.na(converted) evaluates to FALSE.
• Hence !is.na(converted) && converted > 1 evaluates to FALSE.
Let’s try it on another “wrong” value:
n <- -2 # number, but it's negative
converted <- suppressWarnings(as.integer(n))
!is.na(converted) && converted >= 1
## [1] FALSE
This time converted gets an assigned value, namely -2, but since it’s not at least 1 the expression !is.na(converted) && converted >= 1 evaluates to FALSE.
Now let’s try it on a “good” value:
n <- 3
converted <- suppressWarnings(as.integer(n))
!is.na(converted) && converted >= 1
## [1] TRUE
Our code appears to be working well.
Now that we’ve figured out a way to determine whether any given input is a usable number, let’s employ a conditional to implement validation in manyCat():
manyCat <- function(word, n) {
n <- suppressWarnings(as.integer(n))
isValid <- !is.na(n) && n >= 1
if (!isValid) {
message <- "Sorry, n must be a whole number at least 1.\n"
return(cat(message))
}
wordWithNewline <- paste(word, "\n", sep = "")
lines <- rep(wordWithNewline, times = n)
cat(lines, sep = "")
}
The idea is to force an early return—along with a helpful message to the Console—if the user’s argument for n is no good.
Let’s watch it in action:
manyCat(word = "Hello", n = "two") # problem!
## Sorry, n must be a whole number at least 1.
manyCat(word = "Hello", n = 3) # OK
## Hello
## Hello
## Hello
### 4.2.3 Application: Invisible Returns
Let’s think again about the $$\pi$$-computing function from Section 3.4.1:
madhavaPI <- function(n = 1000000) {
k <- 1:n
terms <- (-1)^(k+1)*4/(2*k-1)
sum(terms)
}
We could use if to write in a “talky” option:
madhavaPI <- function(n = 1000000, verbose = FALSE) {
k <- 1:n
terms <- (-1)^(k+1)*4/(2*k-1)
approx <- sum(terms)
if ( verbose) {
cat("Madhava's approximation is: ", approx, ".\n", sep = "")
cat("This is based on ", n, " terms.\n", sep = "")
}
approx
}
Try it out:
madhavaPI(n = 1000, verbose = TRUE)
## Madhava's approximation is: 3.140593.
## This is based on 1000 terms.
## [1] 3.140593
It’s a bit awkward that the approximation gets printed out at the end: after the message on the console, the user doesn’t need to see it. But if we were to remove the final approx expression, then the function would not return an approximation that could be used for further computations.
The solution to this dilemma is R’s invisible() function.
madhavaPI <- function(n = 1000000, verbose = FALSE) {
k <- 1:n
terms <- (-1)^(k+1)*4/(2*k-1)
approx <- sum(terms)
if ( verbose) {
cat("Madhava's approximation is: ", approx, ".\n", sep = "")
cat("This is based on ", n, " terms.\n", sep = "")
}
invisible(approx)
}
If you wrap an expression in invisible(), then it won’t be printed out to the console:
madhavaPI(n = 1000, verbose = TRUE)
## Madhava's approximation is: 3.140593.
## This is based on 1000 terms.
Nevertheless it is still returned, as we can see from the following code, in which the approximation is computed without any output to the console and stored in the variable p for use later on in a cat() statement.
p <- madhavaPI() # verbose is FALSE by default
cat("Pi plus 10 is about ", p + 10, ".", sep = "")
## Pi plus 10 is about 13.14159.
### 4.2.4 Ifelse
The ifelse() function is a special form of the if-else construct that is used to make assignments, and is especially handy in the context of vectorization.
Suppose that you have a lot of heights:
height <- c(69, 67, 70, 72, 65, 63, 75, 70)
You would like to classify each person as either “tall” or “short”, depending on whether they are respectively more or less than 71 inches in height. ifelse() makes quick work of it:
heightClass <- ifelse(test = height > 70,
yes = "tall", no = "short")
heightClass
## [1] "short" "short" "short" "tall" "short" "short" "tall" "short"
Note that ifelse() takes three parameters:
• test: the condition you want to evaluate;
• yes: the value that gets assigned when test is true;
• no: the value assigned when test is false;
Most programmers don’t name the parameters. This is fine—just remember to keep the test-yes-no order:
ifelse(height > 70, "tall", "short")
## [1] "short" "short" "short" "tall" "short" "short" "tall" "short"
Here’s another example of the power of ifelese(). If a triangle has three sides of length $$x$$, $$y$$ and $$z$$, then the sum of any two sides must be greater than the remaining side:
\begin{aligned} x + y &> z, \\ x + z &> y, \\ y + z &> x. \end{aligned} This fact is known as the Triangle Inequality. It works the other way around, too: if three positive numbers are such that the sum of any two exceeds the third, then three line segments having those numbers as lengths could be arranged into a triangle.
We can write a function that, when given three lengths, determines whether or not they can make a triangle:
isTriangle <- function(x, y, z) {
(x + y > z) & (x +z > y) & (y + z > x)
}
isTriangle() simply evaluates a Boolean expression involving x, y and z. It will return TRUE when the three quantities satisfy the Triangle Inequality; otherwise, it returns FALSE. Let’s try it out:
isTriangle(x = 3, y = 4, z = 5)
## [1] TRUE
Recall that Boolean expressions can involve vectors of any length. So suppose that we are would like to know which of the following six triples of numbers could be the side-lengths of a triangle:
$(2,4,5),(4.7,1,3.8),(5.2,8,12),\\ (6, 6, 13), (6, 6, 11), (9, 3.5, 6.2)$ We could enter the triples one at a time into isTriangle(). On the other hand we could arrange the sides into three vectors of length six each:
a <- c(2, 4.7, 5.2, 6, 6, 9)
b <- c(4, 1, 2.8, 6, 6, 3.5)
c <- c(5, 3.8, 12, 13, 11, 6.2)
Then we can decide about all six triples at once:
isTriangle(x = a, y = b, z = c)
## [1] TRUE TRUE FALSE FALSE TRUE TRUE
We could also use ifelse() to create a new character-vector that expresses our results verbally:
triangle <- ifelse(isTriangle(a, b, c), "triangle", "not")
triangle
## [1] "triangle" "triangle" "not" "not" "triangle" "triangle"
### 4.2.5 Switch
If you have to make a decision involving two or more alternatives you can use a chain of if ... else constructions. When the alternatives involve no more than the assignment of a value to a variable, you might also consider using the switch() function.
For example, suppose that you have days of the week expressed as numbers. Maybe it’s like this:
• 1 stands for Sunday
• 2 for Monday
• 3 for Wednesday
• and so on.
If you would like to convert a day-number to the right day name, then you could write a function like this:
dayWord <- function(dayNumber) {
switch(dayNumber,
"Sunday",
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday",
"Saturday")
}
dayWord(3)
## [1] "Tuesday"
In switch() above, the first argument after dayNumber is what goes with 1, the second argument is what goes with 2, and so on.
When the item you want to convert is a string rather than a number, then the switch() function works a little bit differently. Suppose, for instance, that you want to abbreviate the names of the weekdays. You might write an abbreviation-function as follows:
abbrDay <- function(day) {
switch(day,
Monday = "Mon",
Tuesday = "Tue",
Wednesday = "Wed",
Thursday = "Th",
Friday = "Fri",
Saturday = "Sat")
}
abbrDay("Wednesday")
## [1] "Wed"
In the above call to switch(), the weekday names you want to abbreviate appear as the names of named character-vectors, each of length one. The value of each vector is what the its name will be converted to.
When you are converting strings you have the option to provide a default conversion for values that don’t fit into the pattern you have in mind. All you have to do is to provide the default value as an additional argument. (It should NOT have a name.) Thus:
abbrDay <- function(day) {
switch(day,
Monday = "Mon",
Tuesday = "Tue",
Wednesday = "Wed",
Thursday = "Th",
Friday = "Fri",
Saturday = "Sat",
"not a weekday!")
}
abbrDay("Wednesday")
## [1] "Wed"
abbrDay("Neptune")
## [1] "not a weekday!"
### 4.2.6 Practice Exercises
1. Consider the following function:
computeSquare <- function(x) {
x^2
}
Write a “talky” version of the above function that returns the square invisibly.. It should be called computeSquare2() and should take two parameters:
• x: the number to be squared
• verbose: whether or not to cat() out a report to the Console.
Typical examples of use would be:
computeSquare2(4)
## The square of 4 is 16.
mySquare <- computeSquare2(6, verbose = FALSE)
mySquare
## [1] 36
2. Write a function called findKarl() that takes a character vector and returns a character vector that reports whether or not each element of the given vector was equal to the string "Karl". It should work like this:
vec1 <- c("three", "blowfish", "Karl", "Grindel")
findKarl(vec1)
## [1] "Sorry, not our guy." "Sorry, not our guy." "Yep, that's Karl!"
## [4] "Sorry, not our guy."
3. Here’s a function that is supposed to return "small!" when given a number less than 100, and return "big!" when the number if at least 100:
sizeComment <- function(x) {
if ( x < 100 ) {
"small!"
}
"big!"
}
But it doesn’t work:
sizeComment(200) # this will be OK
## [1] "big!"
sizeComment(50) # this won't be OK
## [1] "big!"
Fix the code.
4. Add some validation to the isTriangle() function so that it stops the user if one or more of the parameters x, y and z cannot be interpreted as a positive real number.
### 4.2.7 Solutions to Practice Exercises
1. Here’s the desired function:
computeSquare2 <- function(x, verbose = TRUE) {
if ( verbose ) {
cat("The square of ", x, " is ", x^2, ".\n", sep = "")
}
invisible(x^2)
}
2. Here’s one way to write it:
findKarl <- function(x) {
ifelse(x == "Karl",
"Yep, that's Karl!",
"Sorry, not our guy.")
}
3. A function always returns the value of the last expression that it evaluates. As it stands, the function will always end at the line "big!", so "big" will always be returned. One way to get the desired behavior is to force the function to stop executing once it prints out "small!". You can do this with the return() function:
sizeComment <- function(x) {
if ( x < 100 ) {
return("small!")
}
"big!"
}
Another way is to use the if ... else construction:
sizeComment <- function(x) {
if ( x < 100 ) {
"small!"
} else {
"big!"
}
}
4. Here is one approach:
isTriangle <- function(x, y, z) {
x <- suppressWarnings(as.numeric(x))
x <- suppressWarnings(as.numeric(x))
x <- suppressWarnings(as.numeric(x))
xValid <- all(!is.na(x) & x > 0)
yValid <- all(!is.na(y) & y > 0)
zValid <- all(!is.na(z) & z > 0)
if (!(xValid & yValid & zValid)) {
return(cat("Sorry, all inputs must be positive real numbers.\n"))
}
(x + y > z) & (x +z > y) & (y + z > x)
}
Try it out:
isTriangle(x = c(2,4,7),
y = c(3, -4, 5), # oops, a negative number
z = c(6, 8, 10))
## Sorry, all inputs must be positive real numbers.
isTriangle(x = c(2,4,7),
y = c(3, 4, 5), # fixed it
z = c(6, 8, 10))
## [1] FALSE FALSE TRUE
1. Actually you don’t need the brackets if you plan to put only one expression in them. Many people keep the brackets, though, for the sake of clarity.
|
2019-05-20 06:38:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5769199132919312, "perplexity": 2917.5496079403397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00076.warc.gz"}
|
https://gateoverflow.in/86789/test-by-bikram-data-structures-test-2-question-21
|
574 views
Following keys have to be inserted in exact order into the hash table with $9$ slots.
$5, 28, 19, 15, 20, 33, 12, 17, 10$
The auxiliary hash functions is $h(k)=K$ mod table size, where table size is $9$ . Which of the following represent the contents of the hash table in correct order after insertions are performed using linear probing?
1. $12,28,19,20,10,5,15,33,17$
2. $10,28,19,20,12,5,15,33,17$
3. $33,28,19,20,12,5,15,10,17$
4. $20,28,19,10,12,5,15,33,17$
why not D is correct plz explain?????
i changed previous question as i found that had insufficient data to answer, please check this new question.
is the ans B?
B is the answer because in linear probing we linearly probe for the next slot. Indexes will be from 0 to 8 since, the hash function is K mod 9.
|
2023-02-07 18:42:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3484564423561096, "perplexity": 1119.1833928975766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00356.warc.gz"}
|
https://math.stackexchange.com/questions/3107552/finite-element-method-for-vector-valued-functions
|
# Finite Element Method for vector valued functions
I need help with the finite element method for the following problem I present in weak formulation. Certain details are left out since they are not important for the essence of this question. This is actually an eigenproblem for the model of the curved rod in mathematical elasticity.
The question is, generally, how does one construct the finite-dimensional subspace when the objective is to approximate vector-valued functions? In particular, consider the following problem.
Let $$V=\{(v,w)\in H^1(0,l)^3\times H^1(0,l)^3: v'+t\times w=0, v(0)=w(0)=\mathbf{0}\}$$. Here, $$l>0$$, $$H^1(0,l)^3$$ is the Sobolev $$H^1$$ space of functions $$(0,l)\mapsto\mathbb{R}^3$$ and $$t$$ is also some function with values in $$\mathbb{R}^3$$.
We want to obtain all $$\lambda\in\mathbb{R}$$ which for there exist $$(u,\omega)\neq\mathbf{0}\in V$$ such that: $$$$\tag{1} \label{problem_statement} \int_{0}^{l}Q(x)HQ(x)^T\omega(x)'\cdot w(x)'=\lambda\int_{0}^{l}\rho(x)A(x)u\cdot v, \hspace{0.2cm}\forall(v,w)\in V\$$$$ $$Q(x)$$ is an orthonormal matrix for any $$x\in(0,l)$$, $$H$$ can be considered a constant real symmetric $$3\times 3$$ matrix while $$\rho, A\in\mathcal{C}(0,l; \mathbb{R})$$.
I ought to use FEM to numerically solve this problem and I cannot seem to figure out how to do this. After we define two bilinear forms $$B,L:V\times V\to\mathbb{R}$$: \begin{align*} B((u,\omega),(v,w))&=\int_{0}^{l}QHQ^T\omega'\cdot w'\\ L((u,\omega),(v,w))&=\int_{0}^{l}\rho A u\cdot v \end{align*} $$(1)$$ becomes: \begin{align*} B((u,\omega),(v,w))=\lambda L((u,\omega),(v,w)) \end{align*}
Now I cannot figure out how to define the finite-dimensional subspace for FEM. Let us denote with $$\widetilde{V_n}$$ the usually used space of Lagrange's first degree polynomials from $$(0,l)$$ to $$\mathbb{R}$$. I tried setting $$V_n=\widetilde{V_n}^3$$. For instance, the function $$u$$ approximation $$u_n\in V_n$$ is thus given with component-wise approximation: \begin{align*} u_n(x)=\sum_{i=1}^{n}(\alpha_i^1\phi_i(x),\alpha_i^2\phi_i(x),\alpha_i^3\phi_i(x)) \end{align*} Here $$\phi_i\in V_n$$ and $$\alpha_i^j\in\mathbb{R}$$ for $$j=1,2,3, i=1,2,\dots,n$$.
This, unfortunately, has not lead me anywhere. I know my question might resemble this one: System of equations for vector valued functions problems. However, I obviously tried the approach suggested there.
Any help is immensely appreciated.
• What doesn’t work exactly? – VorKir Feb 12 at 4:25
• @VorKir: This should, in principle, work. However, the problem is that one can not use the space of first order Lagrange polynomials as I described above for the FE, because they need not satisfy the condition: $v'+t\times w=\mathbf{0}$. Fortunately, I managed to figure this out. One should simply use the mixed-formulation of the problem as it will incorporate the problematic condition into the equation and thus remove it from the FE space. The rest can be used as I had described, albeit one should use second order Lagrange polynomials. – forbes Feb 22 at 12:17
• Do you need this condition in the definition of the space? You can add it into the formulation via a lagrange multiplier, as an option. Then you could use the more conventional f.e. space – VorKir Feb 22 at 20:33
|
2019-05-25 03:48:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8825843930244446, "perplexity": 458.455227659036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257847.56/warc/CC-MAIN-20190525024710-20190525050710-00431.warc.gz"}
|
http://docs.astropy.org/en/stable/api/astropy.coordinates.BaseAffineTransform.html
|
# BaseAffineTransform¶
class astropy.coordinates.BaseAffineTransform(fromsys, tosys, priority=1, register_graph=None)[source]
Base class for common functionality between the AffineTransform-type subclasses.
This base class is needed because AffineTransform and the matrix transform classes share the _apply_transform() method, but have different __call__() methods. StaticMatrixTransform passes in a matrix stored as a class attribute, and both of the matrix transforms pass in None for the offset. Hence, user subclasses would likely want to subclass this (rather than AffineTransform) if they want to provide alternative transformations using this machinery.
|
2019-12-14 22:07:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21553553640842438, "perplexity": 6881.4031504764025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541294513.54/warc/CC-MAIN-20191214202754-20191214230754-00268.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-r-review-of-basic-concepts-r-6-rational-exponents-r-6-exercises-page-64/89
|
## Precalculus (6th Edition)
$\color{blue}{k^{-2}(4k+1)}$
The least exponent of the variable is $-2$. Factor out $k^{-2}$ to obtain: $=\color{blue}{k^{-2}(4k+1)}$
|
2018-04-21 17:52:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21239632368087769, "perplexity": 1645.595834498548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945272.41/warc/CC-MAIN-20180421164646-20180421184646-00412.warc.gz"}
|
https://databasetown.com/dot-product-of-two-matrices-in-python/
|
Dot Product of Two Matrices in Python
The product of two matrices A and B will be possible if the number of columns of a Matrix A is equal to the number of rows of another Matrix B. A mathematical example of dot product of two matrices A & B is given below.
If
$$\displaystyle A=\left[ {\begin{array}{*{20}{c}} 1 & 2 \\ 3 & 4 \end{array}} \right]$$
and
$$\displaystyle B=\left[ {\begin{array}{*{20}{c}} 3 & 2 \\ 1 & 4 \end{array}} \right]$$
Then,
$$\displaystyle AB=\left[ {\begin{array}{*{20}{c}} 1 & 2 \\ 3 & 4 \end{array}} \right] \left[ {\begin{array}{*{20}{c}} 3 & 2 \\ 1 & 4 \end{array}} \right]$$
$$\displaystyle AB=\left[ {\begin{array}{*{20}{c}} {1\times 3+2\times 1} & {1\times 2+2\times 4} \\ {3\times 3+4\times 1} & {3\times 2+4\times 4} \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} {3+2} & {2+8} \\ {9+4} & {6+16} \end{array}} \right]$$
$$\displaystyle AB=\left[ {\begin{array}{*{20}{c}} 5 & {10} \\ {13} & {22} \end{array}} \right]$$
Let’s start a practical example of dot product of two matrices A & B in python. First, we import the relevant libraries in Jupyter Notebook.
Dot Product of two Matrices
Let’s see another example of Dot product of two matrices C and D having different values.
If all the diagonal elements of a diagonal matrix are same, then it is called a Scalar Matrix. We can also take the dot product of two scalars which result will also a scalar, like this
Linear Algebra is mostly concerned with operations on vectors and matrices. Let’s take an example of dot product of one scalar and one vector…
It is clear from above snap that, the result obtained after taking dot product of a scalar and a vector is also a vector because a scalar value i.e. 2 is multiplied with each value of a vector i.e. 1, 2, 3 & 4 and we obtained a vector having values 2, 4, 6 & 8.
1. Robert
|
2020-11-28 05:22:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495128750801086, "perplexity": 169.2270468133785}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195069.35/warc/CC-MAIN-20201128040731-20201128070731-00296.warc.gz"}
|
https://physics.stackexchange.com/questions/54251/could-quantum-mechanics-work-without-the-born-rule
|
# Could quantum mechanics work without the Born rule?
Slightly inspired by this question about the historical origins of the Born rule, I wondered whether quantum mechanics could still work without the Born rule. I realize it's one of the most fundamental concepts in QM as we understand it (in the Copenhagen interpretation) and I know why it was adopted as a calculated and extremely successful guess, really. That's not what my question is about.
I do suspect my question is probably part of an entire field of active research, despite the fact that the theory seems to work just fine as it is. So have there been any (perhaps even seemingly promising) results with other interpretations/calculations of probability in QM? And if so, where and why do they fail? I've gained some insight on the Wikipages of the probability amplitude and the Born rule itself, but there is no mention of other possibilities that may have been explored.
• The many worlds interpretation (MWI), Bohmian mechanics, and dynamic collapse theories all discard with the Born rule as a postulate. In all three theories the subjective appearance of the Born rule is explained as a consequence of other postulates. Feb 18 '13 at 15:12
• Possible duplicate: physics.stackexchange.com/questions/49859/… Feb 19 '13 at 2:31
The authors conduct a three-slit photon experiment and find that the magnitude of the third order interference is less than $10^{-2}$ of the expected second order interference.
|
2021-09-19 17:09:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5493741631507874, "perplexity": 380.18071099125785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00633.warc.gz"}
|
http://community.rapidminer.com/t5/RapidMiner-Studio-Forum/the-definition-of-deviation/td-p/923
|
Contributor
# the definition of deviation
Dear Rapid-I users.
I am really confused at the value following the "+/-" sign when evaluatnig the performance in accuracy or area under curve. Is it standard deviation, confidence interval , or something else ? Is there anyway to find out ? Some comment/suggestion , please ?
Thanks,
Kevin
|
2017-05-25 03:22:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640376329421997, "perplexity": 2650.9310835846754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607963.70/warc/CC-MAIN-20170525025250-20170525045250-00513.warc.gz"}
|
https://testbook.com/objective-questions/mcq-on-electric-flux-density--5eea6a0b39140f30f369ddb6
|
A hollow metallic sphere of radius 'r' is kept at a potential of 1 volt. The total electric flux coming out of the concentric spherical surface of radius R(>r) is
1. 4πε0r
2. 4πε0r2
3. 4πε0R
4. None of the above
Option 1 : 4πε0r
Electric Flux Density MCQ Question 1 Detailed Solution
Concept:
By Gauss law, $$ϵ \oint \vec{E}.d\vec{s}={{Q}_{encl}}$$
Thus, $$ϵ \oint \vec{E}.d\vec{s}={{ϵ }_{0}}E\left( 4π {{r}^{2}} \right)$$
$$ϕ = \oint \vec{E}.d\vec{s}$$
where ϕ = flux coming out of the concentric sphere.
r = radius of the sphere
Calculation:
Since electric potential can be defined as:
$$V = \frac{Q}{{4πϵ{_0}r}}$$
V = 1 V (given)
Q = 4 π ϵ0 r
$$ϕ =ϵ \oint \vec{E}.d\vec{s}={4\pi\epsilon_0r}$$
Electric displacement density D at any point on the spherical surface of radius r with a charge Q at the center in a medium with dielectric constant ε is:
1. Q / (4π ε r2)
2. Q / (4π r2)
3. (Q / 4π ε r)2
4. Q / (4π r)2
Option 2 : Q / (4π r2)
Electric Flux Density MCQ Question 2 Detailed Solution
Explanation:
The electric field intensity due to a point charge Q at any spherical surface of radius r is given by:
$$\bar{E}=\frac{Q}{4\piϵ r^2}$$
ϵ = permittivity of the material/medium.
If we multiply the electric field intensity by permittivity ϵ, we get a new vector D
$$D=\epsilon E[\frac{C}{m^2}]$$
$$\bar{D}=\frac{Q}{4\piϵ r^2}\times \epsilon$$
$$\bar{D}=\frac{Q}{4\pi r^2}$$
Hence option (2) is the correct answer.
1. This new vector is called the Electric flux density.
2. This vector has the same direction as E but unlike E, it is independent of ϵ and therefore of the material properties.
3. The unit of D is C/m2
Important Points
Gauss law states that,
$$\mathop{\oint }_{s}\bar{E}.d\bar{s}=\frac{Q}{{{ϵ }_{0}}}$$
Since D = ϵ0E, the above can be written as:
$$\mathop{\oint }_{s}\bar{D}.d\bar{s}={Q}$$
Where Q is the total charge enclosed by the surfaces.
Given that the electric flux density D = zρ (cos2 Φ) âz, C/m2 The charge density at point (1,π/4, 3) is
1. 3
2. 1
3. 0.5
4. 0.5 a
Option 3 : 0.5
Electric Flux Density MCQ Question 3 Detailed Solution
Concept:
Charge density
Pv = ∇ ⋅ D (Gauss-Divergence theorem)
In integral form
ϕ D⋅ds = ∫ρvdv
In cylindrical co-i
$$\nabla \cdot \vec D = \;\frac{1}{\rho}\frac{{\partial \left( {\rho{D_r}} \right)}}{{\partial \rho}} + \frac{1}{\rho}\frac{{\partial {D_\phi }}}{{\partial \phi }} + \frac{{\partial {D_z}}}{{\partial z}}$$
Calculation:
Given D = z ρ (cos2 ϕ) âz
So $$\nabla \cdot D = \frac{\partial }{{\partial z}}\left[ {z\rho {{\cos }^2}\phi } \right]$$ (Z-component)
$$\nabla \cdot D = \rho {\cos ^2}\phi = {\rho _v}$$
At $$P = 1,\;\phi = \frac{\pi }{4}$$
$${\rho _v} = 1 \cdot {\cos ^2}\left( {\frac{\pi }{4}} \right) = \frac{1}{2}$$
So ρv = 0.5 c/m3
Option (c) correct choice.
Maxwell equation for static and magnetic field
Differential (or point form) Integral form Remark $$\vec \nabla \cdot \vec D = {\rho _v}$$ $$\mathop \oint \nolimits_s D \cdot ds = \mathop \smallint \nolimits_v {\rho _v}dv$$ Gauss law $$\vec \nabla \cdot \vec B = 0$$ $$\mathop \oint \nolimits_s \vec B \cdot \vec ds = 0$$ Non existence of magnetic monopole $$\nabla \times \vec E = 0$$ $$\mathop \oint \nolimits_L E \cdot dl = 0$$ Conservative nature of the electrostatic field. $$\vec \nabla \times \vec H = \vec J$$ $$\mathop \oint \nolimits_L H \cdot dl = \mathop \smallint \nolimits_s \vec J \cdot d\vec s$$ Ampers law.
Divergence in:
I) Cartesian form: If $$\vec A = {A_x}{q_x} + {A_y}{a_y} + {a_z}{a_z}$$
$$\nabla \cdot \vec A = \frac{2}{{2x}}{A_x} + \frac{2}{{2y}}{A_y} + \frac{\partial }{{\partial z}}{A_z}$$
II) In polar form:
$$\nabla \cdot \vec A = \frac{1}{{{r^2}}}\frac{\partial }{{\partial r}}\left( {{r^2}Ar} \right) + \frac{1}{{r\sin \theta }}\frac{\partial }{{\partial \theta }}(\sin \theta \;A\theta ) + \frac{1}{{r\sin \theta }}\frac{{\partial A}}{{\partial \phi }}$$
The electric field in free space is
1. $$\frac{D}{\epsilon_0}$$
2. $$\frac{D}{\mu_0}$$
3. ϵ0D
4. $$\frac{\sigma}{\epsilon_0}$$
Option 1 : $$\frac{D}{\epsilon_0}$$
Electric Flux Density MCQ Question 4 Detailed Solution
The electric field due to a point charge Q at a distance r is given by:
$$\bar{E}=\frac{Q}{4\piϵ r^2}$$
ϵ = permittivity of the material/medium.
If we multiply the electric field intensity by permittivity ϵ, we get a new vector D
D = ϵ.E $$[\frac{C}{m^2}]$$
$$\bar{D}=\frac{Q}{4\piϵ r^2}\times \epsilon$$
$$\bar{D}=\frac{Q}{4\pi r^2}$$
• This new vector is called the Electric flux density.
• This vector has the same direction as E but unlike E, it is independent of ϵ and therefore of the material properties.
• The unit of D is C/m2
Important Application:
Gauss law states that,
$$\mathop{\oint }_{s}\bar{E}.d\bar{s}=\frac{Q}{{{ϵ }_{0}}}$$
Since D = ϵ0E, the above can be written as:
$$\mathop{\oint }_{s}\bar{D}.d\bar{s}={Q}$$
Where Q is the total charge enclosed by the surfaces.
A hollow metallic sphere of radius 'r' is kept at a potential of 1 volt. The total electric flux coming out of the concentric spherical surface of radius R(>r) is
1. 4πε0r
2. 4πε0r2
3. 4πε0R
4. None of the above
Option 1 : 4πε0r
Electric Flux Density MCQ Question 5 Detailed Solution
Concept:
By Gauss law, $$ϵ \oint \vec{E}.d\vec{s}={{Q}_{encl}}$$
Thus, $$ϵ \oint \vec{E}.d\vec{s}={{ϵ }_{0}}E\left( 4π {{r}^{2}} \right)$$
$$ϕ = \oint \vec{E}.d\vec{s}$$
where ϕ = flux coming out of the concentric sphere.
r = radius of the sphere
Calculation:
Since electric potential can be defined as:
$$V = \frac{Q}{{4πϵ{_0}r}}$$
V = 1 V (given)
Q = 4 π ϵ0 r
$$ϕ =ϵ \oint \vec{E}.d\vec{s}={4\pi\epsilon_0r}$$
Electric displacement density D at any point on the spherical surface of radius r with a charge Q at the center in a medium with dielectric constant ε is:
1. Q / (4π ε r2)
2. Q / (4π r2)
3. (Q / 4π ε r)2
4. Q / (4π r)2
Option 2 : Q / (4π r2)
Electric Flux Density MCQ Question 6 Detailed Solution
Explanation:
The electric field intensity due to a point charge Q at any spherical surface of radius r is given by:
$$\bar{E}=\frac{Q}{4\piϵ r^2}$$
ϵ = permittivity of the material/medium.
If we multiply the electric field intensity by permittivity ϵ, we get a new vector D
$$D=\epsilon E[\frac{C}{m^2}]$$
$$\bar{D}=\frac{Q}{4\piϵ r^2}\times \epsilon$$
$$\bar{D}=\frac{Q}{4\pi r^2}$$
Hence option (2) is the correct answer.
1. This new vector is called the Electric flux density.
2. This vector has the same direction as E but unlike E, it is independent of ϵ and therefore of the material properties.
3. The unit of D is C/m2
Important Points
Gauss law states that,
$$\mathop{\oint }_{s}\bar{E}.d\bar{s}=\frac{Q}{{{ϵ }_{0}}}$$
Since D = ϵ0E, the above can be written as:
$$\mathop{\oint }_{s}\bar{D}.d\bar{s}={Q}$$
Where Q is the total charge enclosed by the surfaces.
A sphere of radius 10 cm has volume charge density $$\rho_v=\frac{r^3}{100}$$ C/m3. If it is required to make electric flux density D̅ = 0, for r > 10 cm, then the value of point charge that must be placed at the center of the sphere is ______ nC.
Electric Flux Density MCQ Question 7 Detailed Solution
For r > 10 cm,
$$Q_{enclosed}=\displaystyle\oint_{s}\vec {D}.{ds}=Q+\displaystyle\int_v\rho_vdv$$
If $$\vec{D}$$ = 0 for r > 10 cm,
then
$$Q=-\displaystyle\int_v\rho_vdv=-\displaystyle\int^{2\pi}_{\phi=0}\displaystyle\int^{\pi}_{\theta=0}\displaystyle\int^{0.1}_{r=0}\frac{r^3}{100}.r^2\sin\theta ~d\theta ~d\phi ~dr$$
$$=-4\pi\times10^{-2}\displaystyle\int^{0.1}_{r=0}r^5dr$$
$$=-\left[4\pi\times10^{-2}\times\frac{r^6}{6}\right]_0^{0.1}=-20.94$$ nC
Given that the electric flux density D = zρ (cos2 Φ) âz, C/m2 The charge density at point (1,π/4, 3) is
1. 3
2. 1
3. 0.5
4. 0.5 a
Option 3 : 0.5
Electric Flux Density MCQ Question 8 Detailed Solution
Concept:
Charge density
Pv = ∇ ⋅ D (Gauss-Divergence theorem)
In integral form
ϕ D⋅ds = ∫ρvdv
In cylindrical co-i
$$\nabla \cdot \vec D = \;\frac{1}{\rho}\frac{{\partial \left( {\rho{D_r}} \right)}}{{\partial \rho}} + \frac{1}{\rho}\frac{{\partial {D_\phi }}}{{\partial \phi }} + \frac{{\partial {D_z}}}{{\partial z}}$$
Calculation:
Given D = z ρ (cos2 ϕ) âz
So $$\nabla \cdot D = \frac{\partial }{{\partial z}}\left[ {z\rho {{\cos }^2}\phi } \right]$$ (Z-component)
$$\nabla \cdot D = \rho {\cos ^2}\phi = {\rho _v}$$
At $$P = 1,\;\phi = \frac{\pi }{4}$$
$${\rho _v} = 1 \cdot {\cos ^2}\left( {\frac{\pi }{4}} \right) = \frac{1}{2}$$
So ρv = 0.5 c/m3
Option (c) correct choice.
Maxwell equation for static and magnetic field
Differential (or point form) Integral form Remark $$\vec \nabla \cdot \vec D = {\rho _v}$$ $$\mathop \oint \nolimits_s D \cdot ds = \mathop \smallint \nolimits_v {\rho _v}dv$$ Gauss law $$\vec \nabla \cdot \vec B = 0$$ $$\mathop \oint \nolimits_s \vec B \cdot \vec ds = 0$$ Non existence of magnetic monopole $$\nabla \times \vec E = 0$$ $$\mathop \oint \nolimits_L E \cdot dl = 0$$ Conservative nature of the electrostatic field. $$\vec \nabla \times \vec H = \vec J$$ $$\mathop \oint \nolimits_L H \cdot dl = \mathop \smallint \nolimits_s \vec J \cdot d\vec s$$ Ampers law.
Divergence in:
I) Cartesian form: If $$\vec A = {A_x}{q_x} + {A_y}{a_y} + {a_z}{a_z}$$
$$\nabla \cdot \vec A = \frac{2}{{2x}}{A_x} + \frac{2}{{2y}}{A_y} + \frac{\partial }{{\partial z}}{A_z}$$
II) In polar form:
$$\nabla \cdot \vec A = \frac{1}{{{r^2}}}\frac{\partial }{{\partial r}}\left( {{r^2}Ar} \right) + \frac{1}{{r\sin \theta }}\frac{\partial }{{\partial \theta }}(\sin \theta \;A\theta ) + \frac{1}{{r\sin \theta }}\frac{{\partial A}}{{\partial \phi }}$$
An infinite plane with Q C/m2 charge density is placed at X = 6 m. If the magnitude of electric field is 30 V/m at origin, then the value of Q (in nF) is____
Electric Flux Density MCQ Question 9 Detailed Solution
Electric field strength exerted by an infinite sheet with charge density Q C/m2 is
$$\begin{array}{l} E = \frac{Q}{{2{\epsilon_0}}}\\ \Rightarrow 30 = \frac{Q}{{2 \times 8.854 \times {{10}^{ - 12}}}}\\ \Rightarrow Q = 0.53\;nF/{m^2} \end{array}$$
|
2021-10-21 02:59:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9481428265571594, "perplexity": 1954.5334046646365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585380.70/warc/CC-MAIN-20211021005314-20211021035314-00152.warc.gz"}
|
https://stacks.math.columbia.edu/tag/01GB
|
Definition 25.4.1. Let $\mathcal{C}$ be a site. Let $K$ be a simplicial object of $\textit{PSh}(\mathcal{C})$. By the above we get a simplicial object $\mathbf{Z}_ K^\#$ of $\textit{Ab}(\mathcal{C})$. We can take its associated complex of abelian presheaves $s(\mathbf{Z}_ K^\# )$, see Simplicial, Section 14.23. The homology of $K$ is the homology of the complex of abelian sheaves $s(\mathbf{Z}_ K^\# )$.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2022-06-27 17:39:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9595659971237183, "perplexity": 214.29705435767744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00579.warc.gz"}
|
http://www.ams.org/mathscinet-getitem?mr=932459
|
MathSciNet bibliographic data MR932459 93-02 (42B30 93B28 93B50 93C05) Francis, Bruce A. A course in $H\sb \infty$$H\sb \infty$ control theory. Lecture Notes in Control and Information Sciences, 88. Springer-Verlag, Berlin, 1987. x+150 pp. ISBN: 3-540-17069-3 Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
Username/Password Subscribers access MathSciNet here
AMS Home Page
American Mathematical Society 201 Charles Street Providence, RI 02904-6248 USA
© Copyright 2016, American Mathematical Society
Privacy Statement
|
2016-09-01 01:52:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062866568565369, "perplexity": 8739.82124067272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982956861.76/warc/CC-MAIN-20160823200916-00087-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/questions/182696/angular-reusing-an-observable-for-different-queries
|
# Angular - reusing an observable for different queries
This is just a dummy example, but in my real world application I get a lot of data from the web API and I'd like to manipulate it on the client's side. So, I kind of stored the observable I get from the HTTP request locally and change the values with map:
@Injectable()
export class UsersApiService {
private readonly baseUrl: string = 'https://reqres.in/api/users';
resource$: Observable<any>; constructor(private http: HttpClient) { this.resource$ = this.http.get<IUserDetails[]>(this.baseUrl).pipe(
tap((data) => {
console.log('"getUsers" successfully called!');
}),
map((data: any) => {
return data.data;
})
).publishReplay(1).refCount();
}
getUsers(): Observable<IUser[]> {
return this.resource$.pipe( map((data: IUserDetails[]) => { return <IUser[]>data.map((u) => { return { id: u.id, name: ${u.first_name} ${u.last_name} }; }); }) ); } getUserById(id: number): Observable<IUserDetails> { return this.resource$.pipe(
map((data) => {
return <IUserDetails>data.find(x => x.id === id);
})
);
}
}
Here's an example plunker
Is this the correct approach at reusing an observable? What would be better? Am I creating any memory leaks and what are the general drawbacks of my approach? Thanks!
There's nothing (obviously) wrong with your code. You deal with observables the efficient way, IMO. Cache is good. The fact you're constructing a "base" observable in constructor without subscribing to it there is good as well...
There are only a few things I can point out:
• Design: Be careful with caching, if your user information changes often, the cache may bite you back. You may want to implement some kind of cache invalidation policy.
• Design: If your system has thousands of users, you may want to not load all of them into memory at once. That would require your API support pagination though. If the users are few, it's totally okay to do what you're doing, I think.
• RxJs: not sure why pipe() and tap() are used the way are used. I think, it's easier to use do() and map() as shown in the code below. Chaining looks much more readable (as "step-by-step" explanation of how the data is transformed into the desired result).
• Idioms: TypeScript is all about types. Do specify a known type when possible instead of using any. E.g. Observable<IUserDetails[]> is better than Observable<any[]>.
• Style: Variable naming is important. Do not use one-letter names (u), they are pure evil. Names like data are also evil, in your case it's better to call it response. The resource$ can probably be nicer to the reader if called userDetails$. You get the idea.
@Injectable()
export class UsersApiService {
private readonly baseUrl: string = 'https://reqres.in/api/users';
userDetails$: Observable<IUserDetails[]>; constructor(private http: HttpClient) { this.userDetails$ = this.http
.get<IUserDetails[]>(this.baseUrl)
.do(response => console.log('"getUsers" successfully called!'))
.map(response => response.data)
.publishReplay(1)
.refCount();
}
getUsers(): Observable<IUser[]> {
return this.userDetails$.map(userDetailsList => userDetailsList.map(userDetails => <IUser>{ id: userDetails.id, name: ${userDetails.first_name} ${userDetails.last_name} } ) ); } getUserById(id: number): Observable<IUserDetails> { return this.userDetails$
.map(userDetailsList => <IUserDetails>userDetailsList.find(userDetails => userDetails.id === id));
}
}
• Sadly I can't comment on the answer above, so I'll just write a reply here. Thanks for your time! Firstly, I'm using this for a menu, since it has a lot of itmes (>1000), so I'll just manipulate the data locally, not going to query server each time the user clicks a link. I'm using pipe since I believe Google has been forcing that with their new version of HttpClient. Other two comments I agree with, will improve that part of the code. Now, I have a follow up question: do I really need .publishReplay(1).refCount(); in the initial HTTP request? – uglycode Dec 13 '17 at 22:56
• Could you show sources where Google forces for pipe? I am unsure about it. – Igor Soloydenko Dec 13 '17 at 22:58
• .publishReplay(1).refCount(); is basically a "standard" simple implementation of a cache. Without it, each subscription on getUserById()/getUsers() will result in an HTTP call which you want to avoid. So, yes you need this code as long as you want to cache the result. – Igor Soloydenko Dec 13 '17 at 23:00
• @user155748 do you have two accounts - this one and user155724? – Sᴀᴍ Onᴇᴌᴀ Dec 13 '17 at 23:02
• Thanks for your time! Firstly, I'm using this for a menu, since it has a lot of itmes (>1000), so I'll just manipulate the data locally, not going to query server each time the user clicks a link. Here's the part: angular.io/tutorial/toh-pt6 search for .pipe( I believe there are 17 occurrences. I've just tested the application without publishReplay and indeed, the service gets called each time. Hm, hm, I wonder why is that, without this publishReplay, we have a cold observable? – uglycode Dec 13 '17 at 23:10
|
2019-10-17 02:10:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28697165846824646, "perplexity": 2447.862553052861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00466.warc.gz"}
|
https://en.wikibooks.org/wiki/Real_Analysis/Section_1_Exercises/Answers
|
Answers in this wikibook work on the didactic that you should still be able to logically piece together a proof that can sufficiently answer a question. Thus, no copy-paste answer is available on this page. Instead, the answer section simply lays out all the tools to prove the problem sufficiently. Many answers will provide hidden questions since some assumptions may not be known to you. Luckily, some of these assertions will have answers elsewhere in the wikibook or can be solved independently.
Note that all solutions are suggestions. There can be multiple ways to solve a question, and as this is a collaborative wikibook, you are free to put down your answer (as long as it follows out guidelines).
## Unsorted
### Problem 4: Prove that ${\displaystyle 0=-0}$
1. One algebraic property of zero is ${\displaystyle \forall x\in \mathbb {R} :x-x=0}$
2. And we can use either valid theorem
• Subtracting both sides of the equation by the same variable is valid.
• Multiplying both sides by negative 1 is valid.
3. And finally, substitution of whole terms is valid.
## Algebra
1. As a side note, this is why you do the "inequality sign flip" when you "multiply by -1"
{\displaystyle {\begin{aligned}a&
2. Intuition solved!
{\displaystyle {\begin{aligned}a&
3. Part 1: Let's get a new property from the given statement by applying (II)
{\displaystyle {\begin{aligned}c&>d\\-c&<-d\end{aligned}}}
Part 2: Let's solve it now
{\displaystyle {\begin{aligned}a&
3. The following answers only outline a method to answer the question
4. The following answers only outline a method to answer the question
1. The largest hurdle is thinking of ways to prove this since distribution of the power term has not been established yet. However, there is a method:
2. Start off with ${\displaystyle 1=1}$
3. Apply the existence of an inverse as such: ${\displaystyle (a\cdot 1/a)\cdot (b\cdot 1/b)=1}$
4. Redistribute and swap over variables to the other side until the equivalency shows up. As a last resort, consult the following whited out hint in text format: <pre style="text-color: white;">(ab) × 1/a 1/b = 1 → 1/a × 1/b = 1/ab</pre>
5. Assuming problem 4I has been solved, this question is relatively easy.
6. Assuming problem 4I has been solved, this question is relatively easy.
## Absolutes
1. As a side note, you should generally not use squares in your proof! However, it works here just because we're dealing with absolute values, the resulting number when you reverse the squaring operation.
{\displaystyle {\begin{aligned}|a+b|&\leq |a|+|b|\\|a+b|^{2}&\leq (|a|+|b|)^{2}\\a^{2}+2ab+b^{2}&\leq |a|^{2}+2|a||b|+|b|^{2}\\a^{2}+2ab+b^{2}&\leq a^{2}+2|ab|+b^{2}\\2ab&\leq 2|ab|\\ab&\leq |ab|\\\square \end{aligned}}}
## Number Theory
### Problem 1: Prove the following properties on even and odd numbers
1. These definitions can be substituted in for each of the following problems.
• The definition of an even number e is ${\displaystyle \exists k\in \mathbb {Z} :e=2k}$
• The definition of an odd number o is ${\displaystyle \exists j\in \mathbb {Z} :o=2j+1}$
2. Multiplication and addition are closed under the integers.
3. The distributive law extends to integers, and this axiom is probably the most difficult theorem that is used to solve these problems.
4. Any necessary factor can be expressed as a variable to clean up interpretation.
### Problem 2: Prove that no consecutive number of a perfect square is also a perfect square
1. In math notation, the question asks ${\displaystyle \forall x\in \{x^{2}:x\in \mathbb {N} \}:[(x+1)^{2}\neq x^{2}+1]\land [(x-1)^{2}\neq x^{2}-1]}$
2. Some valid theorems are the distributive law for natural numbers and adding the same number over both sides of an equation.
### Problem 6: Prove that any square root of a prime number is irrational
1. A prime number does not include 1 and should only have itself as a factor.
2. If ${\displaystyle {\sqrt {p}}}$ is rational, then ${\displaystyle {\sqrt {p}}=r/s}$ for some coprime integers ${\displaystyle s}$ and ${\displaystyle r}$; ${\displaystyle s}$ and ${\displaystyle r}$ are in its lowest terms.
3. Types of ways to reach a contradiction:
• Both ${\displaystyle s}$ and ${\displaystyle r}$ can be shown to be divisible by the prime number ${\displaystyle p}$ after exposing the prime number using algebraic manipulation, thereby breaking the coprime definition.
1. "Any natural number has a prime number as a factor" is valid.
• The prime number ${\displaystyle p}$, after exposing the prime number using algebraic manipulation, can be shown to have a factor of two of the same natural number squared, which breaks the prime number definition.
1. "Any natural number cannot be expressed as a fraction in its lowest common factor form and have its denominator anything other than 1" is valid.
|
2016-07-23 23:19:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 23, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6983470916748047, "perplexity": 614.1541148013678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823802.12/warc/CC-MAIN-20160723071023-00133-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://math.paperswithcode.com/paper/inverse-eigenvalue-problem-of-cell-matrices
|
## Inverse Eigenvalue Problem of Cell Matrices
12 Dec 2017 · Khim Sreyaun, Rodtes Kijti ·
In this paper, we consider the problem of reconstructing an $n \times n$ cell matrix $D(\vec{x})$ constructed from a vector $\vec{x} = (x_{1}, x_{2},\dots, x_{n})$ of positive real numbers, from a given set of spectral data. In addition, we show that the spectrum of cell matrices $D(\vec{x})$ and $D(\pi(\vec{x}))$ are the same, for every permutation $\pi \in S_{n}$...
PDF Abstract
# Code Add Remove Mark official
No code implementations yet. Submit your code now
# Categories
Rings and Algebras
|
2021-08-03 01:35:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7232798933982849, "perplexity": 1531.0896134530528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00025.warc.gz"}
|
http://www.evia.org.es/71pwkxe/6e8955-artificial-lighting-strategies
|
## artificial lighting strategies
Also, the release of volatile compounds contributes considerably to ozone layer degradation. Natural lighting, also known as daylighting, is a technique that efficiently brings natural light into your home using exterior glazing (windows, skylights, etc. 0000175722 00000 n 0000025440 00000 n As discussed herein above, the lipids are intracellular products and, therefore, the lipid productivity depends on both the lipid content as biomass productivity. As the parameters NER and NEB are mainly based on biomass productivity, they were clustered together on the top left-hand of the plot. 0000023070 00000 n 0000113629 00000 n 0000129684 00000 n 0000081543 00000 n It is commonly agreed that environmental deterioration and fossil fuel depletion are becoming two worldwide issues that threaten human development [1]. Together, the principal component 1 (PC1) and principal component 2 (PC2) explained 93.14% of the overall variance. 0000141164 00000 n 0000021897 00000 n 1b), as a strategy to improve the performance of the process. Note that fluorescent bulbs and neon lighting are halogen sources. 686 0 obj <> endobj The cell concentration was monitored every 24 h during the growth phase of the microorganism. This explanation was confirmed by the light regimes of 24:0 h, 22:2 h, 20:4 h, 1 t/d, 8 t/d, 12 t/d, and 48 t/d, which showed the highest PUFAs values and consequently lower CN values (below the values established by the highest international standards). The authors are grateful to the Coordination for the Improvement of Higher Education Personnel (CAPES) (Grant Number 001) for the financial support. 0000131048 00000 n Jacob-Lopes and coworkers [8] also associate the low cell growth under conditions with long-term in the dark to the limited carbon source for cell growth. 0000143320 00000 n Finally, in terms of the fatty acid profile of short photoperiods, the lipid fraction of biomass indicated that this lipid profile was completely different from the other light conditions tested. 0000222984 00000 n Biomass data were used to calculate the biomass productivity [PX = Xmax∙µmax, mg/L h] and lipid productivity [PL = PX∙LC, mg/L h], where Xmax is the maximum biomass concentration (mg/L), µmax is the maximum specific growth rate (h−1) and LC is lipid content of the biomass (%). 0000142201 00000 n 3 Metering Tips for New Facilities Managers. Google Scholar, Su F, Li G, Fan Y, Yan Y (2016) Enhanced performance of lipase via microcapsulation and its application in biodiesel preparation. 0000026626 00000 n 0000099565 00000 n In addition, Table 2 presents the respective impact category indicators using characterization factors. Natural lighting has been proven to increase health and comfort levels for building occupants. 0000131616 00000 n 0000099280 00000 n This can be associated with the photoinhibition phenomenon, where, when the cells are exposed to continuous illumination, there may be an excess of light energy. Thermostats controlled the temperature. As shown in Fig. The condition that presented better both quantitative and qualitative values for biodiesel production was the frequency photoperiod of 24 times per day. Thus, both the problem of low viscosity microalgae biodiesel and high viscosity of vegetable oils could be solved by blending. 0000090333 00000 n 0000174758 00000 n Algal Res 8:192–204, Zhou Q, Zhang P, Zhan G, Peng M (2015) Biomass and pigments production in photosynthetic bacteria wastewater treatment: effects of photoperiod. */'&+v���������������������а��Ȩ�$ϸ��D��Դ�L3sK+[;{G'g�XW7w/o_?���¼������ںʂ����.n�l˙8Y8�Yy�eE����D�$a�Bm]ch�oB����[s���r+Jj�*�Jk������#BJJjiii�8A �����%r�D�dll�� Obviously, there is a direct reduction of the environmental impact with the illumination time. 0000099712 00000 n Effects of long-term photoperiods (a), frequency photoperiods (b) and short photoperiods (c) on the biomass and lipid productivity. The light energy can be provided by the sunlight, which is the most cost-effective energy source for microalgal production. ����ȤӇ��� %%%�{�q���*ZQ� ����v{�n�se�$^$p�42d0-gX��X�z@�� Besides, the same behavior for Scenedesmus obliquus was found at a long-term photoperiod of 12:12, corresponding to lower values in the order of 0.0023 g/L h [21]. 0000050462 00000 n Daylighting is a building design strategy to use light from sun. Thus, for processes to be considered technically viable, it has been theoretically established that NER should be greater than 1 and negative NEB [40]. 0000142031 00000 n 0000110349 00000 n Bioresour Technol 152:241–246, Abomohra AEF, Jin W, El-Sheekh M (2016) Enhancement of lipid extraction for improved biodiesel recovery from the biodiesel promising microalga Scenedesmus obliquus. Thus, in the light/dark cycle frequency experiments, the cells were exposed to 22 h of light and 2 h of dark, where these 2 h were divided into six frequencies: 2, 4, 8, 12, 24 and 48 times per day (t/d). The fuel properties of biodiesel (ester content, EC; cetane number, CN; iodine value, II; degree of unsaturation, DU; saponification value, SV; long-chain saturated factor, LCSF; cold filter plugging point, CFPP; cloud point, CP; allylic position equivalents, APE; bis-allylic position equivalents, BAPE; oxidation stability, OS; higher heating value, HVV; kinematic viscosity, µ and kinematic density, ρ) were determined through the software BiodieselAnalyzer© 1.1, which estimates biodiesel properties based on the fatty acid profile of the parent oil, through a system of empirical equations [19]. 0000007943 00000 n As a result, alternative sustainable and renewable sources of energy have been developed, such as the biofuels of the first, second, third, and fourth generation. 0000187531 00000 n The significance of lighting as an issue for wildlife is highlighted in publications prepared by organisations such as the Bat Conservation Trust1 (BCT) and Buglife2. x�bb�y����&� Ā B�@Q�F �����6 ����T���\$�e�:x��ȊL�\�6mʄnn�I�Kg/��>�F�����o���es�[�.�˞%Ÿ�}�Yf&!i ��;�ڦtO�6}��쥽��p�/nm/ZP8wV~ޜe����{�fְ�0��͓e�j�b����8{�R��s� 2012).Here, we focus on street lighting because of its universal use and potential for ecological impacts (Gaston et al. The dominant fatty acids were palmitic acid (C16:0), α-linolenic acid (C18:3n6) and linoleic acid (C18:2n6) for all the treatments. 0000113970 00000 n Moreover, although the values of NER and NEB did not reach the necessary values to be considered viable processes, the modulation of photoperiods demonstrated to be an effective strategy to reduce the energy requirements and increased energy produced. 0000110874 00000 n 4a and b). 0000039721 00000 n Appl Energy 88:3438–3443, Sharma KK, Schuhmann H, Schenk PM (2012) High lipid induction in microalgae for biodiesel production. The net energy ratio (NER) $$[{\text{NER}} = \sum {\text{Eout}}/\sum {\text{Ein}}]$$ of a system was defined as the ratio of the total energy produced per day (energy content of the residual biomass, MJ/d) over the energy required for lighting and timer digital of the photobioreactor per day (MJ/d), where Eout is the renewable energy output and Ein is the fossil fuel energy input. Fuel Process Technol 149:121–130, Santos AM, Deprá MC, Santos AM, Zepka LQ, Jacob-Lopes E (2015) Aeration energy requirements in microalgal heterotrophic bioreactors applied to agroindustrial wastewater treatment. 0000110003 00000 n In this regard, this work aimed to identify artificial lighting strategies that balance kinetic performance, energy cost and environmental impact for microalgal biodiesel production in photobioreactors. 0000133666 00000 n 0000173754 00000 n These note the increase in the distribution and intensity of artificial light during the past few decades, and its potential to significantly disrupt species and ecosystems. 0000136280 00000 n 0000140096 00000 n This because the microalgae are unable to use inorganic carbon sources in the absence of light, and the organic carbon concentrations in the culture medium were insufficient for the energy maintenance of respiratory metabolism. Fatty acid composition was determined by using a VARIAN 3400CX gas chromatograph (Varian, Palo Alto, CA, USA. Eleven majority FAs were identified and, as expected, there was variability in FA composition for the different light conditions. 2 shows the lipid content of microalgal biomass in all the conditions tested. Figure 1a and b shows the weights (variables) and scores (treatments), respectively, from the two major principal components. LinkedIn used a similar strategy in 2011 when piloting a new wireless lighting control system in an office building more than 20 years old. advances in lighting design allow efficient use of windows to reduce the need for artificial lighting during daylight hours without causing heating or cooling problems. Algal Res 8(161):167, Rippka R, Derulles J, Watrerbury JB, Herdman M, Stainer RY (1979) Generic assignments, strain histories and properties of pure cultures of cyanobacteria. Several authors have previously reported that, besides reducing the operational costs of the process, the use of photoperiod can improve the photosynthetic rate as well as the productivity and quality of intracellular products [11,12,13]. Finally, the use of photoperiods also proved to be an effective strategy to improve the energy balance and reduce the environmental impact of the microalgae biodiesel production process. It was also observed that the photoperiod of 24 times per day had biomass productivity 35.8% higher than experiments with constant illumination. https://doi.org/10.1007/s42452-019-1761-0, DOI: https://doi.org/10.1007/s42452-019-1761-0, Over 10 million scientific documents at your fingertips. 0000111043 00000 n 0000005867 00000 n or through portable, like floor lamps. [22] showed values of the 16.3 mg/L h. Besides, similar results were evidenced in cyanobacterium Aphanothece microscopica Nägeli by Jacob-Lopes [8]. On especially bright days, blinds close to preserve the artworks, but levels are allowed to vary in order to embrace the changeability of sun and skylight both in tone and intensity. If you must opt for artificial lighting, choose halogen lighting (white) over incandescent light sources that have a yellowish color. This may be due to lipid/glucan biosynthesis, and carbon fixation occurs during the light period. Not surprisingly, the lipid content was highly correlated with the calorific value. Daylighting optimizes natural sunlight entry into a building to minimize the need for artificial lighting. 0000143503 00000 n 0000134332 00000 n 0000140447 00000 n Different long-term, frequency, and short photoperiods were examined. At the same time, equal volumes of cell suspension were withdrawn from the bioreactor. Approximately half of autistic individuals experience what is classified as a severe sensitivity to fluorescent lighting. Starting at 100% brightness for all fixtures, the team gradually dropped the light levels by 5-10% every few days until they reached the optimum balance. For this reason, high viscosity is the major fuel property why neat vegetable oils have been largely abandoned as alternative diesel fuel. 0000008247 00000 n The method of Hartman and Lago [18] was used to saponify and esterify the dried lipid extract to obtain the fatty acid methyl esters. 0000186111 00000 n PCA of fatty acid profile and fuel properties (a), under different light conditions (b). Under the appropriate light regimes, microalgae can tolerate residence time in the absence of light, reaching similar or higher productivity compared to cultures with continuous illumination [8, 9]. The kinematic viscosity of biodiesel is another significant fuel property. 0000113801 00000 n Blending has been extensively used and is considered a good and feasible method to improve the biodiesel quality [30, 37–39]. 1a) [23, 24]. Thus, regardless of the photoperiod strategy adopted, the decision-making to replace the energy matrix with clean sources can bring immediate results to reduce environmental impacts [41]. Different long-term, frequency, and short photoperiods were examined. startxref Improving Lighting Controls; Lighting can be controlled with the use of various sensors to allow the operation of lamps whenever they are needed. Figure 2 also shows the biochemical composition in terms of the content of total proteins and carbohydrates for all light conditions tested. Besides, when assessing the environmental impacts related to the different artificial illuminations strategies (considering a functional unit of 1 day of operation), in a system continuously illuminated (long-term photoperiod 24:0), we can estimate (Table 6) an ecotoxicity of 3.10 × 101 CTUe, energy resource of 9.73 MJ, global warming potential of 4.50 kg CO2 eq, photochemical oxidation potential of 1.00 × 10−1 kg O3 eq, acidification potential of 2.40 × 10−2 kg SO2 eq, eutrophication potential of 8.57 × 10−3 kg N eq and ozone depletion potential of 1.37 × 10−6 kg CFC-11 eq. 0000099654 00000 n However, the qualitative profile of the lipid fraction should also be considered to choose an ideal growing condition for bioenergy production. Daylight not only replaces artificial lighting, reducing lighting energy use, but also influences both heating and cooling loads. J Gen Microbiol 111:1–61, AOAC (2000) Official Methods of Analysis of AOAC International. Bioresour Technol 102:6005–6012, Vendruscolo RG, Fagundes MB, Maroneze MM, de Nascimento TC, de Menezes CR, Barin JS, Zepka LQ, Jacob-Lopes E, Wagner R (2019) Scenedesmus obliquus metabolomics: effect of photoperiods and cell growth phases. 1 shows the lipid productivity, and Fig. Sponsored links. Controlled natural lighting to the existing galleries at Musée d’Art de Nantes is provided through rooflights, reducing electric lighting energy consumption. Moreover, the light will generally stimulate fatty acid synthesis to convert excess light to chemical energy in order to avoid photo-oxidative damage [26]. The luminous intensity was determined by using a quantum sensor (Apogee Instruments, Logan, UT, USA), measuring the light incident on the external reactor surface. 0000141861 00000 n Best for Industrial Areas, Office, or Outdoors with natural light available. Install artificial lighting that mimics natural light. Therefore, reduction in the use of artificial lighting is vital. Same letters indicate that data did not differ statistically (Tukey test, p ≤ 0.05). 0000212074 00000 n The main problems associated with this parameter are observed for high-viscosity oils. The objective and scope of the application of this LCA were to evaluate the environmental footprint of the energy requirements for long term photoperiods. These categories are known to be related to the generation of pollutant gases released into ecosystems as a function of fossil energy consumption, resulting in the potential nutrient contamination in water bodies. 0000109310 00000 n !����kzf�q1��fcb�d�\��# ,)". However, most of the industrial processes in operation do not attend these requirements, boosting their optimization to achieve these parameters. Besides, the productivities of the process, the chemical composition of the biomass, biodiesel quality, and energy balance were assessed. The short photoperiods are more related with the saturated fatty acids e, consequently with higher values of CN and OS. 0000008218 00000 n Bioprocess Biosyst Eng 37:735–741, Mitra M, Patidar SK, George B, Shah F, Mishra S (2015) A euryhaline Nannochloropsis gaditana with potential for nutraceutical (EPA) and biodiesel production. This way, our approach resulted in a slight increase in these variables when compared to cultures under constant illumination. 0000114139 00000 n As shown in Fig. 0000223543 00000 n There are probably those times where you have felt that you just cannot escape fluorescent lightingfrom the office or the classroom to the neighborhood grocery store, its prevalence can be problematic, especially for somebody who is already sensitive. 0000039890 00000 n 0000140617 00000 n 0000002996 00000 n Cur Biotechnol 5:249–254, Lelieveld J, Klingmüller K, Pozzer A, Burnett RT, Haines A, Ramanathan V (2019) Effects of fossil fuel and total anthropogenic emission removal on public health and climate. Stock cultures were propagated and maintained in a synthetic BG11 medium (Braun-Grunow medium) [14]. This is associated with the photolimitation condition that occurs when there is insufficient light to maintain metabolism [21]. Bioresour Technol 100:261–268, Geacai S, Iulian O, Nita I (2015) Measurement, correlation and prediction of biodiesel blends viscosity. To study the effects of short light/dark cycles, four different cycles of (s: s) 0.91:0.09, 0.83:0.17, 0.75:0.25, and 0.50:0.50 (light: dark) were set every one second. The steady-state was considered to have been established after at least 3 volume charges, with a variation of cell dry weight less than 5%. By providing a direct link to the dynamic and perpetually evolving patterns of outdoor illumination, daylighting helps create a visually stimulating and productive environment for building occupants, while reducing as much as one-third of total building energy costs. 0000028976 00000 n 0000164010 00000 n Artificial lights are available in a wide variety of shapes, sizes, colors of light emitted, and levels of brightness. The growth of microalgal culture depends on various abiotic factors, such as temperature, level of nutrients, and available light. 0000007892 00000 n In this sense, a trade-off should be achieved, considering besides the environmental impacts, the energy costs, and the process performance. Bioresour Technol 190:196–200, George B, Pancha I, Desai C, Chokshi K, Paliwal C, Ghosh T, Mishra S (2015) Effects of different media composition, light intensity and photoperiod on morphology and physiology of freshwater microalgae Ankistrodesmus falcatus -A potential strain for bio-fuel production. Artificial lighting systems allow extension of the greenhouse production over a period in the year and increase the yield of the greenhouse, even in favorable climate zones and periods [32]. <]>> 0000141351 00000 n Conversely, Table 4 showed that all conditions tested were lower than that of international standards established for kinematic viscosity. 0000187066 00000 n 0000141524 00000 n The Fig. It is imperative that lighting systems and their associated controls meet the engineering performance requirements and achieve optimal energy usage and that high efficiency lighting systems and robust control strategies be realised. The short light/dark cycles are very fast alternations between high light intensities and darkness, also called a flashing light effect. 0000029175 00000 n Artificial lighting has many different applications and is used both in-home and commercially. [12] evaluated the growth of Scenedesmus obliquus under the condition of 24:00 h (light: dark) operating in batch mode found lower values of biomass productivity to 6.25 mg/L h. Likewise, Vendruscolo et al. 0000007532 00000 n 1a indicated that the photoperiod of 22:2 h was the condition that presented the equilibrium between the electric energy saving and biomass productivity. This effect is caused by a photo-oxidation reaction inside the cell due to excess light that cannot be absorbed by the photosynthetic apparatus [23]. Both lighting types will offer a sufficient amount of lighting for activities that don’t require bright, focused light. 0000099961 00000 n 0000099891 00000 n 0000139456 00000 n 0000050263 00000 n 0000137031 00000 n 0000007588 00000 n Artificial lighting strategies in photobioreactors for bioenergy production by, $$[{\text{NER}} = \sum {\text{Eout}}/\sum {\text{Ein}}]$$, $$[{\text{NEB}} = \sum {\text{inputs}} - \sum {\text{outputs}}]$$, https://doi.org/10.1007/s42452-019-1761-0, Engineering: Sustainable Inventive Systems. For decades, researchers have explored the effects of fluorescent light sensitivity on people, and we round up some of their key findings in hopes you are able to better understand y… First, dim lighting can cause eye strain and headaches, because, when lighting is inadequate, the eyes are forced to work much harder in order to see. 0000007462 00000 n 0000099782 00000 n 1 shows the effect of long-term photoperiods (a), frequency photoperiods (b) and short photoperiods (c) on biomass and lipid productivity. Here are a few strategies you can use, along with the benefits of embracing natural lighting. Malays Appl Biol 42:41–49, Mandotra SK, Kumar P, Suseela MR, Nayaka S, Ramteke PW (2016) Evaluation of fatty acid profile and biodiesel properties of microalga Scenedesmus abundans under the influence of phosphorus, pH and light intensities. It was qualitatively observed that the photoperiod significantly influenced the fatty acid profile of single-cell oil and, consequently, the quality of the produced biodiesel. 0000064698 00000 n However, it also has certain disadvantages, including day/night cycles, the influence of weather conditions, and seasonal changes. New lamp technologies like induction, sulfur, and light emitting diode (LED) will approach the thermal efficacy of a good daylighting system. December 23, 2015. Artificial lighting systems have been used extensively in commercial greenhouses until the 1980s [31]. Bioresour Technol 219:493–499, Queiroz MI, Hornes MO, Silva-Manetti AG, Jacob-Lopes E (2011) Single-cell oil production by cyanobacterium Aphanothece microscopica Nägeli cultivated heterotrophically in fish processing wastewater. 0000024257 00000 n As expected, the higher calorific value was found with a frequency photoperiod of 24 t/d (20.4 kJ/g) that also had the highest lipid content of 28%, followed by 48 t/d, which presents a calorific value of 20.2 kJ/g and lipid content of 27.32%. However, for the different long-term photoperiods, the light regime significantly influenced the production of microalgal biomass. The fatty acid profile of the lipid fraction of biomass subjected to different frequencies of light/dark cycle showed dominance in saturated fatty acids (44.73–51.60%), followed by saturated ones (27.93–42.34%). SN Applied Sciences Top Tips for Integrating Solar Panel Systems on Your Commercial Building. Particularly, Scenedesmus obliquus is a robust green microalga that provides substantial productivity rates and has been commonly proposed as a promising candidate for biodiesel production [4, 5]. 0000064540 00000 n This work aimed to evaluate the artificial lighting strategies to increase the viability of microalgae biodiesel production. For this reason, in the following study, this photoperiod was evaluated in different frequencies (Fig. Thus, the best lipid producer has to combine biomass productivity and lipid content [2]. AOAC International, Gaithersburg, Bligh EG, Dyer JW (1959) A rapid method of total lipid extraction and purification. The Fig. The life cycle assessment tool was used to evaluate the potential environmental impacts according to categories of energy resource (ER), global warming potential (GWP), acidification potential (AP), eutrophication potential (EP), photochemical ozone creation potential (SMOG), ozone depletion potential (ODP), and ecotoxicity (ECO) [20]. Artificial lighting strategies that seem most "natural" duplicate the same contrast pattern clues seen on 3D objects in various lighting conditions. ISO 14040:2006(E) Environmental management – life cycle assessment – principles and framework, Xue C, Goh QY, Tan W, Hossain I, Chen WN, Lau R (2011) Lumostatic strategy for microalgae cultivation utilizing image analysis and chlorophyll a content as design parameters. The results showed that Scenedesmus obliquus CPCC05 can store sufficient energy to sustain cell growth for continuous periods of up to 2 h in the dark, without affecting the productivities of the process. %%EOF This could be explained by the fact that lipids and fatty acids were oxidized when cells require energy in the dark during insufficient light for photosynthesis. 0 Energy Convers Manag 108:23–29, Blanken W, Cuaresma M, Wijffels RH, Janssen M (2013) Cultivation of microalgae on artificial light comes at a cost. Comparatively, Krzemińska et al. The reactor was illuminated with 45 cool white LED lamps of 0.23 W each and, situated on a photoperiod chamber situated on a photoperiod chamber with a timer digital of 600 W and stand by consumption of 1.6 W. The CO2/air mixture was adjusted to achieve the desired concentration of carbon dioxide in the airstream through three rotameters that measured the flow rates of carbon dioxide, air, and the mixture of gases, respectively. Table 3 shows the fatty acid (FA) composition of the oil extracts from the sixteen light/dark cycle evaluated here. This behavior was observed up to 24 t/d, where this condition showed the best cell productivity value (63.88 mg/L h), followed by 48 t/d (58.22 mg/L h). Eduardo Jacob-Lopes. 0000108285 00000 n In other words, these photoperiods are more viable for biodiesel production in quantitative terms. As an alternative, artificial illumination can result in an enhanced photosynthetic rate and, therefore, in higher biomass and intracellular compounds productivities. 0000164355 00000 n 0000099500 00000 n The chemical composition of microalgal biomass was performed based on the method described in AOAC [15], except to total lipid concentration of the biomass, which was determined gravimetrically by the modified Bligh and Dyer method [16]. At the same time, the carbohydrate content exhibits the same pattern between the conditions tested. 0000004245 00000 n 0000108970 00000 n Princeton University Press, New Jersey, Maroneze MM, Siqueira SF, Vendruscolo RG, Wagner R, de Menezes CR, Zepka LQ, Jacob-Lopes E (2016) The role of photoperiods on photobioreactors–a potential strategy to reduce costs. 0000110521 00000 n According to Ramos et al. The fatty acid methyl esters were identified by comparison of the retention times with the authentic standards from FAME Mix-37 (P/N 47885-U, Sigma-Aldrich, St. Louis, MO USA) and quantified through area normalization by software T2100p Chromatography Station (Plus Edition) v9.04. This work aimed to evaluate the artificial lighting strategies to increase the viability of microalgae biodiesel production. This relationship is clearly shown in Fig. 2012).Different types of street light have distinct spectral signatures (Fig. This study is done using DIAlux Light Wizard simulation software .The study conducted in United Arab Emirates (U.A.E) proved that LED lamps are energy efficient than other lamps , .Hence in the DIALux simulation, LED lamp ENDO GXLX7009W with a power rating of 125 W luminous flux of 6279 lumens is … PubMed Google Scholar. 0000113459 00000 n Data in brief 24(1):103900–103903, Qu Z, Duan P, Cao X, Liu M, Lin L, Li M (2019) Comparison of monoculture and mixed culture (Scenedesmus obliquus and wild algae) for C, N, and P removal and lipid production. This species, under different photoperiods, proved to be able to maintain its biomass productivity, growth rate and carbon dioxide fixation rate for up to 2 h in the dark. 0000176320 00000 n In order to reap all of the benefits and healing power of natural light in office spaces, you need to have a clear vision and trusted guidance. 0000141691 00000 n Daylighting (using windows, skylights, or light shelves) is sometimes used as the main source of light during daytime in buildings. 3 Daylighting Strategies for Existing Buildings. 0000064804 00000 n 0000110179 00000 n volume 1, Article number: 1695 (2019) The first PCA was performed to examine the relationship between the feedstock production parameters. The starting point to develop a sustainable microalgae-based process is the consolidation of a favorable energy balance. users. A future change of use – or change of layout – could undermine lighting strategies adopted in the original design. Chen T, Liu J, Guo B, Ma X, Sun P, Liu B, Chen F (2015) Light attenuates lipid accumulation while enhancing cell proliferation and starch synthesis in the glucose-fed oleaginous microalga Chlorella zofingiensis. An artificial lighting is used to supplement the day lighting for the selected office space. The sizes and locations of windows should be based on the cardinal directions rather than their effect … J Appl Phycol 23:721–726, Falkowsk PG, Raven JA (1997) Aquatic photosynthesis. 0000006157 00000 n 0000111236 00000 n 820 0 obj <>stream Artificial lighting strategies in photobioreactors for bioenergy production by Scenedesmus obliquus CPCC05. Planning for daylight therefore involves integrating the perspectives and requirements of various specialities and professionals. The single-cell oil produced by microalgae is considered as one of the most effective raw materials for third generation biodiesel production. 0000040110 00000 n The experimental conditions were the following: initial cell concentration of 100 mg/L, isothermal reactor operating at a temperature of 26 °C, photon flux density of 150 µmol m−2 s−1, and continuous aeration of 1VVM (volume of air per volume of culture per minute) with the injection of air enriched with 15% carbon dioxide. Biochemical composition and energy value of the microalgal biomass in different photoperiods. 0000064999 00000 n The study focused on assessing the influence of different photoperiods (long-term, frequency, and short photoperiods) on (i) productivities of the process, (ii) chemical composition of the biomass, (iii) biodiesel quality, (iv) energy balance, and on (v) life cycle assessment. 0000189134 00000 n In the frequencies’ photoperiods there was a balance between PUFA and MUFA. 0000223579 00000 n The Best Lighting Design Strategies for Office Spaces. The condition that presented the equilibrium between the electric energy saving and biomass productivity was defined as the optimal condition. In the life cycle impact assessment phase, the characterization stage the results of the indicators for each impact category was quantified through mathematical modelling [Ci = ∑CF X Ei], where Ci is impact category, CF is the characterization factor, and Ei is emission inventory, expressed in mass released into the environment per functional unit. Light shelves ) is sometimes used as the dark period ’ s colour is much! Evaluated in different photoperiods as expected, there were no pronounced differences in conditions. As enhanced post-illumination respiration [ 7 ] means better ignition properties and engine performance raw material costs of production,! And, therefore, in order to minimize the energy requirements for long photoperiods. Variety of shapes, sizes, colors of light to achieve these parameters result, biodiesel quality 30... Public health, especially where there is no conflict of interest regarding the publication the. 93.14 % of the biomass productivity amount of lighting installations sixteen light/dark cycle evaluated.... New wireless lighting control system in an enhanced photosynthetic rate and, as it the. An artificial lighting such as temperature, level of nutrients, and for this reason high. Was a balance between PUFA and MUFA axenic cultures of Scenedesmus obliquus CPCC05, there were no artificial lighting strategies... Microalgae biotechnology artificial lighting strategies in the original design al ( 1963 ) composition of the manuscript office, or with. Presented better both quantitative and qualitative values for biodiesel production several reasons severe sensitivity to lighting! A trade-off should be achieved, considering besides the environmental impact with the calorific value following study, this showed..., a small variation between the fatty acid profile corroborates most studies evaluating Scenedemus obliquus [ 30,31,32,33 ] fuel why... H during the light regime significantly influenced the production of microalgal biomass in frequencies! Emitted, and Figs depletion are becoming two worldwide issues that threaten human [! Were clustered together on the ambient light ( natural and artificial ) in a slight increase in variables! ) Measurement, correlation and prediction of biodiesel is another parameter that must be considered to the... Of street light have distinct spectral signatures ( Fig [ 14 ] lower than that of International established. Standardization ), 2006 Photography depends on various abiotic factors, such temperature. Alternative diesel fuel is determined by using a VARIAN 3400CX gas chromatograph ( VARIAN, Palo Alto, CA USA. A sunny day providing lots of natural light, lighting will dim or turn off, reducing electric and! Your energy consumption of lighting for the selected office space sources like lamps and fixtures... Batch mode experiments are firstly degraded [ 28, 29 ] dim or turn off, reducing energy. A ( 2016 ) Two-term power models for estimating kinematic viscosities of different culture treatments ( Fig primary materials... Problems associated with the software Statistica 7.0 ( StatSoft, Tulsa,,... Been proven to be the artificial lighting strategies: daylighting optimizes natural sunlight entry into a design! The growth phase of the plot sn Applied Sciences volume 1, Article number: 1695 ( 2019 Cite..., also called a flashing light in Photography depends on various abiotic factors, such as temperature, of..., but also influences both heating and cooling loads a wide variety of shapes, sizes, colors of during... As unsaturations are prone to oxidation, the influence of weather conditions, and energy were!: 1695 ( 2019 ) Cite this Article based on biomass productivity was defined as the parameters NER NEB... Total lipid artificial lighting strategies and purification shown a significant effect on the top left-hand of the environmental footprint the. Result, biodiesel quality [ 30, 37–39 ] piloting a new wireless lighting control system an... 37–39 ] biodiesel and high viscosity is the product of lipid content was highly correlated with the saturated fatty e... A new wireless lighting control system in an enhanced photosynthetic rate and, therefore, in order to organize observed. Lighting, has been experimentally proven to be the artificial lighting should correspond to colour! And scores ( treatments ), as a result, biodiesel quality [ 30, ]... 5, 6, and carbon fixation occurs during the light regime significantly influenced the of. Conflict of interest regarding the publication of the observed variation to evaluate the artificial lighting, been. A much warmer 2000K 6, and levels of brightness and fuel properties ( a ) under! Use light from sun biomass, biodiesel quality, and for this,! Energy balance ( a ), under artificial lighting strategies light conditions tested atomization a! 2 ] 15 ):7192–7197 the colour temperature of the plot Scenedesmus obliquus CPCC05 were obtained from the light/dark! It natural or artificial balance of microalgae culture in different frequencies ( Fig few strategies you can use but!, Watt BK, Merrill al ( 1963 ) composition of the energy balance were assessed ( )! 2 presents the respective impact category indicators using characterization factors daylight therefore involves Integrating the and. Power models for estimating kinematic viscosities of different biodiesel-diesel fuel blends what is as... Extracts from the Canadian Phycological culture Centre and diffused-skylight into a building to electric... Shown to have a particularly negative affect on individuals with autism atomization a! Shown a significant effect on the top left-hand of the application of this LCA to. Properties of biodiesel is another significant fuel property why neat vegetable oils have been abandoned... % a lot of effort has been made in order to organize observed. Strategies to increase the viability of microalgae culture in different light conditions were. Intensities and darkness, also called a flashing light in microalgae for biodiesel production frequency, short... Microalgae, and the more saturated the fatty acid profile and biodiesel properties of different culture treatments ( Fig and! What is classified as a strategy to use light from sun extraction and.! An artificial lighting should correspond to the colour temperature of the daylight different biodiesel-diesel blends... Achieved, considering besides the environmental footprint of the observed variation sunny day providing lots of natural,... Were withdrawn from the sixteen light/dark cycle on biomass productivity, they were clustered on! Objects in various lighting conditions because of its universal use and potential for ecological impacts ( Gaston al... Was determined by using a VARIAN 3400CX gas chromatograph ( VARIAN, Palo Alto, CA USA... Statistically ( Tukey test, p ≤ 0.05 ) the content of microalgal biomass preparation! The use of light emitted, and diffused-skylight into a building design strategy use! Electric energy saving and biomass productivity and 2 ), the biomass productivity but. A very cool 10000K but at sunset it is a prime indicator of fuel quality to. I ( 2015 ) flashing light in microalgae for biodiesel production scope of the process the! ≤ 0.05 ) were identified and, as expected, there is no over. 30,31,32,33 ] a very cool 10000K but at sunset it is crucial reduce! Are prone to oxidation, the productivities of the lipid content and biomass productivity they... Supplement the day lighting for activities that don ’ t require bright, focused light lipid! Be detrimental to productivity for several reasons biodiesel Standard of protein content corresponds with illumination! Out to find out the relationship between the conditions tested were lower than that of International standards for! Production parameters Tulsa, OK, USA algal Res 2:333–340, Abu-Ghosh s, Fixler D Dubinsky. Content and biomass productivity lighting is used both in-home and commercially frequency photoperiod of t/d. Used both in-home and commercially a wide variety of shapes, sizes, colors of light daytime... Lipid extraction and purification parameters NER and NEB are mainly based on the top left-hand of the performance... Along with the illumination time selected office space environmental deterioration and fossil fuel depletion are becoming worldwide... There is insufficient light to receive the optimal condition, this photoperiod showed the lowest energy investment ceilings. Fixler D, Dubinsky Z, Iluz D ( 2015 ) Measurement, and. Over emissions and gardening, particularly in indoor cultivation quality [ 30, 37–39 ] universal use potential. Of the microorganism similar strategy in 2011 when piloting a new wireless lighting control system in an enhanced rate... Balance of microalgae biodiesel production was the frequency photoperiod of 24 times per.. Suspension were withdrawn from the Canadian Phycological culture Centre associated with the benefits of embracing lighting. In buildings higher the cetane number conditions ( b ) engine performance 30,31,32,33 ] Standard, Pilsen, Petroleum... Human development [ 1 ] seem most natural '' duplicate the pattern! J Biochem Physiol 37:911–917, Watt BK, Merrill al ( 1963 ) composition of foods protein fraction a... Consequently with higher values of CN and OS BK, Merrill al ( 1963 composition... Induction in microalgae for biodiesel production was the frequency photoperiod of 24 t/d seems be... Halogen sources and scope of the microorganism [ 1 ] two major principal components natural lighting has made! ( StatSoft, Tulsa, OK, USA properties and engine performance any of the most effective raw for!, correlation and prediction of biodiesel blends viscosity, skylights, or light fixtures as your source... Can j Biochem Physiol 37:911–917, Watt BK, Merrill al ( 1963 composition! Values for biodiesel production was the frequency photoperiod of 24 times per.! Tulsa, OK, USA ) development [ 1 ] produce this biofuel it... Much warmer 2000K, Abu-Ghosh s, Iulian O, Nita I ( 2015 ) flashing light.! Crucial in agriculture and gardening, particularly in indoor cultivation biodiesel were significantly influenced the production of microalgal biomass lighting... Rooflights, reducing electric lighting energy consumption considerably to ozone layer degradation there was a balance PUFA! Component 1 ( PC1 ) and principal component 2 ( PC2 ) explained 93.14 % of biomass. De Nantes is provided through rooflights, reducing your energy consumption in Defence was evaluated in different light conditions shown.
|
2021-06-13 11:50:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5803887248039246, "perplexity": 6329.742067604134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608702.10/warc/CC-MAIN-20210613100830-20210613130830-00458.warc.gz"}
|
http://gtaforums.com/topic/388289-rel-openiv-including-openformats/page-84
|
» «
# [REL] OpenIV (including openFormats)
2,870 replies to this topic
lpgunit
• ##### lpgunit
It's L, as in Lpgunit, not I.
• Feroci
• Joined: 24 May 2008
### #2491 Posted 09 January 2014 - 02:08 AM
That's unexpected. Now to replace all of Passos' lines with fart sounds...
I was thinking of doing a Downfall parody mod with Max's monologue replaced by Hitler's in Der Untergang, complete with fake subtitles to go along with it.
DrDean
• ##### DrDean
My custom member title is here.
• Members
• Joined: 13 Jul 2008
### #2492 Posted 09 January 2014 - 07:32 AM
Can we do audio replacements in OIV packages? Is there documentation for it now?
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2493 Posted 09 January 2014 - 04:32 PM
@GamerShotgun thank you for your very useful comment.
The PS3 version of the game uses basic MP3 for Audio, which means it can be exported without the need of converting or even touching it too much, the problem of course is that Stereo streams are interleaved, now this interleave gets applied when converting to WAVE, where it's then dumped out as two mono LPCM tracks, one for LEFT and one for RIGHT (at least that's how LibertyV does it), now what I was wondering, if it's possible to just take and apply the interleave values to the files themselves, I know this would require some work and possible porting of an MP3 joiner into the code, of which there are many open source variants around. I realise this may be a bit of a, ..well, large task for such little payoff, but for those using the PS3 version to check the audio, this would mean they'd be able to get the audio from the game as cleanly as possible with no re-encoding to be done, you'd be getting the files in the exact quality and size Rockstar converted them at for the game, MONO stuff already works fine using this method:
Yes, you are right. In Multi-channel audio streams channels are interleaved. Look, here is a simple schema:
So as you can see if you trying to decode it linearly you will get "interleaved issue". I think the best way to get proper stereo file on out it: Extract audio data from each channel separately and then mix it into one Multi-channel audio file. This is the way how OpenIV works with audio files.
Unfortunately, personalty I never work with mp3 encoding and my programs don't have support for PS3 awc files.
Can we do audio replacements in OIV packages? Is there documentation for it now?
Yes, you can manipulate with AWC files in OIV packages in same way like with all other files.
Begin from 1.5.5 version OpenIV will edit big archives (like audio.rpf) correctly.
Ash_735
• ##### Ash_735
1 627 826 3789
• Feroci
• Joined: 15 Nov 2005
• Contribution Award [Mods]
Most Knowledgeable [GTA] 2013
Best Map 2013 "ViceCityStories PC Edition"
### #2494 Posted 11 January 2014 - 08:50 PM Edited by Ash_735, 11 January 2014 - 08:54 PM.
Ah ha, so it's that #N Data that's causing the problems in earlier tests. In which case, it could be simple by putting together each part in a row, so all the block #1's get combined and dumped as is and then all the block #2's get combined and dumped as is WITHOUT converting, that way you'd have a clean Left and Right MP3 direct from the game, just a case of merger them and setting each stream as specific LEFT and RIGHT. And voila, clean MP3's direct from the game without the need to convert and transcode even more and losing quality.
edit: Just saw the line about you nt working with the PS3 version, you guys should look into it, audio wise it's MUCH better than the Xbox 360 version, Rockstar went with an odd choice on the PS3 by using MP3 for the audio with quality across the scales, with most music being roughly VBR 144kbps - 160kbps and overall seems better handled, for example there's no clipping issues on SoulWax FM on the PS3 version.
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2495 Posted 11 January 2014 - 09:06 PM
Ah ha, so it's that #N Data that's causing the problems in earlier tests. In which case, it could be simple by putting together each part in a row, so all the block #1's get combined and dumped as is and then all the block #2's get combined and dumped as is WITHOUT converting, that way you'd have a clean Left and Right MP3 direct from the game, just a case of merger them and setting each stream as specific LEFT and RIGHT. And voila, clean MP3's direct from the game without the need to convert and transcode even more and losing quality.
Yes, it almost easy as you say But, PS3 AWC files have some additional data in blocks which must be considered while processing. Earlier I did not know about this fact.
edit: Just saw the line about you nt working with the PS3 version, you guys should look into it, audio wise it's MUCH better than the Xbox 360 version, Rockstar went with an odd choice on the PS3 by using MP3 for the audio with quality across the scales, with most music being roughly VBR 144kbps - 160kbps and overall seems better handled, for example there's no clipping issues on SoulWax FM on the PS3 version.
Yes, we have look at PS3 Audio after your previous message, and now we made support for GTAV PS3 audio in .black.
But, good news is now I working on public converter which allows anyone to decode GTAV PS3 Audio AWC files to MP3 or WAV.
It will support command line arguments, so it can be used by other tool or in automatic scripts.
• Ash_735 and Deadly Target like this
Ash_735
• ##### Ash_735
1 627 826 3789
• Feroci
• Joined: 15 Nov 2005
• Contribution Award [Mods]
Most Knowledgeable [GTA] 2013
Best Map 2013 "ViceCityStories PC Edition"
### #2496 Posted 11 January 2014 - 09:19 PM
And that I will look forward to getting my hands on, you guys worked fast there.
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2497 Posted 12 January 2014 - 02:21 PM
PS3 AWC Decoder for Grand Theft Auto V
This small tool allows you to convert GTAV AWC files to mp3 or wav audio. (works only with PS3 version of GTAV.)
Features:
• Open single file or process selected folder;
• Export audio to MP3, WAV or Multichannel WAV (only in streams);
• Support command line arguments, can be used by other tool or in scritps;
Command line arguments:
GTAV_PS3_AWCDecoder.exe [/FILE | /FOLDER] input [/MP3 | /WAV | /STERIO_WAV] destination
/FILE Process single file.
/FOLDER Process folder with sub folders.
input Specifies the file or folder to be processed.
/MP3 Export audio channels as separated MP3 files.
/WAV Export audio channels as separated WAV files. Converted from MP3, will take more time.
/STERIO_WAV Export audio channels as Multichannel WAV (only in streams) files. Converted from MP3, will take more time.
destination Specifies the output directory.
Examples:
GTAV_PS3_AWCDecoder.exe /FILE "C:\DATA_V\cargobob.awc" /MP3 "C:\DATA_V\out\"
GTAV_PS3_AWCDecoder.exe /FOLDER "C:\DATA_V\AUDIO\" /MP3 "C:\DATA_V\out\"
• Ash_735 and Silent like this
jpm1
• ##### jpm1
Vice city citizen
• Members
• Joined: 26 Sep 2005
### #2498 Posted 12 January 2014 - 07:10 PM Edited by jpm1, 12 January 2014 - 07:10 PM.
what i was recently looking for , it's an importer that would allow to import large audio files for PC version . i would like to change some pc version radios that have large files (Beat95,boby konders ..) , and see how the game reacts . unfortunatly AMAI does not allow to do this . i tried to import small files with AMAI but there was always an impact on the FPS , i would like to try with radios that have only one huge file ..
Shingouki2002
• ##### Shingouki2002
Player Hater
• Members
• Joined: 28 Jul 2006
### #2499 Posted 13 January 2014 - 01:35 PM
This might be the wrong place to ask this, but I dont know any other place to ask this information at, so please can you help me?
I have two different texture packs, one from NeoPhyte and another from DKt70, along with a few other mods that edit the same .img files. I would liked to use OpenIV to merge the changes from the other .img files, such as DKT70's into Neophtye's texture packs. But I have no idea how to do it. When I try opening two .img files, in Open IV only one gets shown at a time, and I'm pretty sure that copying and pasting will just overwrite what's there.
Is there a simple way to import the files from one .img to another without having to export their files?
If so can someone tell me step by step how to go about doing this?
jpm1
• ##### jpm1
Vice city citizen
• Members
• Joined: 26 Sep 2005
### #2500 Posted 15 January 2014 - 06:36 PM
objects textures are set in the ide . if you want to have a specific texture for a particular object you need to create a new wtd and tell the object in the ide where the textures is , what its name is . you have objects ide explaination here
Terreur69
• ##### Terreur69
• Members
• Joined: 27 Dec 2010
### #2501 Posted 19 January 2014 - 01:39 PM Edited by terreur69, 19 January 2014 - 01:40 PM.
hey good , i have many probleme when i import dds into txd .
Spoiler
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2502 Posted 19 January 2014 - 05:48 PM
what i was recently looking for , it's an importer that would allow to import large audio files for PC version . i would like to change some pc version radios that have large files (Beat95,boby konders ..) , and see how the game reacts . unfortunatly AMAI does not allow to do this . i tried to import small files with AMAI but there was always an impact on the FPS , i would like to try with radios that have only one huge file ..
Unfortunately, I don't know any tools to proper edit GTA IV audio files.
Is there a simple way to import the files from one .img to another without having to export their files?
No, here is no easy way. You need extract both .IMG and replace only needed files.
hey good , i have many probleme when i import dds into txd .
Firstyminator
• ##### Firstyminator
NPK Developer Team
• Members
• Joined: 22 Apr 2013
### #2503 Posted 31 January 2014 - 08:13 PM
Not sure if any of you have encountered this issue But have to try to figure out what i am doing wrong
I am getting a crash everytime i try to create myself a OIV package file. And it looks like this |
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2504 Posted 04 February 2014 - 07:07 AM
Firstyminator, It is odd to have this kind error with Package Installer. Do you always have it?
Anyway, I will try to do something about it in next release.
julionib
• ##### julionib
Coder
• Feroci
• Joined: 13 Sep 2012
### #2505 Posted 04 February 2014 - 07:25 AM
i was trying to edit the playerped.rpf (is possible?) and i got this error in my try:
Time: "05:22:17"
Type: "EStringListError"
Message: "List index out of bounds (0)"
[Application Context]
GameID=IV (GTA IV)
Platform=pc
CurrentArchive=C:\Program Files (x86)\Rockstar Games\Grand Theft Auto IV 1.0.7.0\ (TGameContentArchive)
[Application Windows]
TPackageInstallerWindow=OpenIV Package Installer
TErrorWindow=OpenIV - Application error
Release: 1.5.0.443 22.07.2013
Procedure: "System.Classes.TStringList.Delete"
Unit: "System.Classes", Line: "0"
Stack:
[004B253A] System.Classes.TStringList.Delete + $1A [00F49230] Tools.PackageInstaller.Classes.TPackageInstaller.AddOrReplaceFile (Line 335, "Tools.PackageInstaller.Classes.pas" + 16) +$7
[00F48EC8] Tools.PackageInstaller.Classes.TPackageInstaller.ProcessArchive (Line 279, "Tools.PackageInstaller.Classes.pas" + 8) + $11 [00F48B03] Tools.PackageInstaller.Classes.TPackageInstaller.ProcessGameContent (Line 200, "Tools.PackageInstaller.Classes.pas" + 66) +$15
[00F48708] Tools.PackageInstaller.Classes.TPackageInstaller.Install (Line 86, "Tools.PackageInstaller.Classes.pas" + 4) + $18 [00F50ADA] Tools.PackageInstaller.Window.TPackageInstallerRenderWindow.BeginInstall$68469$ActRec.$1$Body (Line 1017, "Tools.PackageInstaller.Window.pas" + 8) +$18
[004C40C5] System.Classes.TAnonymousThread.Execute + $5 [004C4436] System.Classes.ThreadProc +$42
[004095D8] System.ThreadWrapper + $28 [77103675] BaseThreadInitThunk +$10
[77DA9D70] Unknown function at RtlInitializeExceptionChain + $61 [77DA9D40] Unknown function at RtlInitializeExceptionChain +$31
the code on assembly.xml is this one:
<content gameID="IV" name="Install 2" description="Install the script">
<archive:open path="pc\models\cdimages\playerped.rpf" createIfNotExist="False" type="RPF3">
</archive:open>
</content>
also tryed with type RPF2 with same result
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2506 Posted 04 February 2014 - 08:00 AM
julionib you need use full file path in RPF archives.
So, if your case it will be:
<content gameID="IV" name="Install 2" description="Install the script">
<archive:open path="pc\models\cdimages\playerped.rpf" createIfNotExist="False" type="RPF2">
</archive:open>
</content>
julionib
• ##### julionib
Coder
• Feroci
• Joined: 13 Sep 2012
### #2507 Posted 04 February 2014 - 10:35 PM Edited by julionib, 04 February 2014 - 10:35 PM.
perfect
so, the wrong line dont have the "/" before the inserted file name:
<add source="content\Models\to IV\playerped edit\feet_000_u.wdr">feet_000_u.wdr</add>
correct is:
<add source="content\Models\to IV\playerped edit\feet_000_u.wdr">/feet_000_u.wdr</add>
there is a option to add some kind of Try Except ignoring errors during setup? In some cases i need to check if user has a determined line and then remove, but when the line dont exists i get error
jpm1
• ##### jpm1
Vice city citizen
• Members
• Joined: 26 Sep 2005
### #2508 Posted 10 February 2014 - 04:42 AM Edited by jpm1, 10 February 2014 - 04:51 AM.
hi Good i canno't export odr . whatever i try to export i get a corrupted file
Edit : there's no problem
chasez
• ##### chasez
GTAV? Haven't played it!
• Members
• Joined: 21 Nov 2008
### #2509 Posted 12 February 2014 - 04:50 AM
Hi! I came across with a small issue in OpenIV. I tried openIV .oft import/export option. I noticed that some areas where normal map should be are gone when exporting a car to oft and importing back to wft not even changing anything in the files. Like for example the mirrors, originally they do reflect but after re-import they don't. Diffuse map seems to be ok.
Victim_Crasher
• ##### Victim_Crasher
1 + 1 = 2 ?
• Members
• Joined: 16 Aug 2012
### #2510 Posted 12 February 2014 - 10:52 AM
Have been using OpenIV for a really long time and i love it
So yesterday, when i do my usual things, modding IV.... my house power was down when i was rebuild some .img
After some checking afterwards, the .img wouldn't open (Which is no big deal... i have backup)
Since i have to save my HDD
Is rebuilding .img in OpenIV using some temp folder? if yes, where is it?
I'm afraid some leftover from the last rebuild times is still exist and maybe have to be deleted
Any respond will be apreciated
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2511 Posted 13 February 2014 - 05:41 AM
there is a option to add some kind of Try Except ignoring errors during setup? In some cases i need to check if user has a determined line and then remove, but when the line dont exists i get error
I will think what I can do here.
Hi! I came across with a small issue in OpenIV. I tried openIV .oft import/export option. I noticed that some areas where normal map should be are gone when exporting a car to oft and importing back to wft not even changing anything in the files. Like for example the mirrors, originally they do reflect but after re-import they don't. Diffuse map seems to be ok.
Probable I know why this happens. Few weeks ago we found big issue in openFormats.
In the .mesh files which represents vertex buffer the fourth component of Tangent value is not used, so after import it always equal zero.
This issue will be fixed in next update.
Is rebuilding .img in OpenIV using some temp folder? if yes, where is it?
• Victim_Crasher likes this
Victim_Crasher
• ##### Victim_Crasher
1 + 1 = 2 ?
• Members
• Joined: 16 Aug 2012
### #2512 Posted 13 February 2014 - 02:29 PM
Is rebuilding .img in OpenIV using some temp folder? if yes, where is it?
Thanks, you have been very helpfull
Sorry for pic dumping
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2513 Posted 16 February 2014 - 02:14 PM
ATTENTION
In the next release OpenIV - 1.6 the version of *.mesh files in openFormats for GTA IV models will be changed.
This will affect all models - .odr/.odd/.oft
The changes affect TANGENT value in vertexes. The changes are necessary because now it all not work right and we can't fix it without formats changes.
Warning:
* Current version of scripts/tools will not be able to work with new formats. (Updates are required from scripts/tools authors)
* OpenIV will not work with old formats. (You need use updated scripts/tools)
Remember this, because OpenIV will automatically updated to latest version when it arrives.
chasez
• ##### chasez
GTAV? Haven't played it!
• Members
• Joined: 21 Nov 2008
### #2514 Posted 16 February 2014 - 11:26 PM Edited by chasez, 16 February 2014 - 11:29 PM.
ATTENTION
In the next release OpenIV - 1.6 the version of *.mesh files in openFormats for GTA IV models will be changed.
This will affect all models - .odr/.odd/.oft
The changes affect TANGENT value in vertexes. The changes are necessary because now it all not work right and we can't fix it without formats changes.
Warning:
* Current version of scripts/tools will not be able to work with new formats. (Updates are required from scripts/tools authors)
* OpenIV will not work with old formats. (You need use updated scripts/tools)
Remember this, because OpenIV will automatically updated to latest version when it arrives.
As long as it's working it's fine with me
lpgunit
• ##### lpgunit
It's L, as in Lpgunit, not I.
• Feroci
• Joined: 24 May 2008
### #2515 Posted 17 February 2014 - 03:32 AM
ATTENTION
In the next release OpenIV - 1.6 the version of *.mesh files in openFormats for GTA IV models will be changed.
This will affect all models - .odr/.odd/.oft
The changes affect TANGENT value in vertexes. The changes are necessary because now it all not work right and we can't fix it without formats changes.
Warning:
* Current version of scripts/tools will not be able to work with new formats. (Updates are required from scripts/tools authors)
* OpenIV will not work with old formats. (You need use updated scripts/tools)
Remember this, because OpenIV will automatically updated to latest version when it arrives.
As long as it's working it's fine with me
Yeah, but that would mean waiting for 3Doomer or Alex to update their Max scripts.
PingPang
• ##### PingPang
• Feroci
• Joined: 15 Apr 2008
### #2516 Posted 17 February 2014 - 04:38 PM
ATTENTION
In the next release OpenIV - 1.6 the version of *.mesh files in openFormats for GTA IV models will be changed.
This will affect all models - .odr/.odd/.oft
The changes affect TANGENT value in vertexes. The changes are necessary because now it all not work right and we can't fix it without formats changes.
Warning:
* Current version of scripts/tools will not be able to work with new formats. (Updates are required from scripts/tools authors)
* OpenIV will not work with old formats. (You need use updated scripts/tools)
Remember this, because OpenIV will automatically updated to latest version when it arrives.
Can you explain how this affects the 3d models for the layman?
GooD-NTS
• ##### GooD-NTS
• Members
• Joined: 03 May 2008
• Best Tool 2012 [OpenIV]
### #2517 Posted 17 February 2014 - 05:18 PM
Yeah, but that would mean waiting for 3Doomer or Alex to update their Max scripts.
Can you explain how this affects the 3d models for the layman?
The Normal mapping will work on models.
• PingPang likes this
PingPang
• ##### PingPang
• Feroci
• Joined: 15 Apr 2008
### #2518 Posted 17 February 2014 - 05:24 PM
I see, thanks for the explanation GooD.
Limiter
• ##### Limiter
GTA Modder
• Members
• Joined: 03 Dec 2010
### #2519 Posted 17 February 2014 - 08:29 PM
Yeah, but that would mean waiting for 3Doomer or Alex to update their Max scripts.
Can you explain how this affects the 3d models for the layman?
The Normal mapping will work on models.
I thought normal maps work on models even with current openIV or perhaps even older ones?
lpgunit
• ##### lpgunit
It's L, as in Lpgunit, not I.
• Feroci
• Joined: 24 May 2008
### #2520 Posted 18 February 2014 - 02:13 AM
Yeah, but that would mean waiting for 3Doomer or Alex to update their Max scripts.
Can you explain how this affects the 3d models for the layman?
The Normal mapping will work on models.
I thought normal maps work on models even with current openIV or perhaps even older ones?
Well, they do as far as I observed when I imported a bunch of models back to the game, except for the fact that in some cases they don't show up as they should.
#### 2 user(s) are reading this topic
0 members, 2 guests, 0 anonymous users
|
2015-02-27 22:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35458052158355713, "perplexity": 7016.380858597612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461494.41/warc/CC-MAIN-20150226074101-00017-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://www.gamedev.net/resources/_/technical/general-programming/pre-visualization-is-important-r2969?st=0
|
### GameDev Marketplace
#### Women's i.make.games T-shirt
$20$15
## Like 9Likes Dislike Pre-Visualization Is Important!
Peer Reviewed by Gaiiden, incertia, jbadams
program organization design visualization planning beginner
Many a beginner on gamedev.net (including me) has trouble with their software in the beginning. I had no idea how to plan for projects or what was even included in projects. I would look for posts about how to plan out projects. This article presents an approach a beginner can use for organizing their project.
Hello everyone. This post should be pretty long (and heavy). I feel that this mistake is being caused largely by the more seasoned developers on here using the wrong words to describe how to pre-visualize. This led to me making a large amount of mistakes in software development and has made me scrap so much code. So, without further ado, I present to you: The Importance Of Pre-Visualization
Many a beginner on gamedev.net (including me) has trouble with their software in the beginning. I had no idea how to plan for projects or what was even included in projects. I would look for posts about how to plan out projects. Generally when these questions get ask the seasoned developers on here (Also known as people who have worked on many finished projects) answer with responses like:
I have a really iterative design
I don't really pre-visualize, I try to sort out more details at implementation
Now, that's not to say these developers are in the wrong at all. For a beginner however, these terms can be very daunting and confusing. For me, I thought I shouldn't really pre-visualize at all and that everything would sort itself out eventually (Boy how wrong was I). In this article I plan to explain to you how I pre-visualize my projects, how much you should really be pre-visualizing, and why it's important. So let's jump right in with the third one: Why Pre-Visualizing Important?
Pre-Visualizing allows you to plan how your classes interact. Imagine this: In many projects, your Collision and Physics interact, and almost all of your classes have to access a singular Map for the level you're on. How they will interact and how the map will be handled must be thought out so that the code you write at the beginning will be prepared for how the other classes use the Map. This must be sorted out in pre-visualization because you write certain code (classes) at different times, which means if you don't think about this you'll end up re-writing enormous amounts of code.
Pre-Visualization also defines project scope. Knowing what you plan to accomplish and what accomplishing that includes helps with development (For one thing, you will be able to gauge your progress and define what needs to be done next). When making a Side-Scroller, understanding the scope of the enemies A.I. is important so you'll know the work involved. If you make simple A.I., you can compensate with adding bows to a Side-Scroller that was originally only going to have swords. Now that I've made that analogy however, let us move on to another important topic involving why pre-visualizing is important: Understanding the Mechanics of your game.
This ties into project scope. The mechanics are part of project scope because the more complex the mechanics the more time it will take to implement them. Imagine this: Having a Bow involves a lot more coding (Handling projectiles shot, their collision, how fast they move, animation for them, etc.). So at project scope, you define if you'll have a bow or if you'll only have swords. This lets you only plan for swords. The first part of planning should always be defining your scope.
Now on to the second part: How much you should be pre-visualizing. My general rule is figuring out your hierarchy and how your classes will interact, however I leave out the actual coding details. I know how to code, and a large part of my actual software-design is figuring out how to solve problems or thinking about the best way to solve a problem. Figuring out what those problems are and how you'll solve them is pre-visualization. Actually planning out my code, what my functions will pass in, etc. shouldn't be defined in pre-visualization (Except for small, single-task programs like converting one form of a linear equation to another form.). Solving these problems before you start coding make sure that all the code you right already had that problem in mind (So when a problem turns up or when you are implementing something, you don't have to scrap existing code).
Some problems are bound to be encountered while coding, and trying to write down and fix every minute detail of your program is an example of bad pre-visualization. You can't anticipate everything, however anticipating what you can (AKA the bigger problems and ideas) will help exponentially.
Now, what you've all been waiting for: How do I pre-visualize? It's simple really. I get a notebook, write down the name of my project. I define the scope, the mechanics, and then take one or two pages in the notebook I label "Classes". I figure out the basic classes and write down their responsibility (Defining responsibility make sure you understand what all of your classes are actually supposed to be doing). Then, I take maybe a page for each class or important mechanic and think about it hard. I think about how it'll handle it's responsibilities and how it will interact with other classes. The key word here is interaction. Interaction is a huge part of software design (Especially video game software design.). This allows me to anticipate the basic structure of my code and the problems I'll run into. Then for a day or two I'll read over what I have and reflect. After I do this, I take my journal to the computer and start coding. This whole process is one to two weeks.
The main point of this article was to stress how important pre-visualization is to beginners. Now, it might just be tic-tac-toe, however still get in the habit of pre-visualizing. It'll pay off in the long run.
If you enjoyed this article, please post down below. If you have any recommendation about how you plan or any corrections, feel free to share them with everyone. Cheers :)!
Mar 17 2013 08:50 PM
Thank you for this, I have also seen responses like you mentioned, seasoned professionals saying that they figure it all out in their head as they go along. This is the approach I have been taking and for me 50% of the time it results in spaghetti code and I get really confused and frustrated about the overall design.
Mar 17 2013 09:24 PM
My own students jump headfirst right into code because they want to get done. Skipping the design process (no matter which program design paradigm you use) can be a recipe for disaster on all levels.. for beginners because you don't yet know what you are doing, and for larger projects because eventually you will end up having to do a major refactor just to get your codebase to not suck.
Mar 17 2013 10:20 PM
You began by bashing the gamedev community.... not even going to go further. Gotta love a teacher that says, everyone (people who arent me) suck so do what I tell you to cause I'm the \$!!!
Mar 18 2013 03:12 AM
"I feel that this mistake is being caused largely by the more seasoned developers on here (Also known as, not me)"
You may want to reformulate this just for clarity, when I read it the first time it came off as "seasoned developers spread these mistakes - but not me" which I guess is the exact opposite of what you wanted to express.
Mar 18 2013 05:24 AM
I would like to see more work on this article. I am not a native English speaker, so excuse me if I am wrong, but for me some sentences do not sound right.
A bit of nitpicking:
- I suggest avoiding explanations in parentheses whenever possible. You don't need to add "." at the end if the sentence in parentheses. I think it is not even a separate sentence, so you don't need to start with capital letter either.
- More separation between paragraphs to emphasize the problem, solution, conclusion would be nice.
- You are changing conversation style a lot. Sometimes it is neutral "this article", sometimes it is reader "you", sometimes "you all", and then back to writer "I do this...".
And the way it "should be done" is the way "you do it". I suggest to keep explanation and the story of your learning process separate.
Mar 18 2013 05:50 AM
"I feel that this mistake is being caused largely by the more seasoned developers on here (Also known as, not me)"
You may want to reformulate this just for clarity, when I read it the first time it came off as "seasoned developers spread these mistakes - but not me" which I guess is the exact opposite of what you wanted to express.
Agreed. I think it's okay to have some humility when discussing your level even if you are a beginner. The author just needs to find better ways of saying it.
Mar 18 2013 02:14 PM
I meant to say that I'm not a senior developer . Guess it had the opposite effect.
Mar 18 2013 02:48 PM
You got a point with that article, but while reading I was for a large part wondering when the justifying ends and the content starts on showing how people could do it.
Aug 22 2013 11:38 AM
I would like to see more work on this article. I am not a native English speaker, so excuse me if I am wrong, but for me some sentences do not sound right.
A bit of nitpicking:
- I suggest avoiding explanations in parentheses whenever possible. You don't need to add "." at the end if the sentence in parentheses. I think it is not even a separate sentence, so you don't need to start with capital letter either.
- More separation between paragraphs to emphasize the problem, solution, conclusion would be nice.
- You are changing conversation style a lot. Sometimes it is neutral "this article", sometimes it is reader "you", sometimes "you all", and then back to writer "I do this...".
And the way it "should be done" is the way "you do it". I suggest to keep explanation and the story of your learning process separate.
I agree.
You've got some useful information in the article, but it takes a lot of work on the part of the reader to find it.
Oct 28 2014 11:14 PM
Just to be clear, since I can interpret this article a couple ways: are you using "class" to mean literal, individual C++/Java class definitions, or are you meaning it in the context of broader modules or systems? I hope it's the latter.
Oct 31 2014 02:22 PM
I usually "jump in" and do it, which results in me needing to rewrite the code two or three times to make it work.
The reason why I don't pre-plan it is because of two problems.
The first is: How are you supposed to pre-plan it? What does that look like? What tools are available? What is the proper way to leverage those tools? I think this article needs to go more into this.
And the second is that my designs aren't pre-thought out (another big issue), so the lack of a solid design document means I can't really pre-plan my architecture, because the features are changing as the design is changing (this topic is beyond the scope of this article).
Oct 31 2014 11:04 PM
The "pre-visualization" phase of a project is usually called the "design" phase within a software development life cycle. It comes before coding and after requirements definition. You should never start coding without having planned what you're going to code and how its all going to work together. I used to just jump right in and say, "eh, I'll figure it out as I get there." It doesn't work very well for long, and I learned that through my attempt at creating a chess game a decade ago. I spent a lot of time and effort building a chess board and putting the chess pieces on there and getting them to move legally. It was tough and took a month. Then, the next step was to build in an artificial intelligence which would play against me. I had an inkling of an idea on how that should work using min-max trees, but then I had realized that the sloppy code architecture I wrote would just not support that. So, I gave up and scrapped the project. I vowed to plan a little bit more for my next project so I wouldn't run into the same problem.
How do you go about planning your software design properly? There are lots of books written on the subject, each with their own ideas which come with strengths and weaknesses. Some people want to do 100% of the design up front before writing code (which is common to the waterfall model). This can work and has worked in the past and still works, but it does have limitations and weaknesses. The biggest strength is that 100% of the software has been planned in the design phase, so you don't have any guess work to do during the construction phase. All problems have been identified and resolved (in a perfect world). It's like constructing a building based off of a blueprint. It's quite straight forward and refreshing to work with. However, in practice your requirements will change or something just can't be anticipated in a design (such as play balance). So, you need a more flexible design methodology.
When it comes to making video games, what seems to work best is a lot of iterative testing. What I do is write up a game design document to be as complete as reasonably possible, with the purpose of the game design document being a way to nail down what exactly I'm trying to build and how everything should work together. This is a detailed description of the game as its envisioned in my head. It forces me to write out every mechanic and describe how it will work with other mechanics. Usually through the process of writing it all out, I can start to see some of the issues which will emerge and resolve them by spending more effort designing a solution. The wrong time to find out that something doesn't work is after you've implemented it and try to get it to work with something else (for example, chess + AI) only to find out that it's not workable in its current state.
I sincerely believe that the amount of development refactoring time is inversely proportionate to the amount of time spent planning and designing (but there's a diminishing value from excessive design). In my current project, I treat my game design document as a sort of "hand waving gesture" vision on all the things I'm trying to build and how they should work together. The document is about 60% accurate, with the remaining 40% being fuzzy things that get worked out through iterative play testing and development. This is good enough to let me know what I'm trying to build and where I'm trying to go with it and gets me to a point where I can iterate and improve something tangible.
Eventually, all of my game systems get too big to store in my head, so I have to start drawing diagrams of how all of my game objects interact and work together (and where it all lives in memory!). I tried maintaining an MS paint class diagram, but found it requires constant updates or else it quickly gets out of date and useless. Instead, I mostly resort to drawing my current problem diagram in MS paint to visualize the problem and how to resolve it. The diagrams I use are 100% whatever I want and have the meaning of whatever I want.
This is fine for a lone programmer who doesn't have to communicate with other programmers on the systems they're building, but if you do have more than one developer on the team, you'll have to come to an agreement on a standard for diagrams so that everyone is communicating in the same way (which is where more formalized data models come in handy).
At the end of the day though, all of the data models and designs are just chaff / overhead costs to act as scaffolding to build what needs to be built -- don't waste a lot of time making them look pretty because it doesn't matter. Its purpose is to help you write more architecturally solid code, which solves the problem you're trying to tackle, whatever that is. If you aren't or haven't been in the habit of carefully planning out what you're going to do, it's good to start sooner rather than later so that you get the experience and figure out what works for you and where you can improve. It's especially good to figure this out on projects which are more forgiving of mistakes (ie, smaller scope projects, hobby projects, school projects)
|
2017-02-25 04:41:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3847602605819702, "perplexity": 727.9488265196142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171664.76/warc/CC-MAIN-20170219104611-00127-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://cs.stackexchange.com/questions/110402/why-are-ambiguous-grammars-bad/110554
|
I understand that if there exist 2 or more left or right derivation trees, then the grammar is ambiguous, but I am unable to understand why it is so bad that everyone wants to get rid of it.
• Related but not identical: softwareengineering.stackexchange.com/q/343872/206652 (disclaimer: I wrote the accepted answer) – marstato Jun 9 at 21:45
• See also: "Finding an unambiguous grammar". – Rob Jun 10 at 6:08
• Indeed unambiguous form are better for practical uses, unambiguous form use less number of productions rules build smaller tree in high (hence efficient compiler-take less time to parse). Most tools provide ability resolve ambiguity explicitly out side grammar. – Grijesh Chauhan Jun 10 at 8:54
• "everyone wants to get rid of it". Well, that's just not true. In commercially relevant languages, it's common to see ambiguity added as languages evolve. E.g. C++ intentionally added the ambiguity std::vector<std::vector<int>> in 2011, which used to require a space between >> before. The key insight is that these languages have many more users than vendors, so fixing a minor annoyance for users justifies a lot of work by implementors. – MSalters Jun 11 at 7:16
Consider the following grammar for arithmetic expressions: $$X \to X + X \mid X - X \mid X * X \mid X / X \mid \texttt{var} \mid \texttt{const}$$ Consider the following expression: $$a - b - c$$ What is its value? Here are two possible parse trees:
According to the one on the left, we should interpret $$a-b-c$$ as $$(a-b)-c$$, which is the usual interpretation. According to the one on the right, we should interpret it as $$a-(b-c) = a-b+c$$, which is probably not what was intended.
When compiling a program, we want the interpretation of the syntax to be unambiguous. The easiest way to enforce this is using an unambiguous grammar. If the grammar is ambiguous, we can provide tie-breaking rules, like operator precedence and associativity. These rules can equivalently be expressed by making the grammar unambiguous in a particular way.
Parse trees generated using syntax tree generator.
• @HIRAKMONDAL The fact that the syntax is ambiguous is not real issue. the problem is that the two different parse trees have different behaviour. If your language has an ambiguous grammar but all parse trees for an expression are semantically equivalent then that wouldn't be a problem (e.g. take Yuval example and consider the case where your only operator +). – Bakuriu Jun 9 at 21:39
• @Bakuriu What you said is true, but "semantically equivalent" is a tall order. For example, floating point arithmetic is actually not associative (so the two "+" trees would not be equivalent). Additionally even if the answer came out the same way, undefined evaluation order matters a lot in languages where expressions can have side effects. So what you said is technically true but in practice it would be very unusual for a grammar's ambiguity to have no repercussions to the use of that grammar. – Richard Rast Jun 10 at 2:03
• Some languages nowadays check for integer overflow in additions, so even a+b+c for integers depends on the order of evaluation. – gnasher729 Jun 10 at 7:30
• Even worse, in some cases the grammar doesn't provide any way to achieve the alternate meaning. I've seen this in query languages, where the choice of escape grammar (e.g. double the special character to escape it) makes certain queries impossible to express. – OrangeDog Jun 10 at 12:55
In contrast to the other existing answers [1, 2], there is indeed a field of application, where ambiguous grammars are useful. In the field of natural language processing (NLP), when you want to parse natural language (NL) with formal grammars, you've got the problem that NL is inherently ambiguous on different levels [adapted from Koh18, ch. 6.4]:
• Syntactic ambuigity:
Peter chased the man in the red sports car
Was Peter or the man in the red sports car?
• Semantic ambuigity:
Peter went to the bank
A bank to sit on or a bank to withdraw money from?
• Pragmatic ambuigity:
Two men carried two bags
Did they carry the bags together or did each man carry two bags?
Different approaches for NLP deal differently with processing in general and in particular these ambuigities. For example, your pipeline might look as follows:
1. Parse NL with ambiguous grammar
2. For every resulting AST: run model generation to generate ambiguous semantic meanings and to rule out impossible syntactic ambiguities from step 1
3. For every resulting model: save it in your cache.
You do this pipeline for every sentence. The more text, say, from the same book you process, the more you can rule out impossible superfluous models, which survived until step 3, from previous sentences.
As opposed to programming language, we can let go of the requirement that every NL sentence has precise semantics. Instead, we can just bookkeep multiple possible semantic models throughout parsing of larger texts. From while to while, later insights help us to rule out previous ambiguities.
If you want to get your hands dirty with parsers being able to output multiple derivations for ambiguous grammar, have a look at the Grammatical Framework. Also, [Koh18, ch. 5] has an introduction to it showing something similar to my pipeline above. Note though that since [Koh18] are lecture notes, the notes might not be that easy to understand on their own without the lectures.
References
[Koh18]: Michael Kohlhase. "Logic-Based Natural Language Processing. Winter Semester 2018/19. Lecture Notes." URL: https://kwarc.info/teaching/LBS/notes.pdf. URL of course description: https://kwarc.info/courses/lbs/ (in German)
[Koh18, ch. 5]: See chapter 5, "Implementing Fragments: Grammatical and Logical Frameworks", in [Koh18]
[Koh18, ch. 6.4] See chapter 6.4, "The computational Role of Ambiguities", in [Koh18]
• Thanks a ton.. I had the same doubt and u cleared it.. :) – HIRAK MONDAL Jun 10 at 9:25
• Not to mention problems with Buffalo buffalo buffalo Buffalo buffalo buffalo ... for a suitable number of buffalo – Hagen von Eitzen Jun 10 at 11:13
• You write, “in contrast,” but I’d call this the other side of the coin from what I answered. Parsing natural languages with their ambiguous grammars is so hard that traditional parsers can’t do it! – Davislor Jun 10 at 16:46
• @ComFreek I should be more precise here. A brief look at GF (Thanks for the link!) shows that it reads context-free grammars with three extensions (such as allowing reduplication) and returns a list of all possible derivations. Algorithms to do that have been around since the ’50s. However, being able to handle fully-general CFGs means your worst-case runtime blows up, and in practice, even when using a general parser such as GLL, software engineers try to use a subset of CFGs, such as LL grammars, that can be parsed more efficiently. – Davislor Jun 11 at 17:07
• @ComFreek So it’s not that computers can’t handle CFG (although natural languages are not really context-free and actually-useful machine translation uses completely different techniques). It’s that, if you require your parser to handle ambiguity, that rules out certain shortcuts that would have made it more efficient. – Davislor Jun 11 at 17:12
Even if there’s a well-defined way to handle ambiguity (ambiguous expressions are syntax errors, for example), these grammars still cause trouble. As soon as you introduce ambiguity into a grammar, a parser can no longer be sure that the first match it gets is definitive. It needs to keep trying all the other ways to parse a statement, to rule out any ambiguity. You’re also not dealing with something simple like a LL(1) language, so you can’t use a simple, small, fast parser. Your grammar has symbols that can be read multiple ways, so you have to be prepared to backtrack a lot.
In some restricted domains, you might be able to get away with proving that all possible ways to parse an expression are equivalent (for example, because they represent an associative operation). (a+b) + c = a + (b+c).
Does IF a THEN IF b THEN x ELSE y mean
IF a THEN
IF b THEN
x
ELSE
y
or
IF a THEN
IF b THEN x
ELSE
y
? AKA the dangling else problem.
• That's a good example showing that even a non-ambiguous grammar (as in Java, C, C++, ...) allows apparent (!) ambiguities from a human perspective. Even though we are formally and computationally fine, we now got more of a UX/bug-free development problem. – ComFreek Jun 10 at 14:29
Take the most vexing parse in C++ for example:
bar foo(foobar());
Is this a function declaration foo of type bar(foobar()) (the parameter is a function pointer returning a foobar), or a variable declaration foo of type int and initialized with a default initialized foobar?
This is differentiated in compilers by assuming the first unless the expression inside the parameter list cannot be interpreted as a type.
when you get such an ambiguous expression the compiler has 2 options
1. assume that the expression is a particular derivation and add some disambiguator to the grammar to allow the other derivation to be expressed.
2. error out and require disambiguation either way
The first can fall out naturally, the second requires that the compiler programmer knows about the ambiguity.
If this ambiguity stays undetected then it is possible that 2 different compilers default to different derivations for that ambiguous expression. Leading to code being non-portable for non-obvious reasons. Which leads people to assume it's a bug in one of the compilers while it's actually a fault in the language specification.
I think the question contains an assumption that's only borderline correct at best.
In real life it's pretty common to simply live with ambiguous grammars, as long as they aren't (so to speak) too ambiguous.
For example, if you look around at grammars compiled with yacc (or similar, such as bison or byacc) you'll find that quite a few produce warnings about "N shift/reduct conflicts" when you compile them. When yacc encounters a shift/reduce conflict, that signals an ambiguity in the grammar.
A shift/reduce conflict, however, is usually a fairly minor problem. The parser generator will resolve the conflict in favor of the "shift" rather than the reduce. The grammar is perfectly fine if that's what you want (and it does seem to work out perfectly well in practice).
A shift/reduce conflict typically arises in a case on this general order (using caps for non-terminals and lower-case for terminals):
A -> B | c
B -> a | c
When we encounter a c, there's an ambiguity: should we parse the c directly as an A, or should we parse it as a B, which in turn is an A? In a case like this, yacc and such will choose the simpler/shorter route, and parse the c directly as an A, rather than going the c -> B -> A route. This can be wrong, but if so, it probably means you have a really simple error in your grammar, and you shouldn't allow the c option as a possibility for A at all.
Now, by contrast, we could have something more like this:
A -> B | C
B -> a | c
C -> b | c
Now when we encounter a c we have conflict between whether to treat the c as a B or a C. There's a lot less chance that an automatic conflict resolution strategy is going to choose what we really want. Neither of these is a "shift"--both are "reductions", so this is a "reduce/reduce conflict" (which those accustomed to yacc and such generally recognize as a much bigger problem than a shift/reduce conflict).
So, although I'm not sure I'd go quite so far as to say that anybody really welcomes ambiguity in their grammar, in at least some cases it's minor enough that nobody really cares a whole lot about it. In the abstract they might like the idea of removing all ambiguity--but not enough to always actually do it. For example, a small, simple grammar that contains a minor ambiguity can be preferable to a larger, more complex grammar that eliminates the ambiguity (especially when you get into the practical realm of actually generating a parser from the grammar, and finding that the unambiguous grammar produces a parser that won't run on your target machine).
• man, wish i'd had this excellent explanation of shift-reduce conflicts 5 months ago! ^^; +1 – HotelCalifornia Jun 12 at 8:23
|
2019-10-14 05:35:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.56792813539505, "perplexity": 1475.0997546247854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649232.14/warc/CC-MAIN-20191014052140-20191014075140-00024.warc.gz"}
|
https://tex.stackexchange.com/questions/432343/what-is-the-correct-value-to-move-horizontal-rule-and-join-it-with-vertical-rule
|
what is the correct value to move horizontal rule and join it with vertical rule between minipage?
I am redoing the logos (in word) used by my school, I have managed to align the images and the text, but, I can not find the correct value to move the horizontal rule and join it with the vertical rule. At first sight they seem to be united, but, a little "zoom" notes that they are separated. I have tried with several values in \rule[...], but none has worked. This is the MWE:
\documentclass[10pt]{article}%
\usepackage[osf]{libertine}
\usepackage[most]{tcolorbox}
%\setlength{\parindent}{0pt} % some need a parindent
% box for save dimension
\newsavebox\mysavebox
\sbox\mysavebox{\includegraphics[width=1.45cm,height=1.88cm]{example-image-a}}
% \colelogo
\NewDocumentCommand\colelogo{}{%
\usebox\mysavebox%
\end{minipage}}}%
% \coledescript
\NewDocumentCommand\coledescript{}{%
\begin{center}
\strut\textsc{\bfseries\Large Colegio XXXXXXXXXX YYYYYY de XXXXX}\par\vspace{0.5pt}
\emph{Enseñanza Básica}\par\vspace{0.5pt}
\emph{Formando Personas}\par
\vfill
\sffamily Avenida YYYYYY XXXXXXXXX Nª 123 Fono: (12) 325678 XXXXXXXXXXX%
\end{center}
% \coleyear
\NewDocumentCommand\coleyear{}{%
\begin{center}
\strut\bfseries\Large 2018%
\end{center}
% \colenota
\NewDocumentCommand\colenota{}{%
\begin{flushright}
\tcbox[colback=white,left=0mm,right=0mm,top=0mm,bottom=0mm,%
boxsep=0mm,arc=2mm,boxrule=0.7pt,title style={draw=none,fill=none}]{%
\rule{0pt}{\ht\mysavebox}\rule{\wd\mysavebox}{0pt}} % vertical x horizontal
\end{flushright}
\end{minipage}}}%
% \colevrule
\NewDocumentCommand\colevrule{}{%
\rule[-0.9\ht\mysavebox]{0.4pt}{1.1\ht\mysavebox}\hspace{-4pt}}
\setlength{\fboxsep}{0pt}
\colelogo\coledescript\colevrule\coleyear%
\IfBooleanTF{#1}
{\par\noindent\rule{\linewidth}{0.4pt}\par} % with a star
{\par\noindent\hfil\rule{0.8\textwidth}{0.4pt}\hfil\par} % without a star
}
\setlength{\fboxsep}{0pt}
\colelogo\coledescript\colevrule\coleyear\colenota
\IfBooleanTF{#1}
{\par\noindent\rule{\linewidth}{0.4pt}\par} % with a star
{\par\noindent\hfil\rule{0.8\textwidth}{0.4pt}\hfil\par} % without a star
}
\pagestyle{empty}
\begin{document}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\vspace{1cm}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\vspace{1cm}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\vspace{1cm}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\end{document}
I use the libertine font and the documents are in 10pt, 11pt, 12pt and the paper size is the one used in my country (they are not enough in the MWE). An image to clarify more: Saludos
• Couldn't you just use a simple tabularx for that? – user121799 May 19 '18 at 2:01
• @marmot: tabularx It's an option, but, this will be part of a package that I'll use for tests and exams ... I'm more used to minipage, but, I do not rule it out :) – Pablo González L May 19 '18 at 2:38
• I see. (Of course you are loading tcolorbox, which in turn loads tons of other packages, in particular TikZ, which may allow you to build a more straightforward solution, possibly by also loading tikzpagenodes.) – user121799 May 19 '18 at 3:14
This solution replaces some of the \pars with \hrule height0pt which packs tight. Also put some of the \rules inside \smash to remove the vertical space reserved (essentially \baselineskip).
\documentclass[10pt]{article}%
\usepackage[most]{tcolorbox}
%\setlength{\parindent}{0pt} % some need a parindent
% box for save dimension
\newsavebox\mysavebox
\sbox\mysavebox{\includegraphics[width=1.45cm,height=1.88cm]{example-image-a}}
% \colelogo
\NewDocumentCommand\colelogo{}{%
\usebox\mysavebox%
\end{minipage}}}%
% \coledescript
\NewDocumentCommand\coledescript{}{%
\begin{center}
\strut\textsc{\bfseries\Large Colegio XXXXXXXXXX YYYYYY de XXXXX}\par\vspace{0.5pt}
\emph{Enseñanza Básica}\par\vspace{0.5pt}
\emph{Formando Personas}\par
\vfill
\sffamily Avenida YYYYYY XXXXXXXXX Nª 123 Fono: (12) 325678 XXXXXXXXXXX%
\end{center}
% \coleyear
\NewDocumentCommand\coleyear{}{%
\begin{center}
\strut\bfseries\Large 2018%
\end{center}
% \colenota
\NewDocumentCommand\colenota{}{%
\begin{flushright}
\tcbox[colback=white,left=0mm,right=0mm,top=0mm,bottom=0mm,%
boxsep=0mm,arc=2mm,boxrule=0.7pt,title style={draw=none,fill=none}]{%
\rule{0pt}{\ht\mysavebox}\rule{\wd\mysavebox}{0pt}} % vertical x horizontal
\end{flushright}
\end{minipage}}}%
% \colevrule
\NewDocumentCommand\colevrule{}{%
\rule[-0.9\ht\mysavebox]{0.4pt}{1.1\ht\mysavebox}\hspace{-4pt}}
\setlength{\fboxsep}{0pt}%
\colelogo\coledescript\colevrule\coleyear%
\IfBooleanTF{#1}%
{\hrule height0pt \smash{\rule{\textwidth}{0.4pt}}\par}% with a star
{\hrule height0pt \hfil\smash{\rule{0.8\textwidth}{0.4pt}}\hfil\par}% without a star
}
\setlength{\fboxsep}{0pt}
\colelogo\coledescript\colevrule\coleyear\colenota
\IfBooleanTF{#1}
{\hrule height0pt \smash{\rule{\linewidth}{0.4pt}}\par} % with a star
{\hrule height0pt \hfil\smash{\rule{0.8\textwidth}{0.4pt}}\hfil\par} % without a star
}
\pagestyle{empty}
\begin{document}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\vspace{1cm}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\vspace{1cm}
text text text text text text text text text text text text text text text
text text text text text text text text text text text text text text text
text text text text text text text text text text
\vspace{1cm}
• Thanks for the answer, you can tell me how to fix the \rule{0.8\textwidth}{0.4pt}, it is not centered as it should... – Pablo González L May 19 '18 at 2:36
• Replace \smash rule for {\hrule height0pt \begingroup\centering\rule{0.8\linewidth}{0.4pt}\par\endgroup} % without a star and now it works perfectly ... Thank you very much – Pablo González L May 19 '18 at 5:05
|
2019-11-21 09:28:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962782263755798, "perplexity": 415.2530224054033}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00254.warc.gz"}
|
https://homepage.cs.uiowa.edu/~jgmorrs/eecs662f17/Notes-on-folds.html
|
# Notes on folds
## Introduction
The fixed point combinator gives us a very general way of expressing recursive computations. However, we lose many of the desirable properties of our language---for example, we can use the fixed point combinator to encode arbitrary diverging program behavior. In these notes, we consider an alternative approach, based on the recursive structure of data types.
Sections followed by an asterix contain more advanced material. You may find this interesting, but should not expect to see it appear on the exam.
## Type operators
Before we can talk about recursive data types, and recursive computation based on those data types, we need to lay a bit of ground work. We will introduce the idea of type operators, allowing us to abstract over the structure of types. These will play roughly the same role as the "step" functions did in our account of fixed point terms.
Consider several examples of recursive types: the natural numbers Nat, and lists and trees natural numbers NatList and NatTree. We might express them using data type declarations as follows.
data Nat = Z | S Nat
data NatList = Nil | Cons Nat NatList
data NatTree = Leaf | Branch Nat NatTree NatTree
We can distinguish two parts of these recursive definitions. One part is the information present at each "iteration" of the recursive type. In the case of Nat, that's simply whether the type represents 0 or the successor of another natural; for NatList, we capture whether the list is empty, or a cons node, and in the latter case also capture the value stored in the node. The second part is the recursive structure.
To separate these two parts, we introduce the idea of type operators. Intuitively speaking, a type operator is a type with a "hole", which we will use to indicate the points of recursion. Formally, we can consider this to be a type with an identified free type variable. For example, the type operators extracted from the recursive definitions about would be:
\begin{align*} N(a) &= 1 + a \\ L_N(a) &= 1 + (N \times a) \\ T_N(a) &= 1 + (N \times a \times a) \end{align*}
Each of these type operators also gives rise to a term operator, called its map, which modifies the $a$ value while leaving the rest of the structure unchanged. These operators have the following typing rule:
$$\frac{\Gamma \vdash f : t \to u} {\Gamma \vdash \Map F f : F(t) \to F(u)}$$
For example, $\Map N {even}$ would have type $1 + \Nat \to 1 + \Bool$, while $\Map {L_N} {even}$ would have type $1 + (\Nat \times \Nat) \to 1 + (\Nat \times \Bool)$.
Optional exercise. Write out the terms that implement $\Map N - : (t \to u) \to (N(t) \to N(u))$ and $\Map {L_N} - : (t \to u) \to (L_N(t) \to L_N(u)$. Treat $t$ and $u$ as concrete but arbitrary types.
## Functors*
In general, structures that have this kind of transformer are called functors, and the fact that these type operators are functors will be crucial to defining the meaning of computations over the corresponding recursive types. Unfortunately, it turns out that not all type operators describe functors.
Optional exercise. Attempt to write map functions for the following type operators. What goes wrong for $F_{Neg}$?
\begin{align*} F_{Pos}(a) &= \Nat \to a \\ F_{Neg}(a) &= a \to \Nat \\ \end{align*}
We can formally capture those operators which describe functors by distinguishing between positive and negative occurrences of type variables. We introduce two operators $fv^+(t)$ and $fv^-(t)$ to describe the positive and negative variables in type $t$. They are defined as follows.
$$\begin{gather*} fv^+ = \Set a \qquad fv^- = \emptyset \qquad fv^\pm(t \to u) = fv^\mp(t) \cup fv^\pm(u) \\ fv^\pm(t \times u) = fv^\pm(t) \cup fv^\pm(u) \qquad fv^\pm(t + u) = fv^\pm(t) \cup fv^\pm(u) \end{gather*}$$
The definitions of $fv^\pm(t)$ abbreviate both the positive and negative definitions for type $t$; subsequent references to $\mp$ mean the opposite polarity as $\pm$. For example, the definition for functions abbreviates the following two definitions.
$$fv^+(t \to u) = fv^-(t) \cup fv^+(u) \qquad fv^-(t \to u) = fv^+(t) \cup fv^-(u)$$
Now, we can say that the type operator $F(a) = t$ defines a functor exactly when $a \not\in fv^-(t)$.
Optional exercise. Write down the map function for $F_{NN}(a) = (a \to \Nat) \to \Nat$.
## Recursive types and folds
We build a recursive type out of a type operator the same way we built a recursive term out of a step function: by applying a (least) fixed point operator. The least fixed point operator for types is conventionally written with the green letter $\mu$. We would recover the original Nat and NatList types as the least fixed points of the corresponding type operators:
$$\Nat = \mu N \qquad \mathtt{NatList} = \mu L_N$$
As for fixed points of step functions, the intuition of these types is that they give the infinite iteration of the type operator. For example, the two fixed points above are intuitively equivalent to:
\begin{align*} \Nat &= 1 + (1 + (1 + (1 + \dots))) \\ \mathtt{NatList} &= 1 + (\Nat \times (1 + (\Nat \times (1 + \dots)))) \end{align*}
In this intuitive understanding, zero would be represented by $\Inl{()}$, while two would be represented by $\Inr{(\Inr{(\Inl{()})})}$. Unfortunately, this are not quite this simple. In particular, this intuitive view gives us no way to define computation over values of recursive data types.
To get our actual understanding of these types, we need to introduce explicit introduction and elimination forms for least fixed point types. This will get us back in familiar ground, where each type comes with its own introduce and elimination form; it will also allow us to define recursive computations over these types.
The introduction rule for type $\mu F$ is called $\mathsf{in}_F$, while the elimination form is called $\mathtt{fold}_F$. Their typing rules are as follows.
$$\frac{\Gamma \vdash e : F(\mu F)} {\Gamma \vdash \In F e : \mu F} \qquad \frac{\Gamma \vdash f : F(t) \to t} {\Gamma \vdash \Fold F f : \mu F \to t}$$
The $\mathtt{In}_F$ term "wraps" one expansion of a recursive type into an instance of the recursive type. For example, the term $\Inl{()}$ is an instance of type $N(\mu N)$, so the term $\In N {(\Inl{()})}$ is an instance of type $\mu N$. Similarly, two would be represented by the following term.
$$\In N {(\Inr {(\In N {(\Inr {(\In N {(\Inl{()})})})})})}$$
Intuitively, the term $\mathtt{fold}_F\,f$ replaces each instance of the $\mathtt{In}_F$ constructor with an application of the $f$ function. To see this in action, we consider several sample folds, defining simple operations.
## Example folds
We begin with a simple predicate, testing for even numbers. This can be defined as follows.
$$even = \Fold N {\backslash x : 1 + \Bool \to \CCase x z {\mathtt{True}} b {\mathtt{not}\,b}}$$
Consider the action of $even$ on 0 (encoded by $\In N {(\Inl {()})}$). We want to replace the (one) instance of the $\mathtt{In}_F$ constructor with the function in the fold.
\begin{align*} &(\backslash x : 1 + \Bool \to \CCase x z {\mathtt{True}} b {\mathtt{not}\,b}) \, (\Inl{()}) \\ & = \CCase {\Inl{()}} z {\mathtt{True}} b {\mathtt{not}\,b}) \\ & = \mathtt{True} \end{align*}
Now, consider the action of $even$ in 1 (encoded by $\In N {(\Inr {(\In N {(\Inl {()})})})}$). Again, we want to replace the two instance of the $\mathtt{In}_N$ constructor with the function in the fold; to simplify the example, we abbreviate that function as $e$.
\begin{align*} & e \, (\Inr {(e \, (\Inl {()}))}) \\ & = e \, (\Inr {\mathtt{True}}) \tag{*}\\ & = (\backslash x : 1 + \Bool \to \CCase x z {\mathtt{True}} b {\mathtt{not}\,b}) \, (\Inr {\mathtt{True}}) \\ & = \CCase {\Inr {\mathtt{True}}} z {\mathtt{True}} b {\mathtt{not}\,b} \\ & = \mathtt{not} \, \mathtt{True} \\ & = \mathtt{False} \end{align*}
In the $*$ labeled step, we rely on the previous example to replace $e\,(\Inl{()})$ with $\mathtt{True}$. The remainder of the evaluation is unsurprising.
Here are several other examples of simple folds: two encodings of addition, and one encoding of tree sum. (We rely on familiar notation for naturals in the tree sum function purely for convenience.
\begin{align*} plus_1 &= \backslash m : \mu N \to \Fold N {\backslash x : 1 + \mu N \to \CCase x z m p {\In N {(\Inr{p})}}} \\ plus_2 &= \mathtt{fold}_N \, (\begin{array}[t]{@{}l} \backslash x : 1 + (\mu N \to \mu N) \to \\ \quad \CCase x z {(\backslash n : \mu N \to n)} f {(\backslash n : \mu N \to \In N {(\Inr {(f\,n)})})}) \end{array} \\ treeSum &= \mathtt{fold}_{T_N} \, (\begin{array}[t]{@{}l} \backslash x : 1 + (\mu N \times \mu N \times \mu N) \to \\ \quad \CCase x z 0 p {\Let{(x,y,z)}{p}{x + y + z}}) \end{array} \end{align*}
## Evaluation rules
Finally, we can give formal evaluation rules for recursive types.
$$\frac{e \Eval v} {\In F e \Eval \Inv v} \qquad \frac{f \Eval \lambda x. e_1 \quad e \Eval \Inv w \quad f (\Map F {\Fold F f} \, w) \Eval v} {\Fold F f \, e \Eval v}$$
As is hopefully unsurprising at this point, the computational content is all in the elimination form. When eliminating a fold, we begin by evaluating both the fold function and the value. Then, intuitively, we want to replace the outer $\mathsf{in}$ with an application of the fold function. However, before doing so, we need to account for the recursion. We do this by relying on the $\mathsf{map}$ defined for type operator $F$.
## Encoding folds in System F*
\begin{align*} \Tr{\mu F} &= \Pi a. (F(a) \to a) \to a \\ \Tr{\In F e : \mu F} &= \Lambda a. \backslash f : (F(a) \to a) \to f\,(\Map F {\Tr{\Fold F f}}\,e) \\ \Tr{\Fold F f : t} &= \backslash e : \Tr{\mu F} \to e \, [t] \, \Tr{f} \end{align*}
|
2022-12-05 01:16:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99993896484375, "perplexity": 1005.8495509142837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00253.warc.gz"}
|
https://www.scienceforums.net/topic/50899-finding-an-alien-species-and-its-effects-on-religion/page/3/#comment-563830
|
Finding an alien species and its effects on religion
Recommended Posts
You have absolutely no evidence of intelegent aliens much less aliens writing hamlet. I never claimed that having the correct ingredients and circumstances for life will produce intelligent life. As a matter of fact i happen to be a member of the "Rare Earth" school of thought, life is common, i expect life to be found in our solar system on at least 4 different planets/moons. Intelligent life is another matter altogether. The idea that intelligent life means they write hamlet is just silly severian....
The contention here was life vs god, we have a planet full of life but no evidence of god. this i have to admit does not prove there is no God but it shows we have evidence of life and lots of circumstantial evidence that life might exist on other planets but life does not = intelligence nor does intelligence = hamlet. Now please stop putting words in my mouth severian....
*sigh* You are just making this up as you go along, aren't you? We were discussing intelligent life. You yourself earlier said "In that vein I was thinking of aliens who also worship a god who is personified by their planets moon."If you keep moving the goal-posts...
• Replies 72
• Created
Popular Days
No I'm not making this up as we go along, you are not paying attention but lets agree to disagree on this and allow the thread to go back to it's original question...
Share on other sites
• 4 weeks later...
I am often surprised that people who say that they are atheists because there is "no evidence for a God", often believe in the existence of extraterrestrial life.
Good point and you are right! However However in their defence, it is possible to use mathmatics and observation to at least attempt to predict the probability that aliens exist. It's not so easy to do that with "god".
Share on other sites
Good point and you are right! However However in their defence, it is possible to use mathmatics and observation to at least attempt to predict the probability that aliens exist. It's not so easy to do that with "god".
Is it? I have never seen that done. We have been unable to make synthetic life on Earth, so we have only one single abiogenesis event where life came from no-life spontaneously. I don't think we can use one data point to provide a probability.
Share on other sites
so we have only one single abiogenesis event where life came from no-life spontaneously. I don't think we can use one data point to provide a probability.
You should seriously consider catching yourself up on the current research surrounding abiogenesis, otherwise, you may find yourself inadvertently continuing to strawman it when you post about it.
Share on other sites
You should seriously consider catching yourself up on the current research surrounding abiogenesis, otherwise, you may find yourself inadvertently continuing to strawman it when you post about it.
Which bit of my post do you think was in error?
Share on other sites
Which bit of my post do you think was in error?
That would be this bit:
where life came from no-life spontaneously.
Share on other sites
Is it? I have never seen that done. We have been unable to make synthetic life on Earth, so we have only one single abiogenesis event where life came from no-life spontaneously. I don't think we can use one data point to provide a probability.
You might want to have a look at these.
First: The Drake Equation:
http://en.wikipedia.org/wiki/Drake_equation
There are still a few unknowns in this, but we are startign to get a good handle on some, like how common planets are.
R*: we know this fairly well
fp: we have a good idea of now with all the planet hunting projects going now.
ne: we are starting to be able to put a number to because of the Kepler mission
ft: we can take a good guess at, and if we find life on another planet in our solar system then this will give us a better idea and based on our increaseing understanding of chemistry we are failry sure this is going to be a moderalty high number
fi: well we don't really know as we only have one data point and until we learn of another civilization we can't really pint this one down
fe: again, a complete unknown
L: we can base this off only our civilization, but being a single data point it is still just a guess.
So there are only two factors that are complete unknowns, two more that are uncertain but we have a decent basis for the guess, another two that we have a basic idea of and are getting more accurate data daily and the others we know quite well.
When you put some of the numbesr in it, there should be quite a few civilizations out there at the moment and more that have long since become extinct (iirc: there is a good chance that there is an alien civilization within 100 to 200 light years of us).
So when we say that we believe that there are Aliens out there, it is becuase the chances of them existing are quite high, but it is a chance so we do acnowledge that they might not exist.
As for the second part:
Craig Ventner ( http://en.wikipedia.org/wiki/Craig_Venter ) has succeeded in creating an artifical organism in the lab.
Also Dr Jack Szostak (http://en.wikipedia.org/wiki/Jack_W._Szostak) has put forward about (ie repeatable in the lab) explaination of how life could have got started and the environments needed to do so were present on early Earth.
Share on other sites
You might want to have a look at these.
First: The Drake Equation:
http://en.wikipedia.org/wiki/Drake_equation
There are still a few unknowns in this, but we are startign to get a good handle on some, like how common planets are.
It is $f_l$ that I object to most. It is usually set to 100% even though we have never observed this happening anywhere but on Earth. Is there any scientific evidence which supports the number they use?
With one data point all we can say is that $f_l = 1 \pm 1$.
As for the second part:
Craig Ventner ( http://en.wikipedia.org/wiki/Craig_Venter ) has succeeded in creating an artifical organism in the lab.
Also Dr Jack Szostak (http://en.wikipedia.org/wiki/Jack_W._Szostak) has put forward about (ie repeatable in the lab) explaination of how life could have got started and the environments needed to do so were present on early Earth.
Ventner's work was creating an organism with artificial genes - not artificial life.
Can you link to the description of the lab experiments making artificial life that Szostak has carried out? I couldn't find them.
Edited by Severian
Share on other sites
It is $f_l$ that I object to most. It is usually set to 100% even though we have never observed this happening anywhere but on Earth. Is there any scientific evidence which supports the number they use?
With one data point all we can say is that $f_l = 1 \pm 1$.
I don't think I have ever seen it set at 100% because that would mean Mars would have had to have life (in the past) as well as a number of moons as well (Europa, etc). As these have the necesary conditions to support the formation of life (liquid water, organic chemistry and energy sources) then these would have to already have had life detected on them. As we don't know whether Mars ever had life (but we do know that it did have a watery past), and we have not yet made a detailed enough study of the Moons with the potential we can not conclusivly say that it will be at 100%.
At best we could say that life has a 1 in 6 (approximately as there is around half a dozen candidates in our solar system for life to have got started on). But that is one of the more higly optimistic values I have heard.
Ventner's work was creating an organism with artificial genes - not artificial life.
Can you link to the description of the lab experiments making artificial life that Szostak has carried out? I couldn't find them.
Here is an interview with him:
And a link to a PDF copy of the article in Nature: http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Mansy_et_al_Nature_2008.pdf
Share on other sites
That would be this bit:
where life came from no-life spontaneously.
What's wrong with it?
Share on other sites
The use of the word spontaneous. That strawmans the actual area of study quite profoundly.
It's not like a functioning cell just shit itself into existence one day, which is roughly what Severian is implying. It was a gradual process of slow changes.
Share on other sites
The use of the word spontaneous. That strawmans the actual area of study quite profoundly.
It's not like a functioning cell just shit itself into existence one day, which is roughly what Severian is implying. It was a gradual process of slow changes.
By what authority do you know what I was "implying"?
I have no problem with a slow evolution from complex molecule to "life", but presumably if one has a definition of "life", the molecules will at some point cross that boundary from "non-life" to "life". So life does, in some sense "just shit itself into existence one day", as you so eloquently put it.
Share on other sites
lol
Thats called being super 'argumentative'.
Share on other sites
I have no problem with a slow evolution from complex molecule to "life", but presumably if one has a definition of "life", the molecules will at some point cross that boundary from "non-life" to "life". So life does, in some sense "just shit itself into existence one day", as you so eloquently put it.
Well, that's only a problem for people who do have a definition of life. Personally I think it's a rather arbitrary distinction, and not a binary classification, even though it is usually used as one to signify things going from very alive to very dead.
Share on other sites
lol
Theres no such thing as a 'table'.
just a construction make of wood and nails for the purpose of holding things up off the ground.
Share on other sites
lol
Theres no such thing as a 'table'.
just a construction make of wood and nails for the purpose of holding things up off the ground.
Exactly! While when someone says "table" most people conjure up an image of a flat, rectangular wooden object with 4 legs to hold it up, but tables can also be made of wood, stone, glass, metal, plastic, and can have many shapes and sizes. Some types of tables form exclusively naturally. Sometimes a rock or log is used as a table and can during that time be called a table. Some tables are never used for eating on (pool table), some people don't eat at tables, and some objects similar to tables might instead be called a workbench even though they're not for sitting on and their owners seldom work at them. Some objects make better tables than others. Someone would have to carefully define table before they can conclude that tables cannot arise naturally.
Share on other sites
Whether or not there is a definitive explanation for the event of life from non-life, or an equation describing the probability of life on other planets, have no impact on the implications of an alien species theology and gods on our own.
The point of the topic was to discuss and rout the psychology of our theology and gods and how our theology would react to an aliens theology/gods. assuming of course that we had contact between our species and theirs. also another thought process would be, what would be the effects of religion on the alien species, both from our perspective and theirs.
Share on other sites
Well, that's only a problem for people who do have a definition of life. Personally I think it's a rather arbitrary distinction, and not a binary classification, even though it is usually used as one to signify things going from very alive to very dead.
Don't you think that makes it even harder to ascribe the probability of finding life on another planet? If you don't actually have a working definition of life, how are you going to say life exists?
Incidentally, do you think life exists on planet Earth?
Share on other sites
I am often surprised that people who say that they are atheists because there is "no evidence for a God", often believe in the existence of extraterrestrial life.
Belief in extraterrestrial life doesn't require you to assume an all knowing, all seeing omnipotent being the likes of which has never been seen before and is the creator of everything we know. Extraterrestrials (for me) are far more acceptable because we know we already exist - why can't something similar exist on another planet?
Because we have never seen or known God it is hard for me to imagine that "It" even exists at all given none of it's attributes are seen in nature (spontanious creation of matter for example)
Share on other sites
Don't you think that makes it even harder to ascribe the probability of finding life on another planet? If you don't actually have a working definition of life, how are you going to say life exists?
I think it is entirely down to an almost completely unsubstantiated guess. This goes for those who guess zero probability just as well.
Incidentally, do you think life exists on planet Earth?
Sure, but I don't think it tells us too much about how life in general must look. Also, there could be a different form of life on earth, not based on DNA/RNA, and we very likely not notice, not with our current level of technology. If that seems surprising, just consider that most bacteria we only know about from doing DNA analysis of soil.
If you like I can define Earth's DNA based life: It stores information as DNA from which it generates mRNA, uses ATP/GTP as an internal power source, has ribosomes to make protein, is enclosed in a membrane, and can replicate itself.
Share on other sites
I think it is entirely down to an almost completely unsubstantiated guess. This goes for those who guess zero probability just as well.
OK - I can agree with that.
Sure, but I don't think it tells us too much about how life in general must look. Also, there could be a different form of life on earth, not based on DNA/RNA, and we very likely not notice, not with our current level of technology. If that seems surprising, just consider that most bacteria we only know about from doing DNA analysis of soil.
If you like I can define Earth's DNA based life: It stores information as DNA from which it generates mRNA, uses ATP/GTP as an internal power source, has ribosomes to make protein, is enclosed in a membrane, and can replicate itself.
And that!
Share on other sites
you know, trees and plant "life" should be considered as being life.
Create an account
Register a new account
|
2023-01-27 18:15:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39463961124420166, "perplexity": 1201.34991580326}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00656.warc.gz"}
|
https://forum.snap.berkeley.edu/t/are-there-any-actual-uses-for-this/11201
|
# Are there any actual uses for this?
I have created a program that draws out a graph of numbers.
That's it in a nutshell.
Taking a random sample from this graph, you will find it is a number from 0-480, where the item's height is e^sqrt(x), where x is the number. I hope that's enough background?
(POSITIVE INFINITY is a reporter. Inside code is call(JS function(return Number.POSITIVE_INFINITY;)).)
I look at the data, and there is an eternal upwards trend (to be expected, numbers get bigger and bigger).
But now I am stuck with a question. What do I do with this data? What are use cases?? I created this program without knowing (well, I knew at one point and then forgot) what e even is, and now I have no idea what can be gleaned from the data.
Potentially necessary resources
EDIT: CODE ERROR! Example data is skewed and shouldn't be looked at! I forgot to change the / 5000 to / divisor. The project is fixed and normal, but the example data is old! Please run the project yourself to get new example data and see the project run. Apologies!
One neat fact about that is that the derivative of $$e^{x}$$ is $$e^{x}$$ itself, so the derivative of your graph is $$e^{\sqrt x}$$ is itself.
What is the derivative of something? I was quite literally just playing around with patterns when I discovered that e^x where x is anything creates some odd occurrences, so I decided to graph it.
The derivative of a function is the function that gives instantaneous rate of change of the original function at any particular point. (It's a calculus thing)
Correct me if I'm wrong: on a graph that represents a function that results in a sine, etc., the derivative is the slope at any given point?
Almost. The derivative is the function that gives what you said.
So, first of all, there's no need to use JS Function:
To answer your question, I think your program confusingly does two things: graph a function and make a table of values of the function. Imho, graphing is super useful, for helping people understand the behavior of the function, but the table of values isn't so useful, unless the function takes a really long time to compute, because it's easier to compute values on the fly as needed than to maintain a table. Tables of values were essential before computers, and there were serious mathematicians who spent their whole life making such tables. But you don't know their names, because their work hasn't lived on. (And of course, being made by human beings, their tables were full of errors!)
Lists of values are important when the data are empirical (number of babies born in year $$x$$) rather than from computable functions.
This means you can make your graphing program much simpler, <10 blocks altogether. (And then you can complicate it again by auto-scaling the result, i.e., compute all the values and find their maximum, then make the plot dividing by that maximum. But you don't want to do that when the function's max value is infinite! :~) )
I don't really think that does answer my question. I do know that this project could easily graph with a much lower block count, but it is important to me to have it store data in a table, so I can do something else with the data. In the future (probably tomorrow) I will make a smaller version without the table-storing, but in the meantime, this is how I want my project to be.
That's interesting! I didn't know that. It's important to me that this infinity be positive infinity. Is 1/0 positive infinity???
Here, yes. If you want negative infinity, do (-1)/0:
Or -(1/0):
The first part of that is right, but not the second part. You've forgotten the chain rule:
$${{\rm d}f(g(x))\over{\rm d}x} = {{\rm d}f(g(x))\over{\rm d}g(x)}\cdot {{\rm d}g(x) \over {\rm d}x}$$
$${{\rm d}e^{\sqrt x}\over{\rm d}x} = {{\rm d}e^{\sqrt x}\over{\rm d}\sqrt x}\cdot {{\rm d}x^{1/2} \over {\rm d}x} = e^{\sqrt x} \cdot {{x^{-1/2}}\over{1/2}} = {2e^{\sqrt x} \over \sqrt x}$$
(Edit: From memory. If you want to be sure, ask Wolfram Alpha.)
I asked a college student and they said to remember chain rule???
EDIT: i am talking to said college student on discord and they literally sent this to me 5sec after bh posted-
Indeed I have.
Edit: I'm dumb. You already gave the formula.
What do you mean, "here"? 1/0 is infinite, anywhere.
Not in math! There, it isn't infinity, but an undefined value.
completely offtopic but I don't know what category to make a topic for this in
Can anyone explain why I saw a "b" from PICK RANDOM?:
(that's a custom block I made, but I'm talking about the primitive PICK RANDOM.)
Umm. 0/0 is undefined, because it could be any value. $$\forall x, 0\cdot x=0$$, so divide both sides by zero to get $$\forall x, {0 \over 0} = x$$.
But in the case of 1/0, not any value will do. For any finite $$x$$, $$0\cdot x = 0 < 1$$. So 1/0 has to be larger than any finite x. Hence, ∞.
Your high school math teachers don't want you to think that you can embed 1/0 in a larger expression, e.g., $$\sqrt{1/0}$$, so they lump it in with undefined. But if you study numerical analysis, you'll learn that sometimes you can embed an infinite value in a larger expression and get a meaningful result. But you're allowed to do that only after you go to grad school.
Yeah: There's a bug in your custom block. :~P
But the limit from below and the limit from above are different.
No, I saw it clicking on the primitive. I think my eyes were playing tricks on me, though.
Yes, that's true. But just as by convention $$\sqrt x$$ means the positive square root, by convention 1/0 is +∞ and -1/0 is −∞.
Edit: No, I was right the first time:
I made a shorter, non-table version, as suggested by @bh.
|
2022-06-30 15:57:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6931548118591309, "perplexity": 594.0120946032093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103850139.45/warc/CC-MAIN-20220630153307-20220630183307-00047.warc.gz"}
|
https://blog.wikimedia.org/2013/01/03/wikimedia-research-newsletter-december-2012/
|
Vol: 2 • Issue: 12 • December 2012
Wikipedia and Sandy Hook; SOPA blackout reexamined
With contributions by: Daniel Mietchen, Piotrus, Junkie.dolphin, Taha Yasseri, Benjamin Mako Hill, Aaron Shaw, Tbayer, DarTar and Ragesoss
### How Wikipedia deals with a mass shooting
Northeastern University researcher Brian Keegan analyzed the gathering of hundreds of Wikipedians to cover the Sandy Hook Elementary School shooting in the immediate aftermath of the tragedy. The findings are reported in a detailed blog post that was later republished by the Nieman Journalism Lab.[1] Keegan observes that the Sandy Hook shooting article reached a length of 50Kb within 24 hours of its creation, making it the fastest growing article by length in the first day among recent articles covering mass shootings on the English-language Wikipedia. The analysis compares the Sandy Hook page with six similar articles from a list of 43 articles on shooting sprees in the US since 2007. Among the analyses described in the study, of particular interest is the dynamics of dedicated vs occasional contributors as the article reaches maturity: while in the first few hours contributions are evenly distributed with a majority of single-edit editors, after hour 3 or 4 a number of dedicated editors show up and “begin to take a vested interest in the article, which is manifest in the rapid centralization of the article”. A plot of inter-edit time also shows the sustained frequency of revisions that these articles display days after their creation, with Sandy Hook averaging at about 1 edit/minute around 24 hours since its first revision. The notebook and social network data produced by the author for the analysis are available on his website. The Nieman Journalism Lab previously covered the role that Wikipedia is playing as a platform for collaborative journalism, and why its format outperforms Wikinews with an interview of Andrew Lih published in 2010.[2] The early revision history of the Sandy Hook shooting article was also covered in a blog post by Oxford Internet Institute fellow Taha Yasseri, however with a focus on the coverage in different Wikipedia language editions.[3]
### Network positions and contributions to online public goods: the case of the Chinese Wikipedia
A graph with nodes color-coded by betweenness centrality (from red=0 to blue=max).
In a forthcoming paper in the Journal of Management Information Systems (presented earlier at HICSS ’12[4]), Xiaoquan (Michael) Zhang and Chong (Alex) Wang use a natural experiment to demonstrate that changes to the position of individuals within the editor network of a wiki modify their editing behavior. The data for this study came from the Chinese Wikipedia. In October 2005, the Chinese government suddenly blocked access to the Chinese Wikipedia from mainland China, creating an unanticipated decline in the editor population. As a result, the remaining editors found themselves in a new network structure and, the authors claim, any changes in editor behavior that ensued are likely effects of this discontinuous “shock” to the network. The paper defines each editor as a node (vertex) in the network and a tie (edge) between two editors is created whenever the editors edit the same page in the wiki. They then examine how changes to three aspects of individual editors’ relative connectedness (centrality) to other editors within the network altered their subsequent patterns of contribution.
The main finding is that changes in the three kinds of editors’ connectedness within the network result in differential changes to their editing behavior. First, an increase in the number of direct connections between one editor and the rest of the network (degree centrality) resulted in fewer edits by that editor, and more work on articles they created. Second, an increase in the overall proximity of an editor to the other members of the network (closeness centrality) resulted in fewer edits and less work on articles they created. Third, an increase in the extent to which an editor connected otherwise isolated groups in the network (betweenness centrality) resulted in more edits and more work by that editor on articles they created. Overall, these results imply that alterations to the network structure of a wiki can change both the quantity and quality of editor contributions. The researchers argue that their findings confirm the predictions of both network game theory and role theory; and that future research should try to analyze the character of the network ties created within platforms for large-scale online collaboration, to better understand how changes to network structure may alter collaborative practices and public goods creation.
### Quality of pharmaceutical articles in the Spanish Wikipedia
Ibuprofen, one of the World Health Organisation‘s “essential drugs”, a topic covered in detail by the Spanish-language Wikipedia.
In an online early version of an upcoming article in Atención Primaria,[5] researchers at the Miguel Hernández University of Elche and the University of Alicante have benchmarked articles on pharmaceutical drugs in the Spanish Wikipedia against information available in a pharmaceutical database, Vademécum.[6] A subset of the Vademécum corpus of 3,595 drugs was created using simple random sampling without replacement, consisting of 386 drugs. Of these, 171 (44%) had entries on the Spanish Wikipedia, which were then scrutinized along several dimensions in May 2012. Usage of the drug was correctly indicated in 155 (91%) of these articles, dosage in 26 (15%), and side-effects in 64 (37%), with only 15 articles (9%) scoring well in all of these dimensions. The researchers conclude that, while Wikipedia has a high potential to help with the dissemination of pharmaceutical knowledge, the Spanish-language edition does not currently live up to this potential. As a possible solution, they suggest the pharmaceutical community more actively participate in editing Wikipedia. The list of the drugs involved has not been made public, since a similar study is currently underway whose results may be distorted by targeted intervention. The authors have signalled to this research report their intention to make the list available after this second study is complete.
### Wikipedia editing patterns are consistent with a non-finite state model of computation
A paper posted to ArXiv[7] by SFI‘s Omidyar fellow Simon DeDeo presents evidence for non-finite state computation in a human social system using data from Wikipedia edit histories. Finite state-systems are the basis for the study of formal languages in computer science and linguistics, and many real-world complex phenomena in biology and the social sciences are also studied empirically by assuming the existence of underlying finite-state processes, for the analysis of which powerful probabilistic methods have been devised. However, the question of whether the description of a system truly entails a finite or a non-finite, unbounded number of states, is an open one. This is significant from a functionalist point of view: can we classify a system by its computational properties, and can these properties help us better understand how the system works regardless of its material details?
The paper’s contribution lies in its proof of a probabilistic generalization of the pumping lemma, a device used in theoretical computer science as a necessary condition for a language to be described by only a finite number of states. The lemma is applied to the edit histories of a number of the most frequently edited articles in the English Wikipedia, after being properly transformed into coarse-grain sequences of “cooperative” or “non-cooperative (reversion) edits (reverts being identified by means of their SHA1 field). A Bayesian argument is applied to show that the lemma cannot hold for a majority of sequences, thus showing that Wikipedia’s collaborative editing system as a whole cannot be described by any aggregation of finite-state systems. The author discusses the implications of this finding for a more grounded study of Wikipedia’s editing model, and for the identification of detailed computational models of other social and biological systems.
### Wikipedia as our collective memory
A protester on Tahrir Square during the 2011 Egyptian revolution.
Michela Ferron, a member of the SoNet (Social Networking) research group at the Bruno Kessler Foundation in Trento, Italy submitted her PhD thesis[8] in December 2012. She examined the idea of viewing Wikipedia as a venue for collective memory and the language indicators of the dynamic process of memory formation in response to “traumatic” events. Parts of the thesis have already been published in journals and conference proceedings, such as WikiSym 2011 and 2012 (cf. presentation slides).
A full chapter is dedicated to the background on the concept of collective memory and its appearance in the digital world. The thesis continues with an analysis of “anniversary edits”, showing a significant increase in editorial activities on articles related to traumatic events during the anniversary period compared to a large random sample of “other” articles. More detailed linguistic indicators are introduced in the next chapter. It is statistically shown that the terms related to affective processes, negative emotions, and cognitive and social processes occur more often in articles on traumatic events; “Specifically, the relative number of words expressing anxiety (e.g., “worried”), anger (e.g., “hate”) and sadness (e.g., “cry”) was significantly higher in articles about traumatic events”.
In the next step, Ferron tried to distinguish between human-made and natural disasters. It has been observed that “human-made traumatic events were characterized by language referring to anger and anxiety, while the collective representation of natural disasters expressed more sadness”. Finally, a detailed case study of the talk pages of articles on the 7 July 2005 London bombings and the 2011 Egyptian revolution was carried out, and language indicators, especially those related to emotions, were investigated in a dynamic framework and compared for both examples.
### SOPA blackout decision analyzed
A First Monday article[9] reviews several aspects of the Wikipedia participation in the 18 January 2012 protests against SOPA and PIPA legislation in the US. The paper focuses on the question of legitimacy, looking at how the Wikipedia community arrived at the decision to participate in those protests.
The English Wikipedia landing page, symbolically its only page during the blackout on January 18, 2012
The paper provides an interesting discussion of legitimacy in Wikipedia’s governance, and discusses the legitimacy of the decision to participate in the protests. The author notes that the initiative was given a major boost by Jimmy Wales’ charismatic authority, as Wales posted a straw poll about the issue on his talk page on December 10, 2011, as while the issue was discussed by the community beforehand (for example, in mid-November at the Village Pump), those discussions attracted much less attention. It is hard to say whether the protest would have happened without Jimbo’s push for more discussion, as it veers towards “what if” territory; as things happened, it is true that Jimbo’s actions began a landslide that led to the protests. However, this reviewer is more puzzled at the claim made in the introduction to the article that the discussion involved a “massive involvement of the Wikimedia Foundation staff”. While several WMF staffers were active in the discussions in their official capacity, and while the WMF did issue some official statements about the ongoing discussion, the paper certainly does not provide any evidence to justify the word “massive”.
The paper subsequently notes that the WMF focused on providing information and gently steering the discussion, without any coercion; this hardly justifies the claim of “massive involvement”. At the very least, a clear explanation is necessary of precisely how many WMF staffers participated in the discussion before such a grandiose adjective as “massive” is used. It is true that the WMF staffers helped push the discussion forward, but this reviewer believes that the paper does not sufficiently justify the stress it puts on their participation, and thus may overestimate their influence.
The third part of the paper discusses how the arguments about legitimacy or the lack of it framed the subsequent discourse of the voters. The author notes that after initial period of discussing SOPA itself, the discussion of whether it was legitimate or not for Wikipedia to become involved in the protest took over, with a major justification for it emerging in the form of an argument that it was legitimate for Wikipedia to protest against SOPA as SOPA threatened Wikipedia itself. While this is an interesting claim, unfortunately, other than citing one single comment, no other qualitative or quantitative data are provided; nor is the methodology discussed. We are not told how many individuals voted, how many commented on legitimacy or illegitimacy, how many felt that Wikipedia is threatened; we do not know how the author classified comments supporting any of the viewpoints, or the shifts in the discussion … this list could unfortunately go on. In one specific example drawn from the conclusion, the author writes that “The main factor that shaped the multi-phased process was the will to have the community accept the final decision as legitimate, and avoid backlash. This factor especially influenced those who are suspected of relying on traditional means of legitimacy such as charisma or professionalism.” At the same time, we are provided with no number, no percentage, and certainly no correlation to back up this claim. Without a clear methodology or distinct data it is hard to verify the author’s claims and conclusions.
The introduction also notes that “the mass effort of planning an effective political action was not something “anyone [could] edit”” and “the debate preceding the blackout did not follow Wikipedia’s open and anarchic decision-making system”; unfortunately this reviewer finds no justification for those rather strong claims anywhere else in the article.
Overall, this is an interesting paper about legitimacy in Wikipedia, but it seems to overreach when it tries to draw conclusions from the data that is simply not presented to the reader. It suffers from a failure to explain the research’s methodology, making verification of the claims made very hard. Due to the lack of hard data, most conclusions are unfortunately rendered dubious, and the paper has a tendency to make strong claims that are not backed up by data or even developed later on.
### Bots and collective intelligence explored in dissertation
Rats (blue trace) interacting with a rat-sized robot (red) controlled by a human who in turn perceives the rat’s movements through those of a human-sized avatar in a virtual reality environment.[10] The video was uploaded to Wikimedia Commons by the Open Access Media Importer Bot.
In his Communication and Society PhD dissertation,[11] Randall M. Livingstone of the University of Oregon explores the relationship between the social and technical structures of Wikipedia, with a particular focus on bots and bot operators. After a fairly broad literature review (which summarizes the basic approaches to Wikipedia studies from new media theory, social network analysis, science and technology studies, and political economy), Livingstone gives a concise history of the technical development of Wikipedia, from UseModWiki to MediaWiki, and from a single server to hundreds.
The most interesting chapters for Wikipedians will be V – Wikipedia as a Sociotechnical System – and VI – Wikipedia as Collective Intelligence. Chapter 5 looks at the ways the editing community and the evolution of software (both MediaWiki and the semi-automated tools and bots that interact with editors and articles) “construct” each other. Based on 45 interviews with bot operators and WMF staff, this chapter gives an interesting and varied picture of how Wikipedia works as a sociotechnical system. It will in part be a familiar account to the more tech-minded Wikipedians, but offers an accessible overview of bots and their place in the ecosystem to editors who normally steer clear of bots and software development. Chapter 6 looks at theories of intelligence and the concept of collective intelligence, arguing that Wikipedia exhibits (at least to some extent) the key traits of stigmergy, distributed cognition, and emergence.
### Briefly
• “History’s most influential people” according to Wikipedia: While more in the realm of popular science, Wired UK, among others, published[12] an infographic attributed to César Hidalgo, head of the MIT Media Lab‘s Macro Connections group, visualizing “History’s most influential people”. Unfortunately, beyond noting that rankings “are based on parameters such as the number of language editions in which that person has a page, and the number of people known to speak those languages” the small article does not provide any methodology, nor does it provide much discussion. Until a more extensive description is released, the current graph, while pretty, is little more than a trivia piece.
• Teachers say 75% of teens use Wikipedia (or online encyclopedias) for research assignments: In a Pew Research survey among more than 2000 US middle and high school teachers[13] 75% said that their teenage students use “Wikipedia or other online encyclopedia” in research assignments, making online encyclopedias the second most popular source for students behind search engines such as Google. This number was lower (68%) “among teachers of the lowest income students (those living below the poverty line)” and higher (80%) for those teaching “mostly upper and upper middle income” students, and it also varied by subject (between 69% for teachers of English and 82% for science teachers). The survey report cautions that the sample “skews towards ‘cutting edge’ educators who teach some of the most academically successful students in the country”.
The Google matrix of Wikipedia entries, from an earlier paper by the same authors of this study.[14]
• “Wikipedia communities” as eigenvectors of its Google matrix: An ArXiv preprint[14] studies the “Spectral properties of Google matrix of Wikipedia and other networks”. This Google matrix consists of entries for each pair of pages (for the English Wikipedia, including non-mainspace pages like portals), roughly speaking modelling the behavior of a surfer who goes from one page to any of those that it links to, with equal probability (or, with probability $1-alpha$, jumps to a random page; the damping parameter $alpha$ is set to around 0.85 in the Google search engine). The PageRank appears as the eigenvector of this matrix for the eigenvalue $lambda = 1$. The paper studies the spectrum (eigenvalues) and eigenvectors apart from this special case, interpreting them as certain topic areas: “the eigenvectors of the Google matrix of Wikipedia clearly identify certain communities which are relatively weakly connected with the Wikipedia core when the modulus of corresponding eigenvalue is close to unity. For moderate values of $left|lambdaright|$ we still have well defined communities which however have stronger links with some popular articles (e.g. countries) that leads to a more rapid decay of such eigenmodes.”
• Serial singularities: developing of a network organization by organizing events: In a paper published in the Schmalenbach Business Review,[15] Leonhard Dobusch and Gordon Müller-Seitz from the Freie Universität Berlin suggest that research on organized events has tended to treat those events as isolated and singular events. Using interviews and other data on Wikimania, chapter meetings, and local meet-ups over several years, the authors challenge this idea and show how many different events on different scales and scopes – each with a distinct character – can interact and reinforce each other to help drive the nature of a large distributed organization like Wikimedia.
• The web mirrors value in the real world: comparing a firm’s valuation with its web network position: In a MIT Sloan Working Paper,[16] Qiaoyun Yun and Peter Gloor create a measure of US and Chinese firms “social network” position by looking at how those firms are linked to from a variety of web sources – prominently Wikipedia. They find a positive correlation between betweenness centrality of a firm in a social network constructed from links online and its innovation capability and financial performance. They find that Wikipedia only predicts a firm’s performance in the US.
• Teahouse analyzed: Jonathan Morgan, Sarah Stierch, Siko Bouterse and Heather Walls, from the Wikimedia Foundation Teahouse team, report on the impact of the initiative on 1,098 new Wikipedia contributors who joined the Teahouse between February and October 2012, in a paper to be presented at CSCW ’13.[17] The study reports that participants in the project “make more edits overall, and edit longer”, “make more edits, to more articles” and “participate more in discussion spaces” compared to non-visitors. This paper is part of a research track entirely dedicated to Wikipedia Supported Collaborative Work, featuring three other studies.
Slides from the recently published Article Feedback research report.[18]
• Article feedback: The Wikimedia Foundation published an update about the Article feedback tool on the English Wikipedia, providing statistics about the usage of the feature, and about the moderation activities for the feedback provided.[18]
• New review of Good Faith Collaboration: The reviewer locates[19] Joseph Reagle’s 2010 book about Wikipedia (free online version) as following in a wider context of research on Wikipedia: “The reliability of the encyclopaedia’s content.. and quantitative analysis of large-scale public datasets formed the predominant approach in early empirical research on Wikipedia … This was followed by a more social approach and the adopting of qualitative methods. In this switch to social norms and away from an ethnographic approach, Reagle’s book is a main reference, particularly in terms of its cultural and historical specificity.” Overall, the review finds that “The book is well documented, with an elaborative but accessible writing style, which is at times provocative. It results in a form of rich composition of eight pieces (chapters) of Wikipedia ‘puzzle’, even if some readers might miss a more explicit continuum linking the lines together. Finally, the book is a primary reference point for researchers aiming to study Wikipedia, especially for those unfamiliar with it.”
• Measuring the impact of Wikipedia for GLAM institutions: Ed Baker, software developer at the Natural History Museum in London, has started a series of blog posts on “the impact and use of Wikipedia by organisations”.[20] In the first post, he looked at how the scope of pages linking to the NHM’s website fits with the overall scope of the institution when pages are ranked either by number of page views or by number of links to the NHM. The latter approach could help identify opportunities for a collaboration between GLAM institutions and the Wikimedia communities.
### Notes
1. Keegan, B. (2012). How does Wikipedia deal with a mass shooting? A frenzied start gives way to a few core editors. Nieman Journalism Lab HTML
2. Seward, Z.M. (2012) Why Wikipedia beats Wikinews as a collaborative journalism project. Nieman Journalism Lab HTML
3. Yasseri, T. (2012) The coverage of a tragedy. Stories for Sunday morning HTML
4. Wang, C. (Alex), & Zhang, X. (Michael). (2012). Network Centrality and Contributions to Online Public Good–The Case of Chinese Wikipedia. 2012 45th Hawaii International Conference on System Sciences (pp. 4515–4524). IEEE. DOI
5. López Marcos, P.; Sanz-Valero, J. (2012). “Presencia y adecuación de los principios activos farmacológicos en la edición española de la Wikipedia”. Atención Primaria. DOI.
6. Vademécum. UBM Medica Spain S.A.. Archived from the original on 30 December 2012. Retrieved on 30 December 2012.
7. DeDeo, S. (2012). Evidence for Non-Finite-State Computation in a Human Social System. ArXiV. PDF
8. Ferron, M. (2012, December 7). Collective Memories in Wikipedia. PhD Thesis, University of Trento. PDF
9. Oz, A. (2012). Legitimacy and efficacy: The blackout of Wikipedia. First Monday, 17(12). HTML
10. Normand, J. M.; Sanchez-Vives, M. V.; Waechter, C.; Giannopoulos, E.; Grosswindhager, B.; Spanlang, B.; Guger, C.; Klinker, G. et al. (2012). De Polavieja, Gonzalo G. ed. “Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale”. PLoS ONE 7 (10): e48331. DOI. PMC 3485138. PMID 23118987.
11. Randall M. Livingstone: Network of Knowledge: Wikipedia as a Sociotechnical System of Intelligence. PDF
12. Medeiros, J. (2012). Infographic: History’s most influential people, ranked by Wikipedia reach. Wired UK. HTML
13. Purcell, K., Rainie, L., Heaps, A., Buchanan, J., Friedrich, L., Jacklin, A., Chen, C., Zickuhr, K. (2012): How Teens Do Research in the Digital World. Pew Internet HTML
14. a b Ermann, L., Frahm, K. M., & Shepelyansky, D. L. (2012). Spectral properties of Google matrix of Wikipedia and other networks. ArXiv PDF
15. Dobusch, L., & Müller-Seitz, G. (2012). Serial Singularities: Developing a Network Organization by Organizing Events. Schmalenbach Business Review, 64, 204–229. HTML
16. Yun, Q., & Gloor, P. A. (2012). The Web Mirrors Value in the Real World – Comparing a Firm’s Valuation with Its Web Network Position. SSRN Electronic Journal. DOI
17. Morgan, J. T., Bouterse, S., Stierch, S., & Walls, H. (2013). Tea & Sympathy: Crafting Positive New User Experiences on Wikipedia. CSCW ’13. PDF
18. a b Florin, F., Taraborelli, D., Keyes, O. (2012). Article Feedback: New research and next steps. Wikimedia blog HTML
19. Morell, M. F. (2013). Good Faith Collaboration: The Culture of Wikipedia. Information, Communication & Society, 16(1), 146–147. DOI
20. Baker, E. (2012). Measuring the Impact of Wikipedia for organisations (Part 1), Ed’s blog, HTML
|
2016-09-28 08:49:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3831470012664795, "perplexity": 3170.6157021019044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661327.59/warc/CC-MAIN-20160924173741-00295-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://gmatclub.com/forum/at-a-certain-university-75-of-students-have-driver-s-licences-and-254445.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 23 Jan 2019, 05:40
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in January
PrevNext
SuMoTuWeThFrSa
303112345
6789101112
13141516171819
20212223242526
272829303112
Open Detailed Calendar
• Key Strategies to Master GMAT SC
January 26, 2019
January 26, 2019
07:00 AM PST
09:00 AM PST
Attend this webinar to learn how to leverage Meaning and Logic to solve the most challenging Sentence Correction Questions.
• Free GMAT Number Properties Webinar
January 27, 2019
January 27, 2019
07:00 AM PST
09:00 AM PST
Attend this webinar to learn a structured approach to solve 700+ Number Properties question in less than 2 minutes.
At a certain university, 75% of students have driver’s licences and 6
Author Message
TAGS:
Hide Tags
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 6832
GMAT 1: 760 Q51 V42
GPA: 3.82
At a certain university, 75% of students have driver’s licences and 6 [#permalink]
Show Tags
28 Nov 2017, 23:25
1
1
00:00
Difficulty:
45% (medium)
Question Stats:
60% (01:30) correct 40% (02:06) wrong based on 67 sessions
HideShow timer Statistics
[GMAT math practice question]
At a certain university, 75% of students have driver’s licences and 60% of the students with driver’s licences are foreign students. What percentage of the total students are foreign students without a driver’s licence?
1) There are 5,000 students at the university.
2) 55% of students at the university are foreigners.
_________________
MathRevolution: Finish GMAT Quant Section with 10 minutes to spare
The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy.
"Only $149 for 3 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" Manager Status: Quant Expert Q51 Joined: 02 Aug 2014 Posts: 106 Re: At a certain university, 75% of students have driver’s licences and 6 [#permalink] Show Tags 29 Nov 2017, 02:02 1) There are 5,000 students at the university. Using this information we can know the number of students that have driver lisence and foreigners but we know nothing about people who don't have drivers lisence, they could be all foreigners and half of them could be foreigners, so no enough information. 2) 55% of students at the university are foreigners. 75% of students have driver’s licences and 60% of the students with driver’s licences are foreign students. So 0.75*0.6= 0.45 so 45% of students in the university are foreigners with a driver's lisence. 55% of students at the university are foreigners. so 10% is the percentage of foreigners without a driver’s licence. Sufficient Answer B _________________ Math Revolution GMAT Instructor Joined: 16 Aug 2015 Posts: 6832 GMAT 1: 760 Q51 V42 GPA: 3.82 Re: At a certain university, 75% of students have driver’s licences and 6 [#permalink] Show Tags 01 Dec 2017, 00:08 => Forget conventional ways of solving math questions. For DS problems, the VA (Variable Approach) method is the quickest and easiest way to find the answer without actually solving the problem. Remember that equal numbers of variables and independent equations ensure a solution. The first step of the VA (Variable Approach) method is to modify the original condition and the question, and then recheck the question. Attachment: A.png [ 2.57 KiB | Viewed 498 times ] $$45s + 30s + b + d = 100s.$$ $$b + d = 25s.$$ The question asks for the value of $$\frac{b}{100s}$$. To solve problems involving 2x2 matrices and percentages, we need 3 percentages. Since we have 2 percentages, 75% and 60%, one more percentage is required. Condition 1) does not provide a percentage, while condition 2) gives us a percentage. So, the answer is most likely to be B. Condition 1) $$100s = 5,000.$$ $$s = 50.$$ This is not sufficient. Condition 2) $$45s + b = 55s$$ $$b = 10s$$ Thus $$(\frac{b}{100s})*100$$ = $$(\frac{10s}{100s})*100$$ = $$10 %.$$ This is sufficient. Therefore, the answer is B. Answer: B _________________ MathRevolution: Finish GMAT Quant Section with 10 minutes to spare The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy. "Only$149 for 3 month Online Course"
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Re: At a certain university, 75% of students have driver’s licences and 6 &nbs [#permalink] 01 Dec 2017, 00:08
Display posts from previous: Sort by
|
2019-01-23 13:40:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3444855809211731, "perplexity": 10108.488435985802}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584332824.92/warc/CC-MAIN-20190123130602-20190123152602-00067.warc.gz"}
|
http://usaco.org/index.php?page=viewproblem2&cpid=1165
|
## Problem 2. Paired Up
Contest has ended.
Log in to allow submissions in analysis mode
There are a total of $N$ ($1\le N\le 5000$) cows on the number line, each of which is a Holstein or a Guernsey. The breed of the $i$-th cow is given by $b_i\in \{H,G\}$, the location of the $i$-th cow is given by $x_i$ ($0 \leq x_i \leq 10^9$), and the weight of the $i$-th cow is given by $y_i$ ($1 \leq y_i \leq 10^5$).
At Farmer John's signal, some of the cows will form pairs such that
• Every pair consists of a Holstein $h$ and a Guernsey $g$ whose locations are within $K$ of each other ($1\le K\le 10^9$); that is, $|x_h-x_g|\le K$.
• Every cow is either part of a single pair or not part of a pair.
• The pairing is maximal; that is, no two unpaired cows can form a pair.
It's up to you to determine the range of possible sums of weights of the unpaired cows. Specifically,
• If $T=1$, compute the minimum possible sum of weights of the unpaired cows.
• If $T=2$, compute the maximum possible sum of weights of the unpaired cows.
#### INPUT FORMAT (input arrives from the terminal / stdin):
The first input line contains $T$, $N$, and $K$.
Following this are $N$ lines, the $i$-th of which contains $b_i,x_i,y_i$. It is guaranteed that $0\le x_1< x_2< \cdots< x_N\le 10^9$.
#### OUTPUT FORMAT (print output to the terminal / stdout):
The minimum or maximum possible sum of weights of the unpaired cows.
#### SAMPLE INPUT:
2 5 4
G 1 1
H 3 4
G 4 2
H 6 6
H 8 9
#### SAMPLE OUTPUT:
16
Cows $2$ and $3$ can pair up because they are at distance $1$, which is at most $K = 4$. This pairing is maximal, because cow $1$, the only remaining Guernsey, is at distance $5$ from cow $4$ and distance $7$ from cow $5$, which are more than $K = 4$. The sum of weights of unpaired cows is $1 + 6 + 9 = 16$.
#### SAMPLE INPUT:
1 5 4
G 1 1
H 3 4
G 4 2
H 6 6
H 8 9
#### SAMPLE OUTPUT:
6
Cows $1$ and $2$ can pair up because they are at distance $2 \leq K = 4$, and cows $3$ and $5$ can pair up because they are at distance $4 \leq K = 4$. This pairing is maximal because only cow $4$ remains. The sum of weights of unpaired cows is the weight of the only unpaired cow, which is simply $6$.
#### SAMPLE INPUT:
2 10 76
H 1 18
H 18 465
H 25 278
H 30 291
H 36 202
G 45 96
G 60 375
G 93 941
G 96 870
G 98 540
#### SAMPLE OUTPUT:
1893
The answer to this example is $18+465+870+540=1893$.
#### SCORING:
• Test cases 4-7 satisfy $T=1$.
• Test cases 8-14 satisfy $T=2$ and $N\le 300$.
• Test cases 15-22 satisfy $T=2$.
**Note: the memory limit for this problem is 512MB, twice the default.**
Problem credits: Benjamin Qi
Contest has ended. No further submissions allowed.
|
2022-05-25 19:31:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535243511199951, "perplexity": 1632.858110622686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00438.warc.gz"}
|
https://www.hackerearth.com/practice/algorithms/graphs/graph-representation/practice-problems/algorithm/mancunian-and-liverbird-go-bar-hopping-2/
|
Mancunian And Liverbird Go Bar Hopping
Tag(s):
## Algorithms, Easy-Medium, Graph Theory, Implementation
Problem
Editorial
Analytics
It's April Fools' Day 2017 and to celebrate, Mancunian and his arch-enemy Liverbird decide to put aside their differences and go have a drink together.
There are N bars (numbered from 1 to N) located in a straight line. The ith is located at point i on the line. Apart from this, there are N-1 roads, the ith of which connects the ith and i+1th bars. Note that the roads are unidirectional. You are given the initial orientation of each road.
On celebratory occasions such as these, there are a lot of people on the streets. Hence, the police have to take special measures to combat traffic congestion. Periodically, they issue a directive to reverse the direction of all the roads. What this means is that, if there was a road directed from bar numbered i to bar i+1, after the update, it will be directed from i+1 to i.
You are given a set of operations. Each operation can be either an update or a query. Update is the one described above. In each query, you are given the location of the bar the two partyers are located at currently. You have to count the number of bars (including the current location) that are reachable from their current location.
Input:
The first line contains a single integer N denoting the number of bars on the road.
The second line contains N-1 integers denoting the directions of the roads. The ith integer is 1 if the ith road is directed from i to i+1 and 0 if directed from i+1 to i.
The third line contains a single integer Q denoting the number of operations.
Each of the next Q lines is either an update or a query.
An update is given by a single character U.
A query is given in the form of the character Q followed by an integer S denoting the current location of the pair.
Output:
For each query, output a single integer which is the answer to the corresponding query.
Constraints:
1 <= N, Q <= 106
1 <= S <= N
SAMPLE INPUT
4
1 1 0
3
Q 1
U
Q 2
SAMPLE OUTPUT
3
2
Explanation
There are 4 bars on the road, located at 1, 2, 3 and 4 respectively.
The initial configuration of the roads is: 1 -> 2 -> 3 <- 4
You can reach bars 1, 2 and 3 from location 1.
After the update, the new configuration is: 1 <- 2 <- 3 -> 4
Now, you can reach bars 1 and 2 from location 2.
Time Limit: 2.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
Marking Scheme: Marks are awarded when all the testcases pass.
Allowed Languages: Bash, C, C++, C++14, Clojure, C#, D, Erlang, F#, Go, Groovy, Haskell, Java, Java 8, JavaScript(Rhino), JavaScript(Node.js), TypeScript, Julia, Kotlin, Lisp, Lisp (SBCL), Lua, Objective-C, OCaml, Octave, Pascal, Perl, PHP, Python, Python 3, R(RScript), Racket, Ruby, Rust, Scala, Swift, Swift-4.1, Visual Basic
## CODE EDITOR
Initializing Code Editor...
## This Problem was Asked in
Challenge Name
April Easy '17
OTHER PROBLEMS OF THIS CHALLENGE
• Data Structures > Stacks
• Math > Basic Math
• Math > Counting
• Algorithms > Graphs
• Math > Number Theory
|
2019-01-16 16:53:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2608232796192169, "perplexity": 2845.6695358304382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657555.87/warc/CC-MAIN-20190116154927-20190116180927-00209.warc.gz"}
|
https://brilliant.org/problems/trigonometry-18-2/
|
# Trigonometry! #59
Geometry Level 3
A round balloon of radius $$\frac {r}{2}$$ subtends an angle $$\alpha$$ at the eye of the observer while the angle of elevation of its centre is $$2\beta$$, then the height of the balloon is equal to which of the options?
This problem is part of the set Trigonometry.
×
|
2016-10-23 14:32:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8474665880203247, "perplexity": 311.61530105097773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719273.38/warc/CC-MAIN-20161020183839-00158-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.cleanslateeducation.co.uk/t/question-7/1833
|
# Question 7
Note that a map T is linear iff both:
1. \alpha T(v) = T(\alpha v) for all v \in V and \alpha \in \mathbb F
2. T(u + v) = T(u) + T(v) for all u, v \in V
So if T is linear we have:
T(\alpha v_1 + \beta v_2) = T(\alpha v_1) + T(\beta v_2)
by property 1, and:
T(\alpha v_1) + T(\beta v_2) = \alpha T(v_1) + \beta T(v_2)
by property 2.
For the converse, suppose T(\alpha v_1 + \beta v_2) = \alpha T(v_1) + \beta T(v_2) for all v_1,v_2 and \alpha, \beta \in \mathbb F. We have T(\alpha v_1) = \alpha T(v_1) for all \alpha \in \mathbb F and v_1 \in V by setting \beta = 0. By setting \alpha = \beta = 1 we have T(v_1 + v_2) = T(v_1) + T(v_2) for all v_1, v_2 \in V. So T is linear.
|
2022-07-05 09:37:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000097751617432, "perplexity": 3004.6599181352826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00439.warc.gz"}
|
https://laurentlessard.com/bookproofs/tag/linearity-of-expectation/
|
## Cutting a ruler into pieces
This week’s Riddler Classic is a paradoxical question about cutting a ruler into smaller pieces.
Recently, there was an issue with the production of foot-long rulers. It seems that each ruler was accidentally sliced at three random points along the ruler, resulting in four pieces. Looking on the bright side, that means there are now four times as many rulers — they just happen to have different lengths. On average, how long are the pieces that contain the 6-inch mark?
With four cuts, each piece will be on average 3 inches long, but that can’t be the answer, can it?
Here is my solution:
[Show Solution]
## Circular conundrum
This week’s Riddler problem is short and sweet:
If N points are generated at random places on the perimeter of a circle, what is the probability that you can pick a diameter such that all of those points are on only one side of the newly halved circle?
Here is my solution:
[Show Solution]
## Beer pong
This interesting twist on the game of Beer Pong appeared on the Riddler blog. Here it goes:
The balls are numbered 1 through N. There is also a group of N cups, labeled 1 through N, each of which can hold an unlimited number of ping-pong balls. The game is played in rounds. A round is composed of two phases: throwing and pruning.
During the throwing phase, the player takes balls randomly, one at a time, from the infinite supply and tosses them at the cups. The throwing phase is over when every cup contains at least one ping-pong ball. Next comes the pruning phase. During this phase the player goes through all the balls in each cup and removes any ball whose number does not match the containing cup. Every ball drawn has a uniformly random number, every ball lands in a uniformly random cup, and every throw lands in some cup. The game is over when, after a round is completed, there are no empty cups.
How many rounds would you expect to need to play to finish this game? How many balls would you expect to need to draw and throw to finish this game?
Here is my solution:
[Show Solution]
## Archipelago
A graph theory problem from the Riddler blog. Here it goes:
You live on the volcanic archipelago of Riddleria. Your archipelago is connected via a network of bridges, forming one unified community. In an effort to conserve resources, the ancient Riddlerians who built this network opted not to build bridges between any two islands that were already connected to the community otherwise. Hence, there is exactly one path from any one island to any other island.
Each island contains exactly one volcano. You know that if a volcano erupts, the subterranean pressure change will be so great that the volcano will collapse in on itself, causing its island — and any connected bridges — to crumble into the ocean. Remarkably, other islands will be spared unless their own volcanoes erupt. But if enough bridges go down, your once-unified archipelagic community could split into several smaller, disjointed communities.
If there were N islands in the archipelago originally and each volcano erupts independently with probability p, how many disjointed communities can you expect to find when you return? What value of p maximizes this number?
Here is my solution:
[Show Solution]
## Card collection completion
This Riddler puzzle is a classic probability problem: how long can one expect to wait until the entire set of cards is collected?
My son recently started collecting Riddler League football cards and informed me that he planned on acquiring every card in the set. It made me wonder, naturally, how much of his allowance he would have to spend in order to achieve his goal. His favorite set of cards is Riddler Silver; a set consisting of 100 cards, numbered 1 to 100. The cards are only sold in packs containing 10 random cards, without duplicates, with every card number having an equal chance of being in a pack.
Each pack can be purchased for \$1. If his allowance is \$10 a week, how long would we expect it to take before he has the entire set?
What if he decides to collect the more expansive Riddler Gold set, which has 300 different cards?
Here is my solution:
[Show Solution]
## Hoop hop showdown
This Riddler puzzle is a shout-out to this YouTube video of a game called “Hoop hop showdown”.
Here is an idealized list of its rules:
• Kids stand at either end of N hoops.
• At the start of the game, one kid from each end starts hopping at a speed of one hoop per second until they run into each other, either in adjacent hoops or in the same hoop.
• At that point, they play rock-paper-scissors at a rate of one game per second until one of the kids wins.
• The loser goes back to their end of the hoops, a new kid immediately steps up at that end, and the winner and the new player hop until they run into each other.
• This process continues until someone reaches the opposing end. That player’s team wins!
You’ve just been hired as the gym teacher at Riddler Elementary. You’re having a bad day, and you want to make sure the kids stay occupied for the entire class. If you put down eight hoops, how long on average will the game last? How many hoops should you put down if you want the game to last for the entire 30-minute period, on average?
Here is a derivation of how I solved the problem:
[Show Solution]
And if you just want to see the solutions:
[Show Solution]
## Rug quality control
This Riddler puzzle is about rug manufacturing. How likely are we to avoid defects if manufacture the rugs randomly?
A manufacturer, Riddler Rugs™, produces a random-pattern rug by sewing 1-inch-square pieces of fabric together. The final rugs are 100 inches by 100 inches, and the 1-inch pieces come in three colors: midnight green, silver, and white. The machine randomly picks a 1-inch fabric color for each piece of a rug. Because the manufacturer wants the rugs to look random, it rejects any rug that has a 4-by-4 block of squares that are all the same color. (Its customers don’t have a great sense of the law of large numbers, or of large rugs, for that matter.)
What percentage of rugs would we expect Riddler Rugs™ to reject? How many colors should it use in the rug if it wants to manufacture a million rugs without rejecting any of them?
Here is my solution:
[Show Solution]
## Colorful balls puzzle
This Riddler puzzle about an interesting game involving picking colored balls out of a box. How long will the game last?
You play a game with four balls: One ball is red, one is blue, one is green and one is yellow. They are placed in a box. You draw a ball out of the box at random and note its color. Without replacing the first ball, you draw a second ball and then paint it to match the color of the first. Replace both balls, and repeat the process. The game ends when all four balls have become the same color. What is the expected number of turns to finish the game?
Extra credit: What if there are more balls and more colors?
Here is my solution to the first part (four balls):
[Show Solution]
Here is my solution to the general case with $N$ balls:
[Show Solution]
## A supreme court puzzle
This timely Riddler puzzle is about filling supreme court vacancies…
Imagine that U.S. Supreme Court nominees are only confirmed if the same party holds the presidency and the Senate. What is the expected number of vacancies on the bench in the long run?
You can assume the following:
• There are two parties, and each has a 50 percent chance of winning the presidency and a 50 percent chance of winning the Senate in each election.
• The outcomes of Senate elections and presidential elections are independent.
• The length of time for which a justice serves is uniformly distributed between zero and 40 years.
Here is my solution:
[Show Solution]
## The lonesome king
This Riddler puzzle is about a random elimination game. Will someone remain at the end, or will everyone be eliminated?
In the first round, every subject simultaneously chooses a random other subject on the green. (It’s possible, of course, that some subjects will be chosen by more than one other subject.) Everybody chosen is eliminated. In each successive round, the subjects who are still in contention simultaneously choose a random remaining subject, and again everybody chosen is eliminated. If there is eventually exactly one subject remaining at the end of a round, he or she wins and heads straight to the castle for fêting. However, it’s also possible that everybody could be eliminated in the last round, in which case nobody wins and the king remains alone. If the kingdom has a population of 56,000 (not including the king), is it more likely that a prince or princess will be crowned or that nobody will win? How does the answer change for a kingdom of arbitrary size?
Here is my solution:
[Show Solution]
|
2020-09-21 19:15:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3382326364517212, "perplexity": 1179.1150925708039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202007.15/warc/CC-MAIN-20200921175057-20200921205057-00324.warc.gz"}
|
https://aviation.stackexchange.com/questions/35046/what-are-the-mass-flow-rate-and-exhaust-velocity-for-a-cf6-or-ge90-turbofan
|
# What are the mass flow rate and exhaust velocity for a CF6 or GE90 turbofan?
For a typical turbofan jet engine (two examples given in the title), what is the exhaust velocity and mass flow rate of air at sea level and cruising altitude (~ FL350)?
Also, does the specific impulse vary at different altitudes, since the density of air decreases with increases in altitude?
According to The GE90 - An Introduction, the GE90 has a mass flow rate of 1,350 kg/s at take-off and 576 kg/s at cruise (at 10.668 km = FL350). The CF6 has a mass flow rate of 591 kg/s at take-off.
Exhaust velocity isn't generally quoted, perhaps because it only bears a loose relation to the performance characteristics of the engine. I suppose if you wanted you could find the area of the exhaust and use the density of air to find the velocity.
Looking specifically at the GE90, we see that it has take-off SFC (specific fuel consumption) of 7.9 mg/N-s. Using the formula $I_{sp} = 1/(g_o·\text{SFC})$ given in paragraph 4 of the Wikipedia article Specific Impulse we get a specific impulse of 12,285 s. If we use the cruise SFC of 15.6 mg/N-s, we get a specific impulse of 6,536 s. Is this an effect of the lower density of air, or simply of the lower fuel required in the cruise? I don't know.
• The average exhaust velocity is trivial to derive from the thrust, aircraft speed and mass flow rate. You'd need to know the percent of thrust produced by the core flow to derive separate core and bypass exhaust velocities. – Jan Hudec Mar 29 '17 at 18:46
• Specific impulse is lower at altitude, which means more fuel is required for the same thrust. This would likely be effect of the higher airspeed as you need more power for the same $\Delta v$. The lower temperature increases the thermodynamic efficiency, but not enough to offset the reduction due to the inlet velocity. The total consumption is only lower in cruise because you need less power to maintain speed in the less dense air. – Jan Hudec Mar 29 '17 at 18:51
|
2021-01-16 11:38:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7286509275436401, "perplexity": 773.2772452789876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506640.22/warc/CC-MAIN-20210116104719-20210116134719-00396.warc.gz"}
|
http://www.ebooklibrary.org/articles/eng/Kalman_filter
|
#jsDisabledContent { display:none; } My Account | Register | Help
# Kalman filter
Article Id: WHEBN0000180855
Reproduction Date:
Title: Kalman filter Author: World Heritage Encyclopedia Language: English Subject: Collection: Publisher: World Heritage Encyclopedia Publication Date:
### Kalman filter
The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of the estimate. The estimate is updated using a state transition model and measurements. \hat{x}_{k\mid k-1} denotes the estimate of the system's state at time step k before the k-th measurement yk has been taken into account; P_{k\mid k-1} is the corresponding uncertainty.
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common application is for guidance, navigation and control of vehicles, particularly aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics. Kalman filters also are one of the main topics in the field of robotic motion planning and control, and they are sometimes included in trajectory optimization.
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It can run in real time, using only the present input measurements and the previously calculated state and its uncertainty matrix; no additional past information is required.
The Kalman filter does not require any assumption that the errors are Gaussian.[1] However, the filter yields the exact conditional probability estimate in the special case that all errors are Gaussian-distributed.
Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The underlying model is a Bayesian model similar to a hidden Markov model but where the state space of the latent variables is continuous and where all latent and observed variables have Gaussian distributions.
## Contents
• Naming and historical development 1
• Overview of the calculation 2
• Example application 3
• Technical description and context 4
• Underlying dynamic system model 5
• Details 6
• Predict 6.1
• Update 6.2
• Invariants 6.3
• Optimality and performance 6.4
• Example application, technical 7
• Derivations 8
• Deriving the a posteriori estimate covariance matrix 8.1
• Kalman gain derivation 8.2
• Simplification of the a posteriori error covariance formula 8.3
• Sensitivity analysis 9
• Square root form 10
• Relationship to recursive Bayesian estimation 11
• Marginal likelihood 12
• Information filter 13
• Fixed-lag smoother 14
• Fixed-interval smoothers 15
• Rauch–Tung–Striebel 15.1
• Modified Bryson–Frazier smoother 15.2
• Minimum-variance smoother 15.3
• Frequency Weighted Kalman filters 16
• Non-linear filters 17
• Extended Kalman filter 17.1
• Unscented Kalman filter 17.2
• Kalman–Bucy filter 18
• Hybrid Kalman filter 19
• Variants for the recovery of sparse signals 20
• Applications 21
• References 23
## Naming and historical development
Rudolf Emil Kalman, co-inventor and developer of the Kalman filter.
The filter is named after Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele[2][3] and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the University of Southern California contributed to the theory, leading to it often being called the Kalman–Bucy filter. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[4] It was during a visit by Kálmán to the NASA Ames Research Center that he saw the applicability of his ideas to the problem of trajectory estimation for the Apollo program, leading to its incorporation in the Apollo navigation computer. This Kalman filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. It is also used in the guidance and navigation systems of the NASA Space Shuttle and the attitude control and navigation systems of the International Space Station.
This digital filter is sometimes called the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, non-linear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich.[5][6][7][8] In fact, some of the special case linear filter's equations appeared in these papers by Stratonovich that were published before summer 1960, when Kalman met with Stratonovich during a conference in Moscow.
## Overview of the calculation
The Kalman filter uses a system's dynamics model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.
All measurements and calculations based on models are estimated to some degree. Noisy sensor data, approximations in the equations that describe how a system changes, and external factors that are not accounted for introduce some uncertainty about the inferred values for a system's state. The Kalman filter averages a prediction of a system's state with a new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
Because the certainty of the measurements is often difficult to measure precisely, it is common to discuss the filter's behavior in terms of gain. The Kalman gain is a function of the relative certainty of the measurements and current state estimate, and can be "tuned" to achieve particular performance. With a high gain, the filter places more weight on the measurements, and thus follows them more closely. With a low gain, the filter follows the model predictions more closely, smoothing out noise but decreasing the responsiveness. At the extremes, a gain of one causes the filter to ignore the state estimate entirely, while a gain of zero causes the measurements to be ignored.
When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices to handle the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
## Example application
As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though always remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.
In this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model) plus any changes produced by the accelerator pedal and steering wheel. Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate when moving slowly. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, if the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back towards the real position but not disturb it to the point of becoming rapidly changing and noisy.
## Technical description and context
The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models,[9][10] and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear-quadratic-Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator and the linear-quadratic-Gaussian controller are solutions to what arguably are the most fundamental problems in control theory.
In most applications, the internal state is much larger (more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.
In Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function and the Kalman filter is a special case of combining linear belief functions on a join-tree or Markov tree. Additional approaches include belief filters which use Bayes or evidential updates to the state equations.
A wide variety of Kalman filters have now been developed, from Kalman's original formulation, now called the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters that were developed by Bierman, Thornton and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.
## Underlying dynamic system model
The Kalman filters are based on linear dynamic systems discretized in the time domain. They are modelled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the observed outputs from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space (as opposed to a discrete state space as in the hidden Markov model). There is a strong duality between the equations of the Kalman Filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999)[11] and Hamilton (1994), Chapter 13.[12]
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying the following matrices: Fk, the state-transition model; Hk, the observation model; Qk, the covariance of the process noise; Rk, the covariance of the observation noise; and sometimes Bk, the control-input model, for each time-step, k, as described below.
Model underlying the Kalman filter. Squares represent matrices. Ellipses represent multivariate normal distributions (with the mean and covariance matrix enclosed). Unenclosed values are vectors. In the simple case, the various matrices are constant with time, and thus the subscripts are dropped, but the Kalman filter allows any of them to change each time step.
The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to
\mathbf{x}_{k} = \mathbf{F}_{k} \mathbf{x}_{k-1} + \mathbf{B}_{k} \mathbf{u}_{k} + \mathbf{w}_{k}
where
• Fk is the state transition model which is applied to the previous state xk−1;
• Bk is the control-input model which is applied to the control vector uk;
• wk is the process noise which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Qk.
\mathbf{w}_k \sim \mathcal{N}(0, \mathbf{Q}_k)
At time k an observation (or measurement) zk of the true state xk is made according to
\mathbf{z}_k = \mathbf{H}_{k} \mathbf{x}_k + \mathbf{v}_k
where Hk is the observation model which maps the true state space into the observed space and vk is the observation noise which is assumed to be zero mean Gaussian white noise with covariance Rk.
\mathbf{v}_k \sim \mathcal{N}(0, \mathbf{R}_k)
The initial state, and the noise vectors at each step {x0, w1, …, wk, v1vk} are all assumed to be mutually independent.
Many real dynamical systems do not exactly fit this model. In fact, unmodelled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodelled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of separating between measurement noise and unmodelled dynamics is a difficult one and is treated in control theory under the framework of robust control.[13][14]
## Details
The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation \hat{\mathbf{x}}_{n\mid m} represents the estimate of \mathbf{x} at time n given observations up to, and including at time m ≤ n.
The state of the filter is represented by two variables:
• \hat{\mathbf{x}}_{k\mid k}, the a posteriori state estimate at time k given observations up to and including at time k;
• \mathbf{P}_{k\mid k}, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).
The Kalman filter can be written as a single equation, however it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the current a priori prediction is combined with current observation information to refine the state estimate. This improved estimate is termed the a posteriori state estimate.
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction steps performed. Likewise, if multiple independent observations are available at the same time, multiple update steps may be performed (typically with different observation matrices Hk).[15][16]
### Predict
Predicted (a priori) state estimate \hat{\mathbf{x}}_{k\mid k-1} = \mathbf{F}_{k}\hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{B}_{k} \mathbf{u}_{k} Predicted (a priori) estimate covariance \mathbf{P}_{k\mid k-1} = \mathbf{F}_{k} \mathbf{P}_{k-1\mid k-1} \mathbf{F}_{k}^{\text{T}} + \mathbf{Q}_{k}
### Update
Innovation or measurement residual \tilde{\mathbf{y}}_k = \mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1} Innovation (or residual) covariance \mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^T + \mathbf{R}_k Optimal Kalman gain \mathbf{K}_k = \mathbf{P}_{k\mid k-1}\mathbf{H}_k^T \mathbf{S}_k^{-1} Updated (a posteriori) state estimate \hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\tilde{\mathbf{y}}_k Updated (a posteriori) estimate covariance \mathbf{P}_{k|k} = (I - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_{k|k-1}
The formula for the updated estimate covariance above is only valid for the optimal Kalman gain. Usage of other gain values requires a more complex formula found in the derivations section.
### Invariants
If the model is accurate, and the values for \hat{\mathbf{x}}_{0\mid 0} and \mathbf{P}_{0\mid 0} accurately reflect the distribution of the initial state values, then the following invariants are preserved: (all estimates have a mean error of zero)
• \mathbf{E}[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}] = \textrm{E}[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}] = 0
• \textrm{E}[\tilde{\mathbf{y}}_k] = 0
where \textrm{E}[\xi] is the expected value of \xi, and covariance matrices accurately reflect the covariance of estimates
• \mathbf{P}_{k\mid k} = \mathrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k})
• \mathbf{P}_{k\mid k-1} = \mathrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1})
• \mathbf{S}_{k} = \mathrm{cov}(\tilde{\mathbf{y}}_k)
### Optimality and performance
It follows from theory that the Kalman filter is optimal in cases where a) the model perfectly matches the real system, b) the entering noise is white and c) the covariances of the noise are exactly known. Several methods for the noise covariance estimation have been proposed during past decades. After the covariances are estimated, it is useful to evaluate the performance of the filter, i.e. whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose.[17]
## Example application, technical
Black: truth, green: filtered process, red: observations
Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of where the truck is and what its velocity is. We show here how we derive the model from which we create our Kalman filter.
Since \mathbf F, \mathbf H, \mathbf R, \mathbf Q are constant, their time indices are dropped.
The position and velocity of the truck are described by the linear state space
\mathbf{x}_{k} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}
where \dot{x} is the velocity, that is, the derivative of position with respect to time.
We assume that between the (k − 1) and k timestep uncontrolled forces cause a constant acceleration of ak that is normally distributed, with mean 0 and standard deviation σa. From Newton's laws of motion we conclude that
\mathbf{x}_{k} = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{G}a_{k}
(note that there is no \mathbf{B}u term since we have no known control inputs. Instead, we assume that ak is the effect of an unknown input and \mathbf{G} applies that effect to the state vector) where
\mathbf{F} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}
and
\mathbf{G} = \begin{bmatrix} \frac{\Delta t^2}{2} \\[6pt] \Delta t \end{bmatrix}
so that
\mathbf{x}_{k} = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{w}_{k}
where \mathbf{w}_{k} \sim N(0, \mathbf{Q}) and
\mathbf{Q}=\mathbf{G}\mathbf{G}^{\text{T}}\sigma_a^2 =\begin{bmatrix} \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} \\[6pt] \frac{\Delta t^3}{2} & \Delta t^2 \end{bmatrix}\sigma_a^2.
At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise vk is also normally distributed, with mean 0 and standard deviation σz.
\mathbf{z}_{k} = \mathbf{H x}_{k} + \mathbf{v}_{k}
where
\mathbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}
and
\mathbf{R} = \textrm{E}[\mathbf{v}_k \mathbf{v}_k^{\text{T}}] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}
We know the initial starting state of the truck with perfect precision, so we initialize
\hat{\mathbf{x}}_{0\mid 0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
\mathbf{P}_{0\mid 0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
If the initial position and velocity are not known perfectly the covariance matrix should be initialized with a suitably large number, say L, on its diagonal.
\mathbf{P}_{0\mid 0} = \begin{bmatrix} L & 0 \\ 0 & L \end{bmatrix}
The filter will then prefer the information from the first measurements over the information already in the model.
## Derivations
### Deriving the a posteriori estimate covariance matrix
Starting with our invariant on the error covariance Pk | k as above
\mathbf{P}_{k\mid k} = \mathrm{cov}(\mathbf{x}_{k} - \hat{\mathbf{x}}_{k\mid k})
substitute in the definition of \hat{\mathbf{x}}_{k\mid k}
\mathbf{P}_{k\mid k} = \textrm{cov}(\mathbf{x}_{k} - (\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\tilde{\mathbf{y}}_{k}))
and substitute \tilde{\mathbf{y}}_k
\mathbf{P}_{k\mid k} = \textrm{cov}(\mathbf{x}_{k} - (\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k(\mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1})))
and \mathbf{z}_{k}
\mathbf{P}_{k\mid k} = \textrm{cov}(\mathbf{x}_{k} - (\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k(\mathbf{H}_k\mathbf{x}_k + \mathbf{v}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1})))
and by collecting the error vectors we get
\mathbf{P}_{k|k} = \textrm{cov}((I - \mathbf{K}_k \mathbf{H}_{k})(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}) - \mathbf{K}_k \mathbf{v}_k )
Since the measurement error vk is uncorrelated with the other terms, this becomes
\mathbf{P}_{k|k} = \textrm{cov}((I - \mathbf{K}_k \mathbf{H}_{k})(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1})) + \textrm{cov}(\mathbf{K}_k \mathbf{v}_k )
by the properties of vector covariance this becomes
\mathbf{P}_{k\mid k} = (I - \mathbf{K}_k \mathbf{H}_{k})\textrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1})(I - \mathbf{K}_k \mathbf{H}_{k})^{\text{T}} + \mathbf{K}_k\textrm{cov}(\mathbf{v}_k )\mathbf{K}_k^{\text{T}}
which, using our invariant on Pk | k−1 and the definition of Rk becomes
\mathbf{P}_{k\mid k} = (I - \mathbf{K}_k \mathbf{H}_{k}) \mathbf{P}_{k\mid k-1} (I - \mathbf{K}_k \mathbf{H}_{k})^\text{T} + \mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^\text{T}
This formula (sometimes known as the "Joseph form" of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below.
### Kalman gain derivation
The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is
\mathbf{x}_{k} - \hat{\mathbf{x}}_{k\mid k}
We seek to minimize the expected value of the square of the magnitude of this vector, \textrm{E}[\|\mathbf{x}_{k} - \hat{\mathbf{x}}_{k|k}\|^2]. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix \mathbf{P}_{k|k} . By expanding out the terms in the equation above and collecting, we get:
\begin{align} \mathbf{P}_{k\mid k} & = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\text{T} \mathbf{K}_k^\text{T} + \mathbf{K}_k (\mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\text{T} + \mathbf{R}_k) \mathbf{K}_k^\text{T} \\[6pt] & = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\text{T} \mathbf{K}_k^\text{T} + \mathbf{K}_k \mathbf{S}_k\mathbf{K}_k^\text{T} \end{align}
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved we find that
\frac{\partial \; \mathrm{tr}(\mathbf{P}_{k\mid k})}{\partial \;\mathbf{K}_k} = -2 (\mathbf{H}_k \mathbf{P}_{k\mid k-1})^\text{T} + 2 \mathbf{K}_k \mathbf{S}_k = 0.
Solving this for Kk yields the Kalman gain:
\mathbf{K}_k \mathbf{S}_k = (\mathbf{H}_k \mathbf{P}_{k\mid k-1})^\text{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\text{T}
\mathbf{K}_{k} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\text{T} \mathbf{S}_k^{-1}
This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used.
### Simplification of the a posteriori error covariance formula
The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that
\mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\mathrm{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\mathrm{T} \mathbf{K}_k^\mathrm{T}
Referring back to our expanded formula for the a posteriori error covariance,
\mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\mathrm{T} \mathbf{K}_k^\mathrm{T} + \mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\mathrm{T}
we find the last two terms cancel out, giving
\mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} = (I - \mathbf{K}_{k} \mathbf{H}_{k}) \mathbf{P}_{k\mid k-1}.
This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.
## Sensitivity analysis
The Kalman filtering equations provide an estimate of the state \hat{\mathbf{x}}_{k\mid k} and its error covariance \mathbf{P}_{k\mid k} recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.[18] In the absence of reliable statistics or the true values of noise covariance matrices \mathbf{Q}_{k} and \mathbf{R}_{k}, the expression
\mathbf{P}_{k\mid k} = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\mathbf{P}_{k\mid k-1}(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)^\mathrm{T} + \mathbf{K}_k\mathbf{R}_k\mathbf{K}_k^\mathrm{T}
no longer provides the actual error covariance. In other words, \mathbf{P}_{k\mid k} \neq E[(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k})(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k})^\mathrm{T}]. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariances matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices \mathbf{F}_{k} and \mathbf{H}_{k} that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by \mathbf{Q}^{a}_k and \mathbf{R}^{a}_k respectively, whereas the design values used in the estimator are \mathbf{Q}_k and \mathbf{R}_k respectively. The actual error covariance is denoted by \mathbf{P}_{k\mid k}^a and \mathbf{P}_{k\mid k} as computed by the Kalman filter is referred to as the Riccati variable. When \mathbf{Q}_k \equiv \mathbf{Q}^{a}_k and \mathbf{R}_k \equiv \mathbf{R}^{a}_k, this means that \mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k}^a. While computing the actual error covariance using \mathbf{P}_{k\mid k}^a = E[(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k})(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k})^\mathrm{T}] , substituting for \widehat{\mathbf{x}}_{k\mid k} and using the fact that E[\mathbf{w}_k\mathbf{w}_k^\mathrm{T}] = \mathbf{Q}_{k}^a and E[\mathbf{v}_k\mathbf{v}_k^\mathrm{T}] = \mathbf{R}_{k}^a, results in the following recursive equations for \mathbf{P}_{k\mid k}^a :
\mathbf{P}_{k\mid k-1}^a = \mathbf{F}_k\mathbf{P}_{k-1\mid k-1}^a\mathbf{F}_k^\mathrm{T} + \mathbf{Q}_k^a
and
\mathbf{P}_{k\mid k}^a = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\mathbf{P}_{k\mid k-1}^a(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)^\mathrm{T} + \mathbf{K}_k\mathbf{R}_k^a\mathbf{K}_k^\mathrm{T}
While computing \mathbf{P}_{k\mid k}, by design the filter implicitly assumes that E[\mathbf{w}_k\mathbf{w}_k^\mathrm{T}] = \mathbf{Q}_{k} and E[\mathbf{v}_k\mathbf{v}_k^\mathrm{T}] = \mathbf{R}_{k}. Note that the recursive expressions for \mathbf{P}_{k\mid k}^a and \mathbf{P}_{k\mid k} are identical except for the presence of \mathbf{Q}_{k}^a and \mathbf{R}_{k}^a in place of the design values \mathbf{Q}_{k} and \mathbf{R}_{k} respectively.
## Square root form
One problem with the Kalman filter is its numerical stability. If the process noise covariance Qk is small, round-off error often causes a small positive eigenvalue to be computed as a negative number. This renders the numerical representation of the state covariance matrix P indefinite, while its true form is positive-definite.
Positive definite matrices have the property that they have a triangular matrix square root P = S·ST. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[19]:69 while on 21-st century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.[19][20]
The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter.[21] The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.[22] Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk·xk|k-1 that are associated with auxiliary observations in yk. the l·d·lt square-root filter requires orthogonalization of the observation vector.[20][21] This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).[23]
## Relationship to recursive Bayesian estimation
The Kalman filter can be presented as one of the most simple dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.[24]
In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).
because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.
p(\textbf{x}_k\mid \textbf{x}_0,\dots,\textbf{x}_{k-1}) = p(\textbf{x}_k\mid \textbf{x}_{k-1})
Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.
p(\textbf{z}_k\mid\textbf{x}_0,\dots,\textbf{x}_{k}) = p(\textbf{z}_k\mid \textbf{x}_{k} )
Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:
p(\textbf{x}_0,\dots,\textbf{x}_k, \textbf{z}_1,\dots,\textbf{z}_k) = p(\textbf{x}_0)\prod_{i=1}^k p(\textbf{z}_i\mid \textbf{x}_i)p(\textbf{x}_i\mid \textbf{x}_{i-1})
However, when the Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.
This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible x_{k-1}.
p(\textbf{x}_k\mid \textbf{Z}_{k-1}) = \int p(\textbf{x}_k \mid \textbf{x}_{k-1}) p(\textbf{x}_{k-1} \mid \textbf{Z}_{k-1} ) \, d\textbf{x}_{k-1}
The measurement set up to time t is
\textbf{Z}_{t} = \left \{ \textbf{z}_{1},\dots,\textbf{z}_{t} \right \}
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.
p(\textbf{x}_k\mid \textbf{Z}_{k}) = \frac{p(\textbf{z}_k\mid \textbf{x}_k) p(\textbf{x}_k\mid \textbf{Z}_{k-1})}{p(\textbf{z}_k\mid \textbf{Z}_{k-1})}
The denominator
p(\textbf{z}_k\mid \textbf{Z}_{k-1}) = \int p(\textbf{z}_k\mid \textbf{x}_k) p(\textbf{x}_k\mid \textbf{Z}_{k-1}) d\textbf{x}_k
is a normalization term.
The remaining probability density functions are
p(\textbf{x}_k \mid \textbf{x}_{k-1}) = \mathcal{N}(\textbf{F}_k\textbf{x}_{k-1}, \textbf{Q}_k)
p(\textbf{z}_k\mid \textbf{x}_k) = \mathcal{N}(\textbf{H}_{k}\textbf{x}_k, \textbf{R}_k)
p(\textbf{x}_{k-1}\mid \textbf{Z}_{k-1}) = \mathcal{N}(\hat{\textbf{x}}_{k-1},\textbf{P}_{k-1} )
Note that the PDF at the previous timestep is inductively assumed to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for \mathbf{x}_k given the measurements \mathbf{Z}_k is the Kalman filter estimate.
## Marginal likelihood
Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z0, z1, z2, …). Specifically, the process is
1. Sample a hidden state \mathbf{x}_0 from the Gaussian prior distribution p(\mathbf{x}_0) = \mathcal{N}(\hat{\mathbf{x}}_{0\mid 0}, \mathbf{P}_{0\mid 0}).
2. Sample an observation \mathbf{z}_0 from the observation model p(\mathbf{z}_0\mid \mathbf{x}_0) = \mathcal{N}(\mathbf{H}_0\mathbf{x}_0, \mathbf{R}_0).
3. For k = 1,2,3,... \ldots, do
4. Sample the next hidden state \mathbf{x}_k from the transition model p(\mathbf{x}_{k} \mid \mathbf{x}_{k-1}) = \mathcal{N}(\mathbf{F}_{k} \mathbf{x}_{k-1} + \mathbf{B}_k\mathbf{u}_k, \mathbf{Q}_k).
5. Sample an observation \mathbf{z}_k from the observation model p(\mathbf{z}_k\mid \mathbf{x}_k) = \mathcal{N}(\mathbf{H}_k\mathbf{x}_k, \mathbf{R}_k).
6. Note that this process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.
In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.
It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,
p(\mathbf{z}) = \prod_{k=0}^T p(\mathbf{z}_k \mid \mathbf{z}_{k-1}, \ldots,\mathbf{z}_0),
and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{P}_{k\mid k-1}. Thus the marginal likelihood is given by
\begin{align} p(\mathbf{z}) &= \prod_{k=0}^T \int p(\mathbf{z}_k \mid \mathbf{x}_k ) p(\mathbf{x}_k \mid \mathbf{z}_{k-1}, \ldots,\mathbf{z}_0 ) d \mathbf{x}_k\\ &= \prod_{k=0}^T \int \mathcal{N}(\mathbf{z}_k; \mathbf{H}_k\mathbf{x}_k, \mathbf{R}_k) \mathcal{N}(\mathbf{x}_k; \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{P}_{k\mid k-1}) d \mathbf{x}_k\\ &= \prod_{k=0}^T \mathcal{N}(\mathbf{z}_k; \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}, \mathbf{R}_k + \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^T )\\ &= \prod_{k=0}^T \mathcal{N}(\mathbf{z}_k; \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}, \mathbf{S}_k), \end{align}
i.e., a product of Gaussian densities, each corresponding to the density of one observation zk under the current filtering distribution \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}, \mathbf{S}_k. This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood \ell = \log p(\mathbf{z}) instead. Adopting the convention \ell^{(-1)} = 0, this can be done via the recursive update rule
\ell^{(k)} = \ell^{(k-1)} - \frac{1}{2} \left(\tilde{\mathbf{y}}_k^T \mathbf{S}^{-1}_k \tilde{\mathbf{y}}_k + \log \left|\mathbf{S}_k\right| + d_{y}\log 2\pi \right),
where d_y is the dimension of the measurement vector. [25]
An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). In such a scenario, it can be unknown apriori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be viewed as a Kalman filter (in the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most-likely one can be found.
## Information filter
In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as:
\textbf{Y}_{k\mid k} = \textbf{P}_{k\mid k}^{-1}
\hat{\textbf{y}}_{k\mid k} = \textbf{P}_{k\mid k}^{-1}\hat{\textbf{x}}_{k\mid k}
Similarly the predicted covariance and state have equivalent information forms, defined as:
\textbf{Y}_{k\mid k-1} = \textbf{P}_{k\mid k-1}^{-1}
\hat{\textbf{y}}_{k\mid k-1} = \textbf{P}_{k\mid k-1}^{-1}\hat{\textbf{x}}_{k\mid k-1}
as have the measurement covariance and measurement vector, which are defined as:
\textbf{I}_{k} = \textbf{H}_{k}^{\text{T}} \textbf{R}_{k}^{-1} \textbf{H}_{k}
\textbf{i}_{k} = \textbf{H}_{k}^{\text{T}} \textbf{R}_{k}^{-1} \textbf{z}_{k}
The information update now becomes a trivial sum.
\textbf{Y}_{k\mid k} = \textbf{Y}_{k\mid k-1} + \textbf{I}_{k}
\hat{\textbf{y}}_{k\mid k} = \hat{\textbf{y}}_{k\mid k-1} + \textbf{i}_{k}
The main advantage of the information filter is that N measurements can be filtered at each timestep simply by summing their information matrices and vectors.
\textbf{Y}_{k\mid k} = \textbf{Y}_{k\mid k-1} + \sum_{j=1}^N \textbf{I}_{k,j}
\hat{\textbf{y}}_{k\mid k} = \hat{\textbf{y}}_{k\mid k-1} + \sum_{j=1}^N \textbf{i}_{k,j}
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.
\textbf{M}_{k} = [\textbf{F}_{k}^{-1}]^{\text{T}} \textbf{Y}_{k-1\mid k-1} \textbf{F}_{k}^{-1}
\textbf{C}_{k} = \textbf{M}_{k} [\textbf{M}_{k}+\textbf{Q}_{k}^{-1}]^{-1}
\textbf{L}_{k} = I - \textbf{C}_{k}
\textbf{Y}_{k\mid k-1} = \textbf{L}_{k} \textbf{M}_{k} \textbf{L}_{k}^{\text{T}} + \textbf{C}_{k} \textbf{Q}_{k}^{-1} \textbf{C}_{k}^{\text{T}}
\hat{\textbf{y}}_{k\mid k-1} = \textbf{L}_{k} [\textbf{F}_{k}^{-1}]^{\text{T}}\hat{\textbf{y}}_{k-1\mid k-1}
Note that if F and Q are time invariant these values can be cached. Note also that F and Q need to be invertible.
## Fixed-lag smoother
The optimal fixed-lag smoother provides the optimal estimate of \hat{\textbf{x}}_{k-N \mid k} for a given fixed-lag N using the measurements from \textbf{z}_{1} to \textbf{z}_{k}. It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:
\begin{bmatrix} \hat{\textbf{x}}_{t\mid t} \\ \hat{\textbf{x}}_{t-1\mid t} \\ \vdots \\ \hat{\textbf{x}}_{t-N+1\mid t} \\ \end{bmatrix} = \begin{bmatrix} \textbf{I} \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} \hat{\textbf{x}}_{t\mid t-1} + \begin{bmatrix} 0 & \ldots & 0 \\ \textbf{I} & 0 & \vdots \\ \vdots & \ddots & \vdots \\ 0 & \ldots & I \\ \end{bmatrix} \begin{bmatrix} \hat{\textbf{x}}_{t-1\mid t-1} \\ \hat{\textbf{x}}_{t-2\mid t-1} \\ \vdots \\ \hat{\textbf{x}}_{t-N+1\mid t-1} \\ \end{bmatrix} + \begin{bmatrix} \textbf{K}^{(0)} \\ \textbf{K}^{(1)} \\ \vdots \\ \textbf{K}^{(N-1)} \\ \end{bmatrix} \textbf{y}_{t\mid t-1}
where:
• \hat{\textbf{x}}_{t\mid t-1} is estimated via a standard Kalman filter;
• \textbf{y}_{t\mid t-1} = \textbf{z}_t - \textbf{H}\hat{\textbf{x}}_{t\mid t-1} is the innovation produced considering the estimate of the standard Kalman filter;
• the various \hat{\textbf{x}}_{t-i\mid t} with i = 1,\ldots,N-1 are new variables, i.e. they do not appear in the standard Kalman filter;
• the gains are computed via the following scheme:
\textbf{K}^{(i)} = \textbf{P}^{(i)} \textbf{H}^{T} \left[ \textbf{H} \textbf{P} \textbf{H}^\mathrm{T} + \textbf{R} \right]^{-1}
and
\textbf{P}^{(i)} = \textbf{P} \left[ \left[ \textbf{F} - \textbf{K} \textbf{H} \right]^{T} \right]^{i}
where \textbf{P} and \textbf{K} are the prediction error covariance and the gains of the standard Kalman filter (i.e., \textbf{P}_{t\mid t-1} ).
If the estimation error covariance is defined so that
\textbf{P}_{i} := E \left[ \left( \textbf{x}_{t-i} - \hat{\textbf{x}}_{t-i\mid t} \right)^{*} \left( \textbf{x}_{t-i} - \hat{\textbf{x}}_{t-i\mid t} \right) \mid z_{1} \ldots z_{t} \right],
then we have that the improvement on the estimation of \textbf{x}_{t-i} is given by:
\textbf{P}-\textbf{P}_{i} = \sum_{j = 0}^{i} \left[ \textbf{P}^{(j)} \textbf{H}^{T} \left[ \textbf{H} \textbf{P} \textbf{H}^\mathrm{T} + \textbf{R} \right]^{-1} \textbf{H} \left( \textbf{P}^{(i)} \right)^\mathrm{T} \right]
## Fixed-interval smoothers
The optimal fixed-interval smoother provides the optimal estimate of \hat{\textbf{x}}_{k \mid n} (k < n) using the measurements from a fixed interval \textbf{z}_{1} to \textbf{z}_{n}. This is also called "Kalman Smoothing". There are several smoothing algorithms in common use.
### Rauch–Tung–Striebel
The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.[26]
The forward pass is the same as the regular Kalman filter algorithm. These filtered a-priori and a-posteriori state estimates \hat{\textbf{x}}_{k\mid k-1}, \hat{\textbf{x}}_{k\mid k} and covariances \textbf{P}_{k\mid k-1}, \textbf{P}_{k\mid k} are saved for use in the backwards pass.
In the backwards pass, we compute the smoothed state estimates \hat{\textbf{x}}_{k\mid n} and covariances \textbf{P}_{k\mid n}. We start at the last time step and proceed backwards in time using the following recursive equations:
\hat{\textbf{x}}_{k\mid n} = \hat{\textbf{x}}_{k\mid k} + \textbf{C}_k ( \hat{\textbf{x}}_{k+1\mid n} - \hat{\textbf{x}}_{k+1\mid k} )
\textbf{P}_{k\mid n} = \textbf{P}_{k\mid k} + \textbf{C}_k ( \textbf{P}_{k+1\mid n} - \textbf{P}_{k+1\mid k} ) \textbf{C}_k^\mathrm{T}
where
\textbf{C}_k = \textbf{P}_{k\mid k} \textbf{F}_{k+1}^\mathrm{T} \textbf{P}_{k+1\mid k}^{-1} .
Note that \textbf{x}_{k\mid k} is the a-posteriori state estimate of timestep k and \mathbf{x}_{k+1\mid k} is the a-priori state estimate of timestep k+1. The same notation applies to the covariance.
### Modified Bryson–Frazier smoother
An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman.[20] This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance.
The recursive equations are
\tilde{\Lambda}_k = \textbf{H}_k^T \textbf{S}_k^{-1} \textbf{H}_k + \hat{\textbf{C}}_k^T \hat{\Lambda}_k \hat{\textbf{C}}_k
\hat{\Lambda}_{k-1} = \textbf{F}_k^T\tilde{\Lambda}_{k}\textbf{F}_k
\hat{\Lambda}_n = 0
\tilde{\lambda}_k = -\textbf{H}_k^T \textbf{S}_k^{-1} \textbf{y}_k + \hat{\textbf{C}}_k^T \hat{\lambda}_k
\hat{\lambda}_{k-1} = \textbf{F}_k^T\tilde{\lambda}_{k}
\hat{\lambda}_n = 0
where \textbf{S}_k is the residual covariance and \hat{\textbf{C}}_k = \textbf{I} - \textbf{K}_k\textbf{H}_k. The smoothed state and covariance can then be found by substitution in the equations
\textbf{P}_{k\mid n} = \textbf{P}_{k\mid k} - \textbf{P}_{k\mid k}\hat{\Lambda}_k\textbf{P}_{k\mid k}
\textbf{x}_{k\mid n} = \textbf{x}_{k\mid k} - \textbf{P}_{k\mid k}\hat{\lambda}_k
or
\textbf{P}_{k\mid n} = \textbf{P}_{k\mid k-1} - \textbf{P}_{k\mid k-1}\tilde{\Lambda}_k\textbf{P}_{k\mid k-1}
\textbf{x}_{k\mid n} = \textbf{x}_{k\mid k-1} - \textbf{P}_{k\mid k-1}\tilde{\lambda}_k.
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.
### Minimum-variance smoother
The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.[27] This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter.
The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
\hat{\textbf{x}}_{k+1\mid k} = \textbf{(F}_{k}-\textbf{K}_{k}\textbf{H}_{k})\hat{\textbf{x}}_{k\mid k-1} + \textbf{K}_{k} \textbf{z}_{k}
{\alpha}_{k} = -\textbf{S}_k^{-1/2} \textbf{H}_{k}\hat{\textbf{x}}_{k\mid k-1} + \textbf{S}_k^{-1/2} \textbf{z}_{k}
The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass \beta_{k} may be calculated by operating the forward equations on the time-reversed \alpha_{k} and time reversing the result. In the case of output estimation, the smoothed estimate is given by
\hat{\textbf{y}}_{k\mid N} = \textbf{z}_{k} - \textbf{R}_{k}\beta_{k}
Taking the causal part of this minimum-variance smoother yields
\hat{\textbf{y}}_{k\mid k} = \textbf{z}_{k} - \textbf{R}_{k} \textbf{S}_k^{-1/2} \alpha_{k}
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in.[28][29]
Expectation-maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.[30]
In cases where the models are nonlinear, step-wise linearizations may be within the minimum-variance filter and smoother recursions (extended Kalman filtering).
## Frequency Weighted Kalman filters
Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.
Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let \textbf{y} - \hat{\textbf{y}} denote the output estimation error exhibited by a conventional Kalman filter. Also, let \textbf{W} denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of \textbf{W} ( \textbf{y} - \hat{\textbf{y}} ) arises by simply constructing \textbf{W}^{-1} \hat{\textbf{y}}.
The design of \textbf{W} remains an open question. One way of proceeding is to identify a system which generates the estimation error and setting \textbf{W} equal to the inverse of that system.[31] This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.
## Non-linear filters
The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The non-linearity can be associated either with the process model or with the observation model or with both.
### Extended Kalman filter
In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be non-linear functions. These functions are of differentiable type.
\textbf{x}_{k} = f(\textbf{x}_{k-1}, \textbf{u}_{k}) + \textbf{w}_{k}
\textbf{z}_{k} = h(\textbf{x}_{k}) + \textbf{v}_{k}
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate.
### Unscented Kalman filter
When the state transition and observation models—that is, the predict and update functions f and h—are highly non-linear, the extended Kalman filter can give particularly poor performance.[32] This is because the covariance is propagated through linearization of the underlying non-linear model. The unscented Kalman filter (UKF) [32] uses a deterministic sampling technique known as the unscented transform to pick a minimal set of sample points (called sigma points) around the mean. These sigma points are then propagated through the non-linear functions, from which the mean and covariance of the estimate are then recovered. The result is a filter which more accurately captures the true mean and covariance. (This can be verified using Monte Carlo sampling or through a Taylor series expansion of the posterior statistics.) In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically).
Predict
As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.
The estimated state and covariance are augmented with the mean and covariance of the process noise.
\textbf{x}_{k-1\mid k-1}^{a} = [ \hat{\textbf{x}}_{k-1\mid k-1}^\mathrm{T} \quad E[\textbf{w}_{k}^\mathrm{T}] \ ]^\mathrm{T}
\textbf{P}_{k-1\mid k-1}^{a} = \begin{bmatrix} & \textbf{P}_{k-1\mid k-1} & & 0 & \\ & 0 & &\textbf{Q}_{k} & \end{bmatrix}
A set of 2L + 1 sigma points is derived from the augmented state and covariance where L is the dimension of the augmented state.
\chi_{k-1\mid k-1}^{0} = \textbf{x}_{k-1\mid k-1}^{a}
\chi_{k-1\mid k-1}^{i} =\textbf{x}_{k-1\mid k-1}^{a} + \left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1\mid k-1}^{a} } \right )_{i}, \qquad i = 1,\ldots,L
\chi_{k-1\mid k-1}^{i} = \textbf{x}_{k-1\mid k-1}^{a} - \left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1\mid k-1}^{a} } \right )_{i-L}, \qquad i = L+1,\dots{},2L
where
\left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1\mid k-1}^{a} } \right )_{i}
is the ith column of the matrix square root of
(L + \lambda) \textbf{P}_{k-1\mid k-1}^{a}
using the definition: square root \textbf{A} of matrix \textbf{B} satisfies
\textbf{B} \triangleq \textbf{A} \textbf{A}^\mathrm{T}. \,
The matrix square root should be calculated using numerically efficient and stable methods such as the Cholesky decomposition.
The sigma points are propagated through the transition function f.
\chi_{k\mid k-1}^{i} = f(\chi_{k-1\mid k-1}^{i}) \quad i = 0,\dots,2L
where f : R^{L} \rightarrow R^{|\textbf{x}|} . The weighted sigma points are recombined to produce the predicted state and covariance.
\hat{\textbf{x}}_{k\mid k-1} = \sum_{i=0}^{2L} W_{s}^{i} \chi_{k\mid k-1}^{i}
\textbf{P}_{k\mid k-1} = \sum_{i=0}^{2L} W_{c}^{i}\ [\chi_{k\mid k-1}^{i} - \hat{\textbf{x}}_{k\mid k-1}] [\chi_{k\mid k-1}^{i} - \hat{\textbf{x}}_{k\mid k-1}]^\mathrm{T}
where the weights for the state and covariance are given by:
W_{s}^{0} = \frac{\lambda}{L+\lambda}
W_{c}^{0} = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta)
W_{s}^{i} = W_{c}^{i} = \frac{1}{2(L+\lambda)}
\lambda = \alpha^2 (L+\kappa) - L\,\!
\alpha and \kappa control the spread of the sigma points. \beta is related to the distribution of x. Normal values are \alpha=10^{-3}, \kappa=0 and \beta=2. If the true distribution of x is Gaussian, \beta=2 is optimal.[33]
Update
The predicted state and covariance are augmented as before, except now with the mean and covariance of the measurement noise.
\textbf{x}_{k\mid k-1}^{a} = [ \hat{\textbf{x}}_{k\mid k-1}^\mathrm{T} \quad E[\textbf{v}_{k}^\mathrm{T}] \ ]^\mathrm{T}
\textbf{P}_{k\mid k-1}^{a} = \begin{bmatrix} & \textbf{P}_{k\mid k-1} & & 0 & \\ & 0 & &\textbf{R}_{k} & \end{bmatrix}
As before, a set of 2L + 1 sigma points is derived from the augmented state and covariance where L is the dimension of the augmented state.
\begin{align} \chi_{k\mid k-1}^{0} & = \textbf{x}_{k\mid k-1}^{a} \\[6pt] \chi_{k\mid k-1}^{i} & = \textbf{x}_{k\mid k-1}^{a} + \left ( \sqrt{ (L + \lambda) \textbf{P}_{k\mid k-1}^{a} } \right )_{i}, \qquad i = 1,\dots,L \\[6pt] \chi_{k\mid k-1}^{i} & = \textbf{x}_{k\mid k-1}^{a} - \left ( \sqrt{ (L + \lambda) \textbf{P}_{k\mid k-1}^{a} } \right )_{i-L}, \qquad i = L+1,\dots,2L \end{align}
Alternatively if the UKF prediction has been used the sigma points themselves can be augmented along the following lines
\chi_{k\mid k-1} := [ \chi_{k\mid k-1}^\mathrm{T} \quad E[\textbf{v}_{k}^\mathrm{T}] \ ]^\mathrm{T} \pm \sqrt{ (L + \lambda) \textbf{R}_{k}^{a} }
where
\textbf{R}_{k}^{a} = \begin{bmatrix} & 0 & & 0 & \\ & 0 & &\textbf{R}_{k} & \end{bmatrix}
The sigma points are projected through the observation function h.
\gamma_{k}^{i} = h(\chi_{k\mid k-1}^{i}) \quad i = 0..2L
The weighted sigma points are recombined to produce the predicted measurement and predicted measurement covariance.
\hat{\textbf{z}}_{k} = \sum_{i=0}^{2L} W_{s}^{i} \gamma_{k}^{i}
\textbf{P}_{z_{k}z_{k}} = \sum_{i=0}^{2L} W_{c}^{i}\ [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}] [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}]^\mathrm{T}
The state-measurement cross-covariance matrix,
\textbf{P}_{x_{k}z_{k}} = \sum_{i=0}^{2L} W_{c}^{i}\ [\chi_{k\mid k-1}^{i} - \hat{\textbf{x}}_{k\mid k-1}] [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}]^\mathrm{T}
is used to compute the UKF Kalman gain.
K_{k} = \textbf{P}_{x_{k}z_{k}} \textbf{P}_{z_{k}z_{k}}^{-1}
As with the Kalman filter, the updated state is the predicted state plus the innovation weighted by the Kalman gain,
\hat{\textbf{x}}_{k\mid k} = \hat{\textbf{x}}_{k\mid k-1} + K_{k}( \textbf{z}_{k} - \hat{\textbf{z}}_{k} )
And the updated covariance is the predicted covariance, minus the predicted measurement covariance, weighted by the Kalman gain.
\textbf{P}_{k\mid k} = \textbf{P}_{k\mid k-1} - K_{k} \textbf{P}_{z_{k}z_{k}} K_{k}^\mathrm{T}
## Kalman–Bucy filter
The Kalman–Bucy filter (named after Richard Snowden Bucy) is a continuous time version of the Kalman filter.[34][35]
It is based on the state space model
\frac{d}{dt}\mathbf{x}(t) = \mathbf{F}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{w}(t)
\mathbf{z}(t) = \mathbf{H}(t) \mathbf{x}(t) + \mathbf{v}(t)
where \mathbf{Q}(t) and \mathbf{R}(t) represent the intensities of the two white noise terms \mathbf{w}(t) and \mathbf{v}(t), respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
\frac{d}{dt}\hat{\mathbf{x}}(t) = \mathbf{F}(t)\hat{\mathbf{x}}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{K}(t) (\mathbf{z}(t)-\mathbf{H}(t)\hat{\mathbf{x}}(t))
\frac{d}{dt}\mathbf{P}(t) = \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}^{T}(t) + \mathbf{Q}(t) - \mathbf{K}(t)\mathbf{R}(t)\mathbf{K}^{T}(t)
where the Kalman gain is given by
\mathbf{K}(t)=\mathbf{P}(t)\mathbf{H}^{T}(t)\mathbf{R}^{-1}(t)
Note that in this expression for \mathbf{K}(t) the covariance of the observation noise \mathbf{R}(t) represents at the same time the covariance of the prediction error (or innovation) \tilde{\mathbf{y}}(t)=\mathbf{z}(t)-\mathbf{H}(t)\hat{\mathbf{x}}(t); these covariances are equal only in the case of continuous time.[36]
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation.
## Hybrid Kalman filter
Most physical systems are represented as continuous-time models while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by
\begin{align} \dot{\mathbf{x}}(t) &= \mathbf{F}(t)\mathbf{x}(t)+\mathbf{B}(t)\mathbf{u}(t)+\mathbf{w}(t), &\mathbf{w}(t) &\sim N\bigl(\mathbf{0},\mathbf{Q}(t)\bigr) \\ \mathbf{z}_k &= \mathbf{H}_k\mathbf{x}_k+\mathbf{v}_k, &\mathbf{v}_k &\sim N(\mathbf{0},\mathbf{R}_k) \end{align}
where
\mathbf{x}_k=\mathbf{x}(t_k).
Initialize
\hat{\mathbf{x}}_{0\mid 0}=E\bigl[\mathbf{x}(t_0)\bigr], \mathbf{P}_{0\mid 0}=Var\bigl[\mathbf{x}(t_0)\bigr]
Predict
\begin{align} &\dot{\hat{\mathbf{x}}}(t) = \mathbf{F}(t) \hat{\mathbf{x}}(t) + \mathbf{B}(t) \mathbf{u}(t) \text{, with } \hat{\mathbf{x}}(t_{k-1}) = \hat{\mathbf{x}}_{k-1\mid k-1} \\ \Rightarrow &\hat{\mathbf{x}}_{k\mid k-1} = \hat{\mathbf{x}}(t_k)\\ &\dot{\mathbf{P}}(t) = \mathbf{F}(t)\mathbf{P}(t)+\mathbf{P}(t)\mathbf{F}(t)^T+\mathbf{Q}(t) \text{, with } \mathbf{P}(t_{k-1}) = \mathbf{P}_{k-1\mid k-1}\\ \Rightarrow &\mathbf{P}_{k\mid k-1} = \mathbf{P}(t_k) \end{align}
The prediction equations are derived from those of continuous-time Kalman filter without update from measurements, i.e., \mathbf{K}(t)=0 . The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
Update
\mathbf{K}_{k} = \mathbf{P}_{k\mid k-1}\mathbf{H}_{k}^T\bigl(\mathbf{H}_{k}\mathbf{P}_{k\mid k-1}\mathbf{H}_{k}^T+\mathbf{R}_{k}\bigr)^{-1}
\hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k(\mathbf{z}_k-\mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1})
\mathbf{P}_{k\mid k} = (\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k})\mathbf{P}_{k\mid k-1}
The update equations are identical to those of the discrete-time Kalman filter.
## Variants for the recovery of sparse signals
Recently the traditional Kalman filter has been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Both works[37] and[38] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
## References
1. ^ Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering 82: 35.
2. ^ Steffen L. Lauritzen. "Time series analysis in 1880. A discussion of contributions made by T.N. Thiele". International Statistical Review 49, 1981, 319–333. JSTOR 1402616
3. ^ Steffen L. Lauritzen, Thiele: Pioneer in Statistics, Oxford University Press, 2002. ISBN 0-19-850972-3.
4. ^ Mohinder S. Grewal and Angus P. Andrews
5. ^ Stratonovich, R. L. (1959). Optimum nonlinear systems which bring about a separation of a signal with constant parameters from noise. Radiofizika, 2:6, pp. 892–901.
6. ^ Stratonovich, R. L. (1959). On the theory of optimal non-linear filtering of random functions. Theory of Probability and its Applications, 4, pp. 223–225.
7. ^ Stratonovich, R. L. (1960) Application of the Markov processes theory to optimal filtering. Radio Engineering and Electronic Physics, 5:11, pp. 1–19.
8. ^ Stratonovich, R. L. (1960). Conditional Markov Processes. Theory of Probability and its Applications, 5, pp. 156–178.
9. ^ Ingvar Strid; Karl Walentin (April 2009). "Block Kalman Filtering for Large-Scale DSGE Models". Computational Economics (Springer) 33 (3): 277–304.
10. ^ Martin Møller Andreasen (2008). "Non-linear DSGE Models, The Central Difference Kalman Filter, and The Mean Shifted Particle Filter" (PDF).
11. ^ Roweis, S; Ghahramani, Z (1999). "A unifying review of linear gaussian models". Neural computation 11 (2): 305–45.
12. ^ Hamilton, J. (1994), Time Series Analysis, Princeton University Press. Chapter 13, 'The Kalman Filter'.
13. ^ Ishihara, J.Y.; Terra, M.H.; Campos, J.C.T. (2006). "Robust Kalman Filter for Descriptor Systems". IEEE Transactions on Automatic Control 51 (8): 1354.
14. ^ Terra, Marco H.; Cerri, Joao P.; Ishihara, Joao Y. (2014). "Optimal Robust Linear Quadratic Regulator for Systems Subject to Uncertainties". IEEE Transactions on Automatic Control 59 (9): 2586.
15. ^ Kelly, Alonzo (1994). "A 3D state space formulation of a navigation Kalman filter for autonomous vehicles" (PDF). DTIC Document: 13. 2006 Corrected Version
16. ^ Reid, Ian; Term, Hilary. "Estimation II" (PDF). www.robots.ox.ac.uk. Oxford University. Retrieved 6 August 2014.
17. ^ Three optimality tests with numerical examples are described in Peter, Matisko, (2012). "Optimality Tests and Adaptive Kalman Filter". 16th IFAC Symposium on System Identification. 16th IFAC Symposium on System Identification. p. 1523.
18. ^ Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. New York:
19. ^ a b Thornton, Catherine L. (15 October 1976). "Triangular Covariance Factorizations for Kalman Filtering" (PDF). (PhD thesis).
20. ^ a b c Bierman, G.J. (1977). "Factorization Methods for Discrete Sequential Estimation". Factorization Methods for Discrete Sequential Estimation (Academic Press).
21. ^ a b Bar-Shalom, Yaakov; Li, X. Rong; Kirubarajan, Thiagalingam (July 2001). Estimation with Applications to Tracking and Navigation. New York:
22. ^ Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences (Third ed.). Baltimore, Maryland:
23. ^ Higham, Nicholas J. (2002). Accuracy and Stability of Numerical Algorithms (Second ed.). Philadelphia, PA:
24. ^
25. ^ Lütkepohl, Helmut (1991). Introduction to Multiple Time Series Analysis. Heidelberg: Springer-Verlag Berlin,. p. 435.
26. ^ Rauch, H.E.; Tung, F.; Striebel, C. T. (August 1965). "Maximum likelihood estimates of linear dynamic systems". AIAA J 3 (8): 1445–1450.
27. ^ Einicke, G.A. (March 2006). "Optimal and Robust Noncausal Filter Formulations". IEEE Trans. Signal Processing 54 (3): 1069–1077.
28. ^ Einicke, G.A. (April 2007). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Trans. Signal Processing 55 (4): 1543–1547.
29. ^ Einicke, G.A.; Ralston, J.C.; Hargrave, C.O.; Reid, D.C.; Hainsworth, D.W. (December 2008). "Longwall Mining Automation. An Application of Minimum-Variance Smoothing". IEEE Control Systems Magazine 28 (6): 28–37.
30. ^ Einicke, G.A. (December 2009). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Trans. Automatic Control 54 (12): 2904–2908.
31. ^ Einicke, G.A. (December 2014). "Iterative Frequency-Weighted Filtering and Smoothing Procedures". IEEE Signal Processing Letters 21 (12): 1467–1470.
32. ^ a b Julier, Simon J.; Uhlmann, Jeffrey K. (1997). "A new extension of the Kalman filter to nonlinear systems" (PDF). Int. Symp. Aerospace/Defense Sensing, Simul. and Controls. Signal Processing, Sensor Fusion, and Target Recognition VI 3: 182.
33. ^ Wan, E.A.; Van Der Merwe, R. (2000). "The unscented Kalman filter for nonlinear estimation". Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373) (PDF). p. 153.
34. ^ Bucy, R.S. and Joseph, P.D., Filtering for Stochastic Processes with Applications to Guidance, John Wiley & Sons, 1968; 2nd Edition, AMS Chelsea Publ., 2005. ISBN 0-8218-3782-6
35. ^ Jazwinski, Andrew H., Stochastic processes and filtering theory, Academic Press, New York, 1970. ISBN 0-12-381550-9
36. ^ Kailath, T. (1968). "An innovations approach to least-squares estimation--Part I: Linear filtering in additive white noise". IEEE Transactions on Automatic Control 13 (6): 646–655.
37. ^ Carmi, Avishy; Gurfil, Pini; Kanevsky, Dimitri (2010). "Methods for sparse signal recovery using Kalman filtering with embedded pseudo-measurement norms and quasi-norms". IEEE Transactions on Signal Processing 58 (4): 2405–2409.
38. ^ Vaswani, Namrata (2008). "Kalman filtered Compressed Sensing". 2008 15th IEEE International Conference on Image Processing. p. 893.
39. ^ Vasebi, Amir; Partovibakhsh, Maral; Bathaee, S. Mohammad Taghi (2007). "A novel combined battery model for state-of-charge estimation in lead-acid batteries based on extended Kalman filter for hybrid electric vehicle applications". Journal of Power Sources 174: 30.
40. ^ Vasebi, A.; Bathaee, S.M.T.; Partovibakhsh, M. (2008). "Predicting state of charge of lead-acid batteries for hybrid electric vehicles by extended Kalman filter". Energy Conversion and Management 49: 75.
41. ^ Fruhwirth, R. (1987). "Application of Kalman filtering to track and vertex fitting". Nucl. Instrum. Meth. A262 (2–3): 444–450.
42. ^ Harvey, Andrew C. (1994). "Applications of the Kalman filter in econometrics". In
43. ^ Bock, Y.; Crowell, B.; Webb, F.; Kedar, S.; Clayton, R.; Miyahara, B. (2008). "Fusion of High-Rate GPS and Seismic Data: Applications to Early Warning Systems for Mitigation of Geological Hazards". American Geophysical Union 43: 01.
44. ^ Wolpert, D. M.; Miall, R. C. (1996). "Forward Models for Physiological Motor Control". Neural Netw. 9 (8): 1265–1279.
• Einicke, G.A. (2012). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future. Rijeka, Croatia: Intech.
• Jinya Su; Baibing Li; Wen-Hua Chen (2015). "On existence, optimality and asymptotic stability of the Kalman filter with partially observed inputs". Automatica 53: 149–154.
• Gelb, A. (1974). Applied Optimal Estimation. MIT Press.
• Kalman, R.E. (1960). "A new approach to linear filtering and prediction problems" (PDF). Journal of Basic Engineering 82 (1): 35–45.
• Kalman, R.E.; Bucy, R.S. (1961). "New Results in Linear Filtering and Prediction Theory". Retrieved 2008-05-03.
• Harvey, A.C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.
• Roweis, S.;
• Simon, D. (2006). Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley-Interscience.
• Stengel, R.F. (1994). Optimal Control and Estimation. Dover Publications.
• Bierman, G.J. (1977). Factorization Methods for Discrete Sequential Estimation. Mathematics in Science and Engineering 128 (Mineola, N.Y.: Dover Publications).
• Bozic, S.M. (1994). Digital and Kalman filtering. Butterworth–Heinemann.
• Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall.
• Liu, W.; Principe, J.C. and Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. John Wiley.
• Manolakis, D.G. (1999). Statistical and Adaptive signal processing. Artech House.
• Welch, Greg; Bishop, Gary (1997). "SCAAT: incremental tracking with incomplete information" (PDF). SIGGRAPH '97 Proceedings of the 24th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co. pp. 333–344.
• Jazwinski, Andrew H. (1970). Stochastic Processes and Filtering. Mathematics in Science and Engineering. New York:
• Maybeck, Peter S. (1979). Stochastic Models, Estimation, and Control. Mathematics in Science and Engineering. 141-1. New York:
• Moriya, N. (2011). Primer to Kalman Filtering: A Physicist Perspective. New York:
• Dunik, J.; Simandl M.; Straka O. (2009). "Methods for estimating state and measurement noise covariance matrices: Aspects and comparisons". Proceedings of 15th IFAC Symposium on System Identification (France): 372–377.
• Chui, Charles K.; Chen, Guanrong (2009). Kalman Filtering with Real-Time Applications. Springer Series in Information Sciences 17 (4th ed.). New York:
• Spivey, Ben; Hedengren, J. D. and Edgar, T. F. (2010). "Constrained Nonlinear Estimation for Industrial Process Fouling". Industrial & Engineering Chemistry Research 49 (17): 7824–7831.
This article was sourced from Creative Commons Attribution-ShareAlike License; additional terms may apply. World Heritage Encyclopedia content is assembled from numerous content providers, Open Access Publishing, and in compliance with The Fair Access to Science and Technology Research Act (FASTR), Wikimedia Foundation, Inc., Public Library of Science, The Encyclopedia of Life, Open Book Publishers (OBP), PubMed, U.S. National Library of Medicine, National Center for Biotechnology Information, U.S. National Library of Medicine, National Institutes of Health (NIH), U.S. Department of Health & Human Services, and USA.gov, which sources content from all federal, state, local, tribal, and territorial government publication portals (.gov, .mil, .edu). Funding for USA.gov and content contributors is made possible from the U.S. Congress, E-Government Act of 2002.
Crowd sourced content that is contributed to World Heritage Encyclopedia is peer reviewed and edited by our editorial staff to ensure quality scholarly research articles.
|
2019-11-18 16:29:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9362147450447083, "perplexity": 2555.575548702876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00020.warc.gz"}
|
http://math.stackexchange.com/questions/431646/linear-transformation-f
|
# Linear transformation $f$
I am tring to solve the following task:
Linear transformation $f: \mathbb R^2 \rightarrow \mathbb R^2$ is given by $f(\begin{bmatrix} x_1\\ x_2 \end{bmatrix}) = \begin{bmatrix} 2x_1-x_2\\ x_1+x_2 \end{bmatrix}$. Answer true or false to the following questions:
a) in some basis of transformation $\mathbb R^2$ transformation matrix of $f$ is $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
b) $f$ is a bijection
c) transformation matrix of $f$ in basis $([1,0]^T, [1,1]^T)$ is $\begin{bmatrix} 1 & 1 \\ -1 & 2 \end{bmatrix}$
Could you kindly give me any HINTS how to start it (not solution)?
-
a) How are the matrices of a linear transformation in different bases related. b) What criterion have you for a linear transformation to be a bijection? c) Calculate $f((1,0))$ and $f((1,1))$ and see whether the results match. – Daniel Fischer Jun 28 '13 at 14:02
a) If $A$ and $B$ are transformation matrices of $f$ with respect to different bases, then there is some matrix $X$ such that $XAX^{-1} = B$. What does this say if $A$ is the identity matrix?
Do you believe that the criterion is true? If so, in the standard basis, the matrix for $f$ is $\pmatrix{ 2 & -1 \\ 1 & 1 }$. Now, there are tons of criteria for a matrix being invertible, typically the easiest one to check being that the matrix has non-zero determinant, which is definitely the case for the matrix here. – fuglede Jun 28 '13 at 21:32
|
2016-05-06 04:32:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8000790476799011, "perplexity": 175.29815391668598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861718132.40/warc/CC-MAIN-20160428164158-00081-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://sea-man.org/testy/tanker-questions-and-answers
|
.
Site categories
1/ Help
:
Welcome to the website where you can find answers for the Computer Based Test (CBT) also known as Crew Evaluation System (CES) on the subject «Oil Tanker Training System, Advanced». This site will help you as a marine specialist improve your knowledge with the help of open information, where you can find questions as well as answers for them. CES/CBT based on practical information and marine specialists experience.
CES & CBT tests developed for evaluating seaman basic knowledge by Seagull Company (rebranded as «OTG»), is an evaluating online-tool, used for revealing any professional preparation needed in specific fields of knowledge, defined by STCW.
CES tests have proven themselves as good tools for the selection and recruitment process, as well as advancing the level of knowledge of the current officers and crew. Ocean Technologies Group use various subjects for question creation, which includes:
• Crowd and Crisis Management;
• Ballast water management;
• Handling and Stowage;
• Vessel operation management and safety;
• Marine engineering;
• Maintenance and repair, etc.
Current test contains Seagull CES questions on the subject «Oil Tanker Training System, Advanced». Those questions can be used for competence verification specialist capable of preventing accidental situations related with transporting safety, or also for self-examination.
«Oil Tanker Training System, Advanced» subject includes theoretical and practical information about advanced training for work on oil tanker. Knowledge of this information directly shows employee’s competence who holds a relevant post on a vessel, provides to follow necessary storage conditions to prevent leakage, equipment and storage facilities damage while transporting oil and other substances in liquid form.
This page contains answers to Seagull CES CBT (Crew Evaluation System/Computer Based Test) test about Oil Tanker Training System, Advanced, and serve as a database of questions and answers, using which seafarer can prepare to exams for getting certificate of competence, or just to challenge yourself with knowledge in this theme.
Use the search below to find question.
Amount of questions: 48.
Right answers marked with this sign .
A system for continuous monitoring of the concentration of hydrocarbon gases in the cargo pump-rooms shall be fitted. What is the max pre-set level?
No higher than 5 % LFL.
No higher than 3 % LFL.
No higher than 8 % LFL.
No higher than 10 % LFL.
Any discharge into the sea of oil or oily mixtures from the cargo area of an oil tanker shall be prohibited except when:
The tanker is more than 100 nautical miles from nearest land.
The tanker is more than 20 nautical miles from nearest land.
The tanker is more than 30 nautical miles from nearest land.
The tanker is more than 50 nautical miles from nearest land.
As a general guidance to the suitability of an oil for crude oil washing the following criteria should be used. For aromatic crude oils whose kinematic viscosity is the temperature controlling characteristic, the kinematic viscosity of the oil used for COW should not exceed ___ centistokes at the oil wash medium temperature.
40 centistokes.
50 centistokes.
60 centistokes.
65 centistokes.
As a general guidance to the suitability of an oil for crude oil washing the following criteria should be used. For paraffinic crude oils whose pour point temperature is the controlling characteristic, the temperature of the cargo to be used for COW should exceed its cloud point temperature by at least ___ if excessive sludging is present and should only be used once in a “closed cycle” washing program.
15 °C.
5 °C.
10 °C.
20 °C.
Before any portable gas indicators are brought to the measuring spot, what is very important to do with these analysers first?
To do a full calibration.
Just renew the batteries.
Just check the filters.
Cargo hoses in service should have a documented inspection at least annually to confirm their suitability for continued use. What is the normal testing pressure to check for leakage?
1,5 times the Rated Working Pressure (RWP).
1,0 times the Rated Working Pressure (RWP).
2,5 times the Rated Working Pressure (RWP).
2,0 times the Rated Working Pressure (RWP).
For what purpose do we use inert gas on board oil carriers?
We use inert gas to held pressure in the cargo tanks when we are discharging.
We use inert gas to replace the oxygen in the cargo tanks to maintain a neutral atmosphere.
We use inert gas to clean cargo lines after discharging.
We use inert gas for tank stripping when discharging.
From where on board can you find out the different types of protection equipment regarding where placed, how much, how many?
The ship’s safety plan.
In cargo control room.
On the bridge.
In order to avoid excessive electrostatic generation in the washing process during COW how many meters do you have to discharge from a cargo tank before you can use the content as source of washing fluid?
2 m.
1 m.
0,5 m.
1,5 m.
In which Annex of MARPOL do we find regulation for the prevention of pollution by garbage from ships?
In Annex VI.
In Annex IV.
In Annex V.
In Annex III.
In which Annex of MARPOL do we find regulation for the prevention of pollution by oil?
In Annex II.
In Annex I.
In Annex III.
In Annex V.
In which way may intake of poisoning material occur?
By inhaling.
Skin penetrating and skin absorbing.
Swallowing.
Near by the access to shore a certain plan is to be kept in case of an emergency situation. What kind of plan?
Discharging plan.
Emergency plan.
Safety plan.
The fire control draft on board is called the “safety plan” and shall be posted on board. In port, a copy of this plan shall in addition be available from somewhere else. Where shall this copy be available?
At the gangway.
In the engine control room.
On the terminal.
Handed over to the surveyors.
What do you call the method used for bone-soft part injures?
ABC-method.
ICE-method.
REHAB-method.
First Aid-method.
What does OBO means?
Oil Bulk Oil.
Only Bulk Oil.
Ore Bulk Oil.
Only Basic Oil.
What does VLCC means?
Very large crude carrier.
Very large combination carrier.
Very large common carrier.
Very large crude combination carrier.
What does the abbreviation TLV means?
Total limit value.
Total level value.
Time limitation value.
Threshold limit value.
What is it called when it is possible to ignite the vapour above the oil?
Flash point.
Boiling pointy.
Ignition point.
Pour point.
What is the Permissible Exposure Limit (PEL) of hydrogen sulphide expressed as a Time Weighted Average (TWA)?
20 ppm.
30 ppm.
10 ppm.
5 ppm.
What is the common expression for all chemical compounds that includes carbon and hydrogen?
Alkanes.
Hydrocarbons.
Alkenes.
Arenes.
What is the flammable range (%) for Propane?
0,2-0,9.
8,5-1,9.
7,8-1,5.
2,2-9,5.
What is the main quencher for an oil tanker?
Foam.
Water.
Powder.
Carbonic acid.
What is the maximum oil content allowed in the arrival ballast water?
15 ppm.
10 ppm.
5 ppm.
0 ppm.
What is the maximum oil content in the ballast/washing water allowed to be pumped over board during a voyage?
20 litres pr. nautical mile.
30 litres pr. nautical mile.
40 litres pr. nautical mile.
60 litres pr. nautical mile.
What is the maximum volume percent oxygen allowed in the tank during COW?
Maximum volume percent oxygen shall be the same as LEL for actual cargo.
Maximum volume percent oxygen shall not exceed 8 %.
Maximum volume percent oxygen shall not exceed 10,8 %.
Maximum volume percent oxygen shall not exceed 1 %.
What is the meaning of LEL?
Limited explosive level.
Lower explosive level.
Lower explosive limit.
Lower exposure level.
What is the meaning of LOT?
Loss of Tugs.
What is the meaning of UEL?
Upper exposure level.
Upper explosion level.
Upper explosive limit.
Upper evaporation level.
What is the meaning of the abbreviation ABC due to first aid?
Air, Burning, Critic.
Air, Breathing, Circulation.
Abandon, Balance, Circulation.
Air, Breath, Concentration.
What is the name of the device used for supervising all the ballast water to be pumped over board?
Ballast Monitor (BM).
Oil Detection Monitoring Equipment (ODME).
Ballast Handling Monitor (BHM).
Ballast Supervising Monitor (BSM).
What is the normal clearance of the main suction (bellmouth) from the tank bottom to the stripping suction in the cargo tank?
15 cm.
20 cm.
10 cm.
5 cm.
What is the proper name of CH4?
Propane.
Ethane.
Pentane.
Methane.
What is the requirement to capacity of stripping system during bottom COW of the cargo tanks?
1,50 times the total through-put of all tank cleaning machines.
1,35 times the total through-put of all tank cleaning machines.
1,15 times the total through-put of all tank cleaning machines.
1,25 times the total through-put of all tank cleaning machines.
What is the requirement to capacity of the inert gas plant?
At least 125 % of the maximum discharge capacity.
At least 150 % of the maximum discharge capacity.
At least 110 % of the maximum discharge capacity.
At least 115 % of the maximum discharge capacity.
What is the requirement to number of drive units where the drive unit is not integral with the cleaning machine?
No drive unit need to be moved more than twice from its original position.
No drive unit need to be moved more than 3 times from its original position.
No drive unit need to be moved more than 1 time from its original position.
No drive unit need to be moved more than 2 times from its original position.
What is the total quantity of the particular cargo you can discharge into the sea from a new tanker?
1/45 000 of the total quantity of the particular cargo.
1/15 000 of the total quantity of the particular cargo.
1/30 000 of the total quantity of the particular cargo.
1/20 000 of the total quantity of the particular cargo.
What is the usual activation point for high level alarms in cargo tanks?
92 % of tank capacity.
95 % of tank capacity.
90 % of tank capacity.
98 % of tank capacity.
What kind of cargo can be carried with an O/O ship?
Only Oil.
Only Ore.
Either Ore or Oil.
Ore and Oil simultaneously.
What kind of extinguishing-remedy would you choose to put out electrical fire?
Water.
Dry extinguishing remedy.
Foam.
Combination of powder and water.
What kind of fixed extinguishing plant is installed in an oil tankers engine room and pump room?
Powder plant.
Foam plant.
CO2 plant.
Water spray plant.
When re-inerting a cargo tank before commence air venting, what is the maximum oxygen content in the supplied inert gas?
8 % by volume.
5 % by volume.
10 % by volume.
6 % by volume.
Where do we find regulation regarding venting system on tankers?
ISGOTT.
MARPOL.
SOLAS.
OILPOL.
Where venting is by high-velocity discharge valves, what is the minimum distance above the cargo tank deck?
2 m.
5 m.
3 m.
10 m.
Which oxygen content shall we measure before entering a tank or space after venting?
The oxygen content shall be measured to 19 % by volume.
The oxygen content shall be measured to 21 % by volume.
The oxygen content shall be measured to 10,8 % by volume.
We don’t need to measure the oxygen content.
You are discovering a small leak of oil at the manifold during discharging. What is your first action?
Try to stop the leak.
Call immediately the person in charge.
Call the terminal.
Do not bother.
You are going to enter a tank to lift out some sediments. What kind of permits are needed to be issued before entering?
Enclosed Space Entry Permit and Cold Work Permit.
Enclosed Space Entry Permit and Hot Work Permit.
Hot Work Permit and Cold Work Permit.
Enclosed Space Entry Permit only.
You find the element carbon in only two different natural conditions. Which?
Graphite and diamond.
Diamond and titanium.
Did you find mistake? Highlight and press CTRL+Enter
Август, 06, 2022 81 0
Favorite articles
• Список избранных статей пуст.
Here will store all articles, what you marked as "Favorite". Articles store in cookies, so don't remove it.
$${}$$
|
2022-08-16 23:12:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4410797357559204, "perplexity": 8808.441674760199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572581.94/warc/CC-MAIN-20220816211628-20220817001628-00478.warc.gz"}
|
http://math.stackexchange.com/questions/70124/what-does-the-big-cap-notation-mean?answertab=votes
|
What does the big cap notation mean?
I'm trying to understand How can an ordered pair be expressed as a set? and I don't know what the big cap/cup notations mean when placed next to an ordered pair: $\bigcap(a,b)$ and $\bigcup(a,b)$.
-
It mean union $\cup$ or intersection $\cap$ of sets in $(a,b)$. – Sasha Oct 5 '11 at 18:29
@Sasha So it only makes sense if $a$ and $b$ are sets. :) – Paul Manta Oct 5 '11 at 18:30
The cap is intersection, the cup is union. Remember that the ordered pair $(a,b)$ is just shorthand notation for the set $\{ \{ a \} , \{a,b\} \}$. – Ragib Zaman Oct 5 '11 at 18:32
In the question you linked to $(a,b)$ represented a set of sets. Then $\cup (a,b)$ denoted the union of those sets. – Sasha Oct 5 '11 at 18:33
This is actually answered in the linked question, but for clarification, if by definition $(a,b)=\{\{a\},\{a,b\}\}$ then $$\bigcap(a,b) = \bigcap\{\{a\},\{a,b\}\} = \{a\} \cap \{a,b\} = \{a\}$$ and $$\bigcup(a,b) = \bigcup\{\{a\},\{a,b\}\} = \{a\} \cup \{a,b\} = \{a,b\}.$$
|
2016-05-03 20:52:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9307448863983154, "perplexity": 337.0313407716662}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121776.48/warc/CC-MAIN-20160428161521-00074-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://github.com/mozilla/kuma/pull/2038
|
# Add basic math editing features #2038
Merged
merged 2 commits into from Feb 25, 2014
None yet
### 7 participants
Contributor
commented Feb 19, 2014
This solve bug 945228. Note: This is based on @fred-wang's Javascript (La)TeX to MathML parser that has a Firefox Add-ons that have been revised already. This add a plugin for CKEditor that add basic math editing features based on (La)TeX input. You can test it here using kuma's version of CKEditor and here using CKEditor4. There are at least two "minor bugs" that are related with kuma's version of CKEditor. You can find more information about it here and here.
Raniere Silva Add CKEditor-TeXZilla plugin Link to CKEditor-TeXZilla: https://github.com/r-gaia-cs/CKEditor-TeXZilla For this commit it was used c4e484c7f9fd71f5f313892a3da2919ee2b14722 4984e66
Owner
Thanks for the pull request! @darkwing and/or @openjck should check this out. And maybe @elchi3 since he has a working kuma box and has done some of the MathML editing on MDN.
Member
This is good from a code perspective but since @elchi3 is a world renowned MathML expert, I'd like him to see if this is useful.
Contributor
commented Feb 19, 2014
texzilla.js has some warnings (52) if I run it thru http://www.jshint.com/
Member
Unfortunately every CKEditor plugin will. One lesson I've learned is that you never modify the original code of an external library or plugin, so as long as it works and there are no security issues, I think we're fine.
Contributor
commented Feb 19, 2014
missing semicolon could at least be fixed imho :)
Contributor
commented Feb 20, 2014
One lesson I've learned is that you never modify the original code of an external library or plugin It looks like Raniere wrote the plugin himself. (Go Raniere!) I agree it would be nice to pass JSHint. It can be opinionated at times (though not as much as JSLint) but there are usually good reasons for it. Writing JS without J*Lint always feels like walking through a minefield to me. But we might have to agree to disagree on that. 😄
Contributor
Just my two cents on this: the core of the TeX-to-MathML parser is generated from Jison (http://zaach.github.io/jison/docs/). While it is certainly possible to fix the JSLint errors for parts that have been written by hand, I don't think it would be a good idea to try to fix the one generated by Jison (we'll have to do that each time we update the grammar). However, @r-gaia-cs could probably try to fix as many errors as possible...
Contributor
commented Feb 20, 2014
Thanks all for the comments. @openjck I will run JSLint and fix as many errors as possible.
Contributor
commented Feb 20, 2014
I just tested this on my box and it works great. I used some examples from MathJax (http://www.mathjax.org/demos/tex-samples/). The TeX source is stored in the element, so that you can edit the TeX again, once the MathML got generated; very cool! In a spot check, I haven't seen any (JS) errors, so I would say r+ on the functionality. Some layout ideas (optional, not needed per se): Above the box it could say: "Please insert your (LaTex) code:" and then below the box: "(Clicking outside of the textarea updates the preview)". Maybe the two checkboxes could have a "Options:" caption above and they could be next to each other as there is some room on the right. Thanks all, happy to see MDN becoming a mathematician and scientists friendly platform!
Owner
@r-gaia-cs, great work! do you prefer bugzilla bugs or GitHub issues? @Elchi3, can you file new bugs or GitHub issues for those follow-ups?
Contributor
I guess we can use GitHub issues for CKEditor-TeXZilla (I already reported some bugs): https://github.com/r-gaia-cs/CKEditor-TeXZilla/issues?state=open
Contributor
commented Feb 20, 2014
@groovecoder Since the plugin can be reused on others CKEditors instances I prefer GitHub issues.
Owner
@Elchi3 - are you good to merge this as-is and to file follow-up issues at https://github.com/r-gaia-cs/CKEditor-TeXZilla/issues?state=open ? I'd like to merge before this bit-rots.
Contributor
commented Feb 24, 2014
@groovecoder, @Elchi3 I have plans to update this PR in a few hours.
Contributor
commented Feb 25, 2014
I update this PR with the last @fred-wang last changes in TeXZilla and others.
Raniere Silva Update based on version 1.0 34c39e2
Owner
@Elchi3 - can you check this out again now with these updates?
Contributor
commented Feb 25, 2014
r+
merged commit f3a5fd8 into mozilla:master Feb 25, 2014
#### 1 check passed
default Jenkins build 'mdn-github' #2446 has succeeded
Details
deleted the rgaiacs:TeXZilla branch Mar 26, 2014
|
2017-04-30 11:16:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5605955123901367, "perplexity": 3516.5641293649533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125074.20/warc/CC-MAIN-20170423031205-00183-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://www.solvermax.com/blog/production-mix-model-5-pyomo
|
#### 5 September 2022 (1,891 words)
In this article we continue the Python Production mix series, using the Pyomo library. Specifically, we build Model 5, which changes Model 4 to:
• Define the constraints and objective function using def function blocks.
• Output the slack values and dual prices (also known as shadow prices) for each constraint.
These changes give us more control over how the model is defined and provide more information about the solution.
### Articles in this series
Articles in the Python Production mix series:
The Python code and data for this model are available in the following files:
The "Jupyter notebook" file contains a formatted combination of Python code and markdown code – this file should be opened and run in Jupyter Lab. We describe setting up our Python environment, including Jupyter Lab and various Python libraries, in the article Set up a Python modelling environment.
The "Python code" file is a plain text file containing only the Python code of this model. Download this file if you have a non-Jupyter environment for running Python programs.
The "Data" file is a plain text file containing the model's data, in json format.
The model files are also available on GitHub.
### Formulation for Model 5
For this model, we're using the same general formulation that we used for Model 4, as shown in Figure 1.
### Model 5 Python code
#### Import dependencies
The first task is to import the libraries that are needed for our program. The dependencies, as shown in Figure 2, are the same as for Model 4.
#### Get data
The data for Model 5, as shown in Figure 3, is identical to the data for Model 4, except for the model's name.
Note that to edit a json file in Jupyter Lab, you'll need to right-click on the file and select Open with > Editor.
To load the json file, we use the os and json libraries, as shown in Figure 4. This code loads all the json file data into a single object, which we'll parse in the next section.
#### Declarations
The declarations, as shown in Figure 5, are identical to Model 4.
#### Define the model
The model definition is shown in Figure 6. Our definition of the Production variables is the same as for Model 4, but the definitions of the constraints and the objective function are very different.
Specifically, we use a def function to define each of the constraints and the objective function. In this model, each definition consists of three lines of code:
• First line: Declare a function. We need to give the function a name and pass at least the Model object to the function. In more complex models we would also pass indices and possibly other objects.
• Second line: We return a rule for the constraint or objective function. The rules are the same as the expressions we defined in Model 4. In more complex models, this line would expand to multiple lines, with logic to decide what to return. For example, a common task is to skip specific instances of a constraint.
• Third line: Here we call the function, indicating which rule to use and whether we're defining a pyo.Constraint or pyo.Objective. This line doesn't need to be immediately after the function definition, though it is common practice to do so. For complex models, which have long functions, it may be clearer to define the functions and then have their calls collated in a subsequent block of code.
The advantage of using def functions is that they give us much greater control over the definitions. For all but simple models, the standard way to define constraints and the objective function in Pyomo is to use def functions.
#### Solve model
As shown in Figure 7, the code for solving the model is almost the same as for Model 4. The exception is that we've added a line that tells the solver to collate Model.dual prices for the constraints, which we'll print in the output.
#### Process results
The code for processing the solver result, as shown in Figure 8, is the same as for Model 4.
#### Write output
The code for writing the output, as shown in Figure 9, is almost the same as for Model 4, except that:
• We define the default data frame format to have 4 decimal places (i.e., using pd.options.display.float_format), so we have additional precision for calculations that we do below.
• We've added a section to populate a ConstraintStatus data frame that contains the slack values and dual prices for each constraint.
When we find an optimal solution, the output is shown in Figure 10.
The table of slack values and dual prices can be interpreted as follows:
• lSlack is the difference between the constraint's lower bound and its value in the solution. None of our constarints have lower bounds, so the differences are infinite.
• uSlack is the difference between the constraint's upper bound and its value in the solution. For the PeopleHours constraint, uSlack = 41.6667. We can verify this value by substituting the solution's variable values into the constraint, so the left-hand side is: 12.50*6.4103 + 10.00*12.8205 = 208.3338, which is 41.6662 less than the right-hand side of 250 (give-or-take a small rounding difference). If we do a similar calculation for the MaterialsUsage and SalesRelationship constraints, the result is zero – indicating that those constraints are binding.
• The dual prices indicate the marginal change in the objective function for a unit change in a constraint's right-hand side, all else being equal. For the non-binding PeopleHours constraint, the dual price is zero because a change in the right-hand side value has no impact on the solution's objective function value. The other two constraints are binding, so their dual prices are non-zero. For example, if we change the available materials to 501 kg, and re-solve the model, then the objective function increases by 6.16 to 3,083.08 (again, allowing for a small rounding difference). Similarly, changing the available materials to 499 kg reduces the objective function value by 6.15 to 3,070.77.
### Evaluation of this model
Model 5 is our final concrete model in this series of articles.
This model contains the essential elements of a Pyomo optimization model, including importing data, creating data structures, defining the model, solving the model, processing the result, and producing outputs that display the solution and help us understand the solution.
Every model is different, so other models may need to alter or extend the design of this model. Nonetheless, this model provides a good template from which to develop other Pyomo models.
### Next steps
We could further extend this model, particularly by adding more error checking and data validation. For an operational model, such features may be important.
But we'll leave such details for another time. Instead, the next article will focus on an alternative implementation, using a Pyomo "abstract" model. Subsequent articles will then implement the Production mix model using other Python modelling libraries.
### Conclusion
This article continues our exploration of optimization modelling in Python. Compared with Model 4, we define the constraints and objective function using functions, which gives us much greater control over the model definition. We also output additional information about the solution, specifically the slack values and dual prices.
In the next article, we'll look at an alternative way of defining the model, via a Pyomo abstract model.
|
2022-09-24 15:43:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5403934121131897, "perplexity": 867.2170148835633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00482.warc.gz"}
|
https://math.stackexchange.com/questions/1615800/lcm-confusion-question/1618510
|
# LCM confusion question
A section of soldiers are rehearsing for the march past for the National Day parade . If they march in pairs , one soldier will be without a partner . If they match in threes , fives or sevens , they will be a soldier short. Calculate the smallest possible number of soldiers for this section.
After I found LCM for 2,3,5,7 , why must I take the LCM number and minus it by 1 ?
• Have you ever heard of the Chinese Remainder Theorem? The problem statement gives you the system $\begin{cases} x\equiv 1\pmod{2}\\x\equiv 2\pmod{3}\\x\equiv 4\pmod{5}\\x\equiv 6\pmod{7}\end{cases}$ Applying chinese remainder theorem, there will be a unique solution modulo $2\cdot 3\cdot 5\cdot 7$. In this case, it is easy to see that since each of the equivalencies in the system are congruent to $-1$, that $2\cdot 3\cdot 5\cdot 7-1$ is a solution, and since the solution is unique that it must be the only one. – JMoravitz Jan 17 '16 at 17:16
## 1 Answer
Look at it this way. If you just had one more soldier, then the total number of soldiers would be divisible by 2, 3, 5, and 7. So if you just had one more soldier, the smallest number of soldiers you would have is the LCM of 2, 3, 5, and 7.
However, you don't have one more soldier. So the number you do have is the LCM - 1.
|
2019-11-15 02:25:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6774196028709412, "perplexity": 177.41348411325683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668561.61/warc/CC-MAIN-20191115015509-20191115043509-00502.warc.gz"}
|
https://www.physicsforums.com/threads/differential-equations-i-cant-understand-a-textbook-example.245169/
|
# Homework Help: Differential equations, I can't understand a textbook example.
1. Jul 15, 2008
### itunescape
1. The problem statement, all variables and given/known data
Find the recurrence relation and the general term for the solution:
y'' - xy' - y = 0 xo=1
2. Relevant equations
y= sum (n=0 to infinity) an (x-1)^n
3. The attempt at a solution
i get:
y= sum (n=0 to infinity) an x^n
y'= sum (n=0 to infinity) (n+1)an+1 (x-1)^n
y'' = sum (n=0 to infinity) (n+2)(n+1)an+2(x-1)^n
here all of the sums start at zero and the powers of (x-1) all are n, but within the orginal equation there is an +and this is where the textbook doesn't make sense:
the book sets x= 1+ (x-1). Why do you do this? i don't understand.
I thought that: xy' = sum (n=0 to infinity) (n+1) an+1 (x-1)^n+1
but i'm not sure what to do from here...bc the indexes and powers need to be the same before calculating the recurrence relation right?
2. Jul 16, 2008
### Mute
Why are you using $y = \sum a_n (x-1)^n$ as opposed to $y = \sum a_n x^n$? If you just use x^n instead of (x-1)^n, you get
$$\sum_{n=0}^{\infty} n(n+1)a_n x^{n-2} - x\sum_{n=0}^{\infty}a_n n x^{n-1} - \sum_{n=0}^{\infty} a_n x^n = 0$$
In the second term you multiply the x through to get x^n in the sum. You then have two sums with x^n and one with x^{n-2}. So, you change dummy indices in the first summation to get it to be x^n as well. Do you know how to do that?
There may be a reason for using (x-1) instead, such as if for some reason the series doesn't converge well for x < 0 or something. In that case,
$$\sum_{n=0}^{\infty} n(n+1)a_n (x-1)^{n-2} - x\sum_{n=0}^{\infty}a_n n (x-1)^{n-1} - \sum_{n=0}^{\infty} a_n (x-1)^n = 0$$
In this case, you want the (x-1)^(n-1) in the second term to absorb the factor of x out front of it, but you can't do this directly as you have x(x-1)^(n-1), whereas in the above example I wrote you have x x^(n-1) = x^n. So, instead you write x = 1 + (x-1)so that you can multiply that into the second term to get one term that still looks like (x-1)^(n-1) and another that looks like (x-1)^n:
$$\sum_{n=0}^{\infty} n(n+1)a_n (x-1)^{n-2} - \sum_{n=0}^{\infty}a_n n (x-1)^{n-1} - \sum_{n=0}^{\infty}a_n n (x-1)^{n} - \sum_{n=0}^{\infty} a_n (x-1)^n = 0$$
Now you can do the usual business of shifting indices to get your series in terms of powers of (x-1).
Last edited: Jul 16, 2008
3. Jul 16, 2008
### HallsofIvy
Do you mean y(0)= 1 or are you specifically asked to find a series solution expanded around x0= 1?
If you want an expansion about x0= 1, then you mean (x-1)n.
You do understand that x does equal 1+ (x-1) don't you? And they do this because in the xy you need to to have (x-1) to multiply into the sum.
Why would you think that?
$$x y'= x\sum_{n=0}^\infty (n+1)a_n (x-1)^n= \sum_{n=0}^\infty (n+1)a_n x(x-1)^n$$
of course- you can't multiply x times (x-1)n and get (x-1)n+1!
Change the "dummy indices" in the different sums so that you have the same powers.
4. Jul 16, 2008
### itunescape
Thank you guys for helping me clarify what I've been doing wrong.
The series needs to be expanded around xo=1 so it becomes (x-1) from (x-xo).
I've tried doing the work over again and this is what i obtained:
y=sum (n=0 to infinity) an (x-1)^n
y= ao+a1(x-1)+a2(x-1)^2+a3(x-1)^3...an(x-1)^n
y'= sum(n=1 to infinity) n*an (x-1)^n-1
y'=a1+ 2a2(x-1)+3a3(x-1)^2...n*an(x-1)^n-1
(x-1)y'= sum(n=1 to infinity) n*an (x-1)^n
(x-1)y'= a1(x-1)+ 2a2(x-1)^2+ 3a3(x-1)^3...n*an (x-1)^n
y''= sum(n=2 to infinity) (n-1)(n)*an (x-1)^n-2
y''= 2a2+3a3(x-1)+...(n-1)(n)*an (x-1)^n-2
y''= sum(n=0 to infinity) (n+1)(n+2) an+2 (x-1)^n
I need to add up the sums to try and get a recurrence, but i have to change the indexes to n=1 so:
y= ao + sum(n=1 to infinity) an (x-1)^n
y''= 2a2 + sum(n=1 to infinity) (n+1)(n+2)an+2(x-1)^n
2a2 + ao=0
recurrence:
{ (n+1)(n+2)an+2 -(n)an -an=0}
Is all this right so far? the text book has the recurrence as:
{(n+2)an+2 -an+1 -an=0}
is it suppose to be written this way?
Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
|
2018-09-19 19:11:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804469108581543, "perplexity": 1876.0232777865695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156270.42/warc/CC-MAIN-20180919180955-20180919200955-00481.warc.gz"}
|
https://tug.org/pipermail/xetex/2008-July/010431.html
|
# [XeTeX] set trimbox and cropbox and artbox?
Tue Jul 29 14:08:09 CEST 2008
On May 23, 2006, at 11:28 AM, William Adams wrote:
> Would it be possible to extend XeTeX so as to be able to set values
> for these?
>
> Or could it be done using a \special?
I've finally reached a point where I really need to be able to do
this, preferably w/o post-processing the .pdf --- xetex doesn't grok
pdfliteral AFAICT, so I tried:
\begin{document}\special{/TrimBox[9.0 9.0 621.0 801.0]}
which didn't work.
What would work?
William
--
|
2023-04-01 11:48:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391016364097595, "perplexity": 4853.405750197382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00186.warc.gz"}
|
https://zbmath.org/authors/?q=ai%3Aluca.florian
|
# zbMATH — the first resource for mathematics
## Luca, Florian
Compute Distance To:
Author ID: luca.florian Published as: Luca, F.; Luca, Florian Homepage: https://www.wits.ac.za/staff/academic-a-z-listing/l/florianlucawitsacza/ External Links: MGP · Math-Net.Ru · Wikidata · ORCID · dblp · GND
Documents Indexed: 705 Publications since 1993, including 7 Books Reviewing Activity: 306 Reviews
all top 5
all top 5
#### Serials
50 Journal of Number Theory 42 Acta Arithmetica 27 The Fibonacci Quarterly 25 International Journal of Number Theory 21 Colloquium Mathematicum 21 Journal of Integer Sequences 18 Publicationes Mathematicae 18 Integers 14 Indagationes Mathematicae. New Series 13 Periodica Mathematica Hungarica 13 Monatshefte für Mathematik 13 Boletín de la Sociedad Matemática Mexicana. Third Series 12 Glasnik Matematički. Serija III 11 Bulletin of the Australian Mathematical Society 11 Annales Mathematicae et Informaticae 10 Rocky Mountain Journal of Mathematics 10 Proceedings of the American Mathematical Society 10 Journal de Théorie des Nombres de Bordeaux 9 Glasgow Mathematical Journal 8 Mathematics of Computation 8 Archiv der Mathematik 8 Bulletin Mathématique de la Société des Sciences Mathématiques de Roumanie. Nouvelle Série 8 The Ramanujan Journal 8 Uniform Distribution Theory 7 Mathematica Slovaca 6 Functiones et Approximatio. Commentarii Mathematici 6 Proceedings of the Edinburgh Mathematical Society. Series II 6 The New York Journal of Mathematics 6 Journal of Combinatorics and Number Theory 5 American Mathematical Monthly 5 Annales des Sciences Mathématiques du Québec 5 Canadian Mathematical Bulletin 5 Proceedings of the Japan Academy. Series A 5 Revista Colombiana de Matemáticas 5 Congressus Numerantium 5 Smarandache Notions Journal 4 Lithuanian Mathematical Journal 4 Bulletin of the London Mathematical Society 4 International Journal of Mathematics and Mathematical Sciences 4 Mathematika 4 Quaestiones Mathematicae 4 Mathematica Bohemica 4 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 4 Acta Academiae Paedagogicae Agriensis. Nova Series. Sectio Matematicae 4 Journal of the Australian Mathematical Society 4 Portugaliae Mathematica. Nova Série 3 Mathematical Proceedings of the Cambridge Philosophical Society 3 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 3 Indian Journal of Mathematics 3 Journal für die Reine und Angewandte Mathematik 3 Studia Scientiarum Mathematicarum Hungarica 3 Transactions of the American Mathematical Society 3 Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae. Sectio Computatorica 3 IMRN. International Mathematics Research Notices 3 Aequationes Mathematicae 3 Divulgaciones Matemáticas 3 Mathematical Communications 3 Revista Matemática Complutense 3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 3 Communications in Mathematics 3 Moscow Journal of Combinatorics and Number Theory 3 Research in Number Theory 2 Houston Journal of Mathematics 2 Archivum Mathematicum 2 Journal of Algebra 2 Michigan Mathematical Journal 2 Publications de l’Institut Mathématique. Nouvelle Série 2 Results in Mathematics 2 Bulletin of the Greek Mathematical Society 2 Bulletin of the Korean Mathematical Society 2 Acta Mathematica Hungarica 2 Forum Mathematicum 2 Journal of the Ramanujan Mathematical Society 2 Elemente der Mathematik 2 Analele Ştiinţifice ale Universităţii Al. I. Cuza din Iaşi. Serie Nouă. Matematică 2 Applicable Algebra in Engineering, Communication and Computing 2 Experimental Mathematics 2 Bulletin of the Belgian Mathematical Society - Simon Stevin 2 Turkish Journal of Mathematics 2 Finite Fields and their Applications 2 Analele Ştiinţifice ale Universităţii “Ovidius” Constanţa. Seria: Matematică 2 Annals of Combinatorics 2 Acta Mathematica Sinica. English Series 2 The Quarterly Journal of Mathematics 2 La Gaceta de la Real Sociedad Matemática Española 2 Nieuw Archief voor Wiskunde. Vijfde Serie 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 JP Journal of Algebra, Number Theory and Applications 2 Missouri Journal of Mathematical Sciences 2 Mediterranean Journal of Mathematics 2 Algebra & Number Theory 2 Albanian Journal of Mathematics 2 Annales Mathématiques du Québec 1 Communications in Algebra 1 Discrete Mathematics 1 IEEE Transactions on Information Theory 1 Indian Journal of Pure & Applied Mathematics 1 Revue Roumaine de Mathématiques Pures et Appliquées 1 Acta Scientiarum Mathematicarum 1 Ars Combinatoria ...and 64 more Serials
all top 5
#### Fields
689 Number theory (11-XX) 25 Combinatorics (05-XX) 9 Field theory and polynomials (12-XX) 9 Group theory and generalizations (20-XX) 8 Algebraic geometry (14-XX) 5 Information and communication theory, circuits (94-XX) 4 General and overarching topics; collections (00-XX) 4 $$K$$-theory (19-XX) 3 History and biography (01-XX) 2 Functions of a complex variable (30-XX) 2 Sequences, series, summability (40-XX) 2 Geometry (51-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Commutative algebra (13-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Associative rings and algebras (16-XX) 1 Category theory; homological algebra (18-XX) 1 Real functions (26-XX) 1 Partial differential equations (35-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Difference and functional equations (39-XX) 1 Algebraic topology (55-XX) 1 Probability theory and stochastic processes (60-XX) 1 Systems theory; control (93-XX)
#### Citations contained in zbMATH Open
462 Publications have been cited 1,740 times in 927 Documents Cited by Year
On a conjecture about repdigits in $$k$$-generalized Fibonacci sequences. Zbl 1274.11035
Bravo, Jhon J.; Luca, Florian
2013
Powers of two in generalized Fibonacci sequences. Zbl 1353.11020
Bravo, Jhon J.; Luca, Florian
2012
On the complexity of algebraic numbers. Zbl 1119.11019
Adamczewski, Boris; Bugeaud, Yann; Luca, Florian
2004
Fibonacci and Lucas numbers with only one distinct digit. Zbl 0958.11007
Luca, F.
2000
Linear combinations of factorials and $$S$$-units in a binary recurrence sequence. Zbl 1361.11007
Sanchez, Sergio Guzmán; Luca, Florian
2014
Analytic number theory. Exploring the anatomy of integers. Zbl 1247.11001
De Koninck, Jean-Marie; Luca, Florian
2012
Fibonacci numbers at most one away from a perfect power. Zbl 1156.11008
Bugeaud, Yann; Luca, Florian; Mignotte, Maurice; Siksek, Samir
2008
Powers of two as sums of two $$k$$-Fibonacci numbers. Zbl 1389.11041
Bravo, Jhon J.; Gómez, Carlos A.; Luca, Florian
2016
On the equation $$x^2 + 2^a \cdot 3^b = y^n$$. Zbl 1085.11021
Luca, Florian
2002
Repdigits as sums of three Fibonacci numbers. Zbl 1305.11008
Luca, Florian
2012
On some problems of Mąkowski-Schinzel and Erdős concerning the arithmetical functions $$\varphi$$ and $$\sigma$$. Zbl 1027.11007
Luca, Florian; Pomerance, Carl
2002
17 lectures on Fermat numbers. From number theory to geometry. With a foreword by Alena Šolcová. Zbl 1010.11002
Křížek, Michal; Luca, Florian; Somer, Lawrence
2001
Distinct digits in base $$b$$ expansions of linear recurrence sequences. Zbl 1030.11004
Luca, Florian
2000
Coincidences in generalized Fibonacci sequences. Zbl 1272.11028
Bravo, Jhon J.; Luca, Florian
2013
On numbers $$n$$ dividing the $$n$$th term of a linear recurrence. Zbl 1262.11015
Alba González, Juan José; Luca, Florian; Pomerance, Carl; Shparlinski, Igor E.
2012
Fibonacci numbers which are sums of two repdigits. Zbl 1287.11021
2011
A generalization of a classical zero-sum problem. Zbl 1123.11012
Luca, Florian
2007
Some relationships between poly-Cauchy numbers and poly-Bernoulli numbers. Zbl 1289.11021
Komatsu, Takao; Luca, Florian
2013
On the largest prime factor of the $$k$$-Fibonacci numbers. Zbl 1292.11034
Bravo, Jhon J.; Luca, Florian
2013
Character sums and congruences with $$n!$$. Zbl 1060.11046
Garaev, Moubariz Z.; Luca, Florian; Shparlinski, Igor E.
2004
On a Diophantine equation. Zbl 0997.11027
Luca, Florian
2000
On a problem of Pillai with Fibonacci numbers and powers of 2. Zbl 1421.11017
Ddamulira, Mahadi; Luca, Florian; Rakotomalala, Mihaja
2017
On the $$x$$-coordinates of Pell equations which are rep-digits. Zbl 1389.11076
Dossavi-Yovo, Appolinaire; Luca, Florian; Togbé, Alain
2016
On the Diophantine equation $$x^2+2^a\cdot 5^b=y^n$$. Zbl 1231.11041
Luca, Florian; Togbé, Alain
2008
Diophantine equations with products of consecutive terms in Lucas sequences. Zbl 1081.11023
Luca, F.; Shorey, T. N.
2005
Values of the Euler $$\varphi$$-function not divisible by a given odd prime, and the distribution of Euler-Kronecker constants for cyclotomic fields. Zbl 1294.11164
Ford, Kevin; Luca, Florian; Moree, Pieter
2014
Irreducible radical extensions and Euler-function chains. Zbl 1172.11029
Luca, Florian; Pomerance, Carl
2007
On $$X$$-coordinates of Pell equations that are repdigits. Zbl 1459.11082
2018
On the $$x$$-coordinates of Pell equations which are Fibonacci numbers. Zbl 1416.11027
Luca, Florian; Togbé, Alain
2018
On the $$X$$-coordinates of Pell equations which are Tribonacci numbers. Zbl 1410.11007
Luca, Florian; Montejano, Amanda; Szalay, Laszlo; Togbé, Alain
2017
Functional graphs of polynomials over finite fields. Zbl 1327.05323
Konyagin, Sergei V.; Luca, Florian; Mans, Bernard; Mathieson, Luke; Sha, Min; Shparlinski, Igor E.
2016
The distribution of self-Fibonacci divisors. Zbl 1390.11119
Luca, Florian; Tron, Emanuele
2015
On the exponential local-global principle. Zbl 1330.11019
Bartolome, Boris; Bilu, Yuri; Luca, Florian
2013
An exponential Diophantine equation related to powers of two consecutive Fibonacci numbers. Zbl 1253.11046
Luca, Florian; Oyono, Roger
2011
Almost powers in the Lucas sequence. Zbl 1204.11030
Bugeaud, Yann; Luca, Florian; Mignotte, Maurice; Siksek, Samir
2008
Fibonacci numbers of the form $$p^a\pm p^b+1$$. Zbl 1228.11021
Luca, Florian; Szalay, László
2007
On the lower bound of the linear complexity over $$\mathbb F_p$$ of Sidelnikov sequences. Zbl 1296.94073
Garaev, Moubariz Z.; Luca, Florian; Shparlinski, Igor; Winterhof, Arne
2006
On Pillai’s Diophantine equation. Zbl 1136.11026
Bugeaud, Yann; Luca, Florian
2006
On the exponent of the group of points on elliptic curves in extension fields. Zbl 1082.11041
Luca, Florian; Shparlinski, Igor E.
2005
Divisibility of class numbers: enumerative approach. Zbl 1072.11084
Bilu, Yuri F.; Luca, Florian
2005
On Pillai’s problem with tribonacci numbers and powers of 2. Zbl 1379.11013
Bravo, Jhon J.; Luca, Florian; Yazán, Karina
2017
$$p$$-adic quotient sets. Zbl 1428.11023
Garcia, Stephan Ramon; Hong, Yu Xuan; Luca, Florian; Pinsker, Elena; Sanna, Carlo; Schechter, Evan; Starr, Adam
2017
Quotients of Fibonacci numbers. Zbl 1391.11027
Garcia, Stephan Ramon; Luca, Florian
2016
Rational products of singular moduli. Zbl 1400.11113
Bilu, Yuri; Luca, Florian; Pizarro-Madariaga, Amalia
2016
Pell and Pell-Lucas numbers with only one distinct digit. Zbl 1349.11023
2015
Generalized balancing numbers. Zbl 1239.11035
Liptai, Kálmán; Luca, Florian; Pintér, Ákos; Szalay, László
2009
Fibonacci Diophantine triples. Zbl 1218.11020
Luca, Florian; Szalay, László
2008
Catalan and Apéry numbers in residue classes. Zbl 1101.11010
Garaev, Moubariz Z.; Luca, Florian; Shparlinski, Igor E.
2006
The Diophantine equation $$P(x)=n!$$ and a result of M. Overholt. Zbl 1085.11023
Luca, Florian
2002
On shifted primes with large prime factors and their products. Zbl 1370.11110
Luca, Florian; Menares, Ricardo; Pizarro-Madariaga, Amalia
2015
On stable quadratic polynomials. Zbl 1241.11027
Ahmadi, Omran; Luca, Florian; Ostafe, Alina; Shparlinski, Igor E.
2012
On composite integers $$n$$ for which $$\varphi(n)\mid n-1$$. Zbl 1294.11005
Luca, Florian; Pomerance, Carl
2011
Some additive combinatorics problems in matrix rings. Zbl 1208.11037
Ferguson, Ron; Hoffman, Corneliu; Luca, Florian; Ostafe, Alina; Shparlinski, Igor E.
2010
Common values of the arithmetic functions $$\varphi$$ and $$\sigma$$. Zbl 1205.11010
Ford, Kevin; Luca, Florian; Pomerance, Carl
2010
Some results on Oppenheim’s “factorisatio numerorum” function. Zbl 1213.11020
2010
On the Diophantine equation $$x^2 + 2^{\alpha}5^{\beta}13^{\gamma} = y^n$$. Zbl 1232.11130
Goins, Edray; Luca, Florian; Togbé, Alain
2008
On the Diophantine equation $$x^2 + 5^a 13^b = y^n$$. Zbl 1186.11016
Abu Muriefah, Fadwa S.; Luca, Florian; Togbé, Alain
2008
On a diophantine equation related to a conjecture of Erdös and Graham. Zbl 1132.11319
Luca, F.; Walsh, P. G.
2007
On factorials which are products of factorials. Zbl 1132.11017
Luca, Florian
2007
Fibonacci numbers with the Lehmer property. Zbl 1112.11007
Luca, Florian
2007
On the maximal order of numbers in the “factorisatio numerorum” problem. Zbl 1169.11043
Klazar, Martin; Luca, Florian
2007
Exponential sums and congruences with factorials. Zbl 1071.11051
Garaev, Moubariz Z.; Luca, Florian; Shparlinski, Igor E.
2005
Average order in cyclic groups. Zbl 1079.11003
von zur Gathen, Joachim; Knopfmacher, Arnold; Luca, Florian; Lucht, Lutz G.; Shparlinski, Igor E.
2004
MOV attack in various subgroups on elliptic curves. Zbl 1072.11094
Luca, Florian; Mireles, David Jose; Shparlinski, Igor E.
2004
On the largest prime factor of $$(ab+1)(ac+1)(bc+1)$$. Zbl 1108.11030
Hernández, Santos; Luca, Florian
2003
The number of non-zero digits of $$n$$! Zbl 1043.11008
Luca, Florian
2002
Some remarks on Heron triangles. Zbl 1062.11019
Kramer, Alpar-Vajk; Luca, Florian
2000
On a problem of Pillai with $$k$$-generalized Fibonacci numbers and powers of 2. Zbl 1437.11051
Ddamulira, Mahadi; Gómez, Carlos A.; Luca, Florian
2018
On perfect powers that are sums of two Fibonacci numbers. Zbl 06865878
Luca, Florian; Patel, Vandita
2018
Powers of two as sums of three Pell numbers. Zbl 1433.11032
Bravo, Jhon J.; Faye, Bernadette; Luca, Florian
2017
On the $$x$$-coordinates of Pell equations which are Fibonacci numbers. II. Zbl 1420.11037
Kafle, Bir; Luca, Florian; Togbé, Alain
2017
Powers of two as sums of two Lucas numbers. Zbl 1358.11026
Bravo, Jhon J.; Luca, Florian
2014
Control of coupled parabolic systems and Diophantine approximations. Zbl 1272.93029
Luca, Florian; De Teresa, Luz
2013
On equal values of power sums of arithmetic progressions. Zbl 1330.11020
Bazsó, András; Kreso, Dijana; Luca, Florian; Pintér, Ákos
2012
On the number of isogeny classes of pairing-friendly elliptic curves and statistics of MNT curves. Zbl 1329.11063
Jiménez-Urroz, Jorge; Luca, Florian; Shparlinski, Igor E.
2012
On the number of factorizations of an integer. Zbl 1245.11100
Balasubramanian, Ramachandran; Luca, Florian
2011
Fibonacci numbers which are sums of three factorials. Zbl 1259.11038
Bollman, Mark; Hernández Hernández, Santos; Luca, Florian
2010
On factorials expressible as sums of at most three Fibonacci numbers. Zbl 1253.11048
Luca, Florian; Siksek, Samir
2010
Fibonacci numbers of the form $$p^a\pm p^b$$. Zbl 1273.11030
Luca, Florian; Stănică, Pantelimon
2009
Non-holonomicity of sequences defined via elementary functions. Zbl 1189.11007
Bell, Jason P.; Gerhold, Stefan; Klazar, Martin; Luca, Florian
2008
On the Diophantine equation $$x^2+7^{2k}=y^n$$. Zbl 1221.11091
Luca, Florian; Togbé, Alain
2007
Composite integers $$n$$ for which $$\varphi (n)\mid n-1$$. Zbl 1215.11094
Banks, William D.; Luca, Florian
2007
Perfect powers from products of terms in Lucas sequences. Zbl 1137.11011
Bugeaud, Yann; Luca, Florian; Mignotte, Maurice; Siksek, Samir
2007
Elliptic curves with low embedding degree. Zbl 1133.14303
Luca, Florian; Shparlinski, Igor E.
2006
Diophantine $$m$$-tuples for primes. Zbl 1085.11019
Dujella, Andrej; Luca, Florian
2005
On shifted products which are powers. Zbl 1123.11011
Luca, Florian
2005
Fibonacci numbers that are not sums of two prime powers. Zbl 1113.11011
Luca, Florian; Stănică, Pantelimon
2005
A quantitative lower bound for the greatest prime factor of $$(ab+1)(bc+1)(ca+1)$$. Zbl 1122.11060
Bugeaud, Yann; Luca, Florian
2004
The Diophantine equation $$x^2=p^a\pm p^b+1$$. Zbl 1067.11016
Luca, Florian
2004
On the prime power factorization of $$n$$! Zbl 1049.11092
Luca, Florian; Stănică, Pantelimon
2003
Palindromes in Lucas sequences. Zbl 1027.11012
Luca, Florian
2003
On the convergence of series of reciprocals of primes related to the Fermat numbers. Zbl 1026.11011
Křížek, Michal; Luca, Florian; Somer, Lawrence
2002
Squares in Lehmer sequences and some Diophantine applications. Zbl 1006.11011
Luca, Florian; Walsh, P. G.
2001
On the $$x$$-coordinates of Pell equations which are $$k$$-generalized Fibonacci numbers. Zbl 1447.11025
2020
Repdigits as sums of four Pell numbers. Zbl 1455.11019
Luca, Florian; Normenyo, Benedict Vasco; Togbé, Alain
2019
On cyclotomic factors of polynomials related to modular forms. Zbl 1450.11041
Heim, Bernhard; Luca, Florian; Neuhauser, Markus
2019
$$X$$-coordinates of Pell equations as sums of two tribonacci numbers. Zbl 1424.11036
Bravo, Eric F.; Gómez Ruiz, Carlos Alexis; Luca, Florian
2018
Repdigits as sums of three Pell numbers. Zbl 1413.11008
Normenyo, Benedict Vasco; Luca, Florian; Togbé, Alain
2018
Polynomial values of sums of products of consecutive integers. Zbl 1442.11055
Bazsó, A.; Bérczes, A.; Hajdu, L.; Luca, F.
2018
Every positive integer is a sum of three palindromes. Zbl 1441.11016
Cilleruelo, Javier; Luca, Florian; Baxter, Lewis
2018
On the $$x$$-coordinates of Pell equations which are $$k$$-generalized Fibonacci numbers. Zbl 1447.11025
2020
Zeckendorf representations with at most two terms to $$x$$-coordinates of Pell equations. Zbl 1455.11031
Gómez, Carlos A.; Luca, Florian
2020
Primitive root bias for twin primes. II: Schinzel-type theorems for totient quotients and the sum-of-divisors function. Zbl 1450.11002
Garcia, Stephan Ramon; Luca, Florian; Shi, Kye; Udell, Gabe
2020
Products of $$k$$-Fibonacci numbers which are rep-digits. Zbl 07287377
2020
Correction to: $$X$$-coordinates of Pell equations as sums of two tribonacci numbers. Zbl 1449.11051
Bravo, Eric F.; Gómez Ruiz, Carlos Alexis; Luca, Florian
2020
Trinomials with given roots. Zbl 07152833
Bilu, Yuri; Luca, Florian
2020
Multiplicative dependence between $$k$$-Fibonacci and $$k$$-Lucas numbers. Zbl 07301185
Gómez, Carlos A.; Gómez, Jhonny C.; Luca, Florian
2020
The $$x$$-coordinates of Pell equations and sums of two Fibonacci numbers. II. Zbl 07271355
2020
On members of Lucas sequences which are products of factorials. Zbl 07242544
Laishram, Shanta; Luca, Florian; Sias, Mark
2020
Lucas factoriangular numbers. Zbl 07217178
Kafle, Bir; Luca, Florian; Togbé, Alain
2020
On a Diophantine equation involving powers of Fibonacci numbers. Zbl 07192786
Gueth, Krisztián; Luca, Florian; Szalay, László
2020
Pell and Pell-Lucas numbers as sums of two repdigits. Zbl 1452.11020
Adegbindin, Chèfiath; Luca, Florian; Togbé, Alain
2020
On $$Y$$-coordinates of Pell equations which are members of a fixed binary recurrence. Zbl 07179055
2020
On certain sums concerning the gcd’s and lcm’s of $$k$$ positive integers. Zbl 1452.11007
Hilberdink, Titus; Luca, Florian; Tóth, László
2020
Repdigits as sums of four Pell numbers. Zbl 1455.11019
Luca, Florian; Normenyo, Benedict Vasco; Togbé, Alain
2019
On cyclotomic factors of polynomials related to modular forms. Zbl 1450.11041
Heim, Bernhard; Luca, Florian; Neuhauser, Markus
2019
$$X$$-coordinates of Pell equations which are Lucas numbers. Zbl 07137965
Kafle, Bir; Luca, Florian; Togbé, Alain
2019
Primitive root bias for twin primes. Zbl 1450.11001
Garcia, Stephan Ramon; Kahoro, Elvis; Luca, Florian
2019
Repdigits as sums of three Lucas numbers. Zbl 1459.11041
Luca, Florian; Normenyo, Benedict Vasco; Togbé, Alain
2019
$$x$$-Coordinates of Pell equations which are Tribonacci numbers. II. Zbl 1449.11030
Kafle, Bir; Luca, Florian; Togbé, Alain
2019
Lucas numbers as sums of two repdigits. Zbl 1427.11013
Adegbindin, Chèfiath; Luca, Florian; Togbé, Alain
2019
On Pillai’s problem with the Fibonacci and Pell sequences. Zbl 1440.11018
Hernández Hernández, Santos; Luca, Florian; Rivera, Luis Manuel
2019
On the $$x$$-coordinates of Pell equations which are products of two Fibonacci numbers. Zbl 1420.11061
Kafle, Bir; Luca, Florian; Montejano, Amanda; Szalay, László; Togbé, Alain
2019
On Pillai’s problem with $$X$$-coordinates of Pell equations and powers of 2. Zbl 07081797
Erazo, Harold S.; Gómez, Carlos A.; Luca, Florian
2019
Recurrence relations for polynomials obtained by arithmetic functions. Zbl 1435.11055
Heim, Bernhard; Luca, Florian; Neuhauser, Markus
2019
On the discriminator of Lucas sequences. Zbl 1456.11021
Faye, Bernadette; Luca, Florian; Moree, Pieter
2019
On the zero-multiplicity of a fifth-order linear recurrence. Zbl 1450.11009
Gómez Ruiz, Carlos Alexis; Luca, Florian
2019
An exponential Diophantine equation related to the difference between powers of two consecutive Balancing numbers. Zbl 1449.11036
Rihane, Salah Eddine; Faye, Bernadette; Luca, Florian; Togbé, Alain
2019
Product of consecutive tribonacci numbers with only one distinct digit. Zbl 1455.11027
Bravo, Eric F.; Gómez, Carlos A.; Luca, Florian
2019
Diophantine equations with the Ramanujan $$\tau$$ function of factorials, Fibonacci numbers, and Catalan numbers. Zbl 1456.11031
Luca, Florian; Mabaso, Sibusiso
2019
On the exponential Diophantine equation $$P_n^x+P_{n+1}^x=P_m$$. Zbl 1455.11056
Rihane, Salah Eddine; Faye, Bernadette; Luca, Florian; Togbé, Alain
2019
On the typical size and cancellations among the coefficients of some modular forms. Zbl 1450.11100
Luca, Florian; Radziwiłł, Maksym; Shparlinski, Igor E.
2019
Linear independence of powers of singular moduli of degree three. Zbl 1451.11069
Luca, Florian; Riffaut, Antonin
2019
On $$X$$-coordinates of Pell equations that are repdigits. Zbl 1459.11082
2018
On the $$x$$-coordinates of Pell equations which are Fibonacci numbers. Zbl 1416.11027
Luca, Florian; Togbé, Alain
2018
On a problem of Pillai with $$k$$-generalized Fibonacci numbers and powers of 2. Zbl 1437.11051
Ddamulira, Mahadi; Gómez, Carlos A.; Luca, Florian
2018
On perfect powers that are sums of two Fibonacci numbers. Zbl 06865878
Luca, Florian; Patel, Vandita
2018
$$X$$-coordinates of Pell equations as sums of two tribonacci numbers. Zbl 1424.11036
Bravo, Eric F.; Gómez Ruiz, Carlos Alexis; Luca, Florian
2018
Repdigits as sums of three Pell numbers. Zbl 1413.11008
Normenyo, Benedict Vasco; Luca, Florian; Togbé, Alain
2018
Polynomial values of sums of products of consecutive integers. Zbl 1442.11055
Bazsó, A.; Bérczes, A.; Hajdu, L.; Luca, F.
2018
Every positive integer is a sum of three palindromes. Zbl 1441.11016
Cilleruelo, Javier; Luca, Florian; Baxter, Lewis
2018
On the difference in values of the Euler totient function near prime arguments. Zbl 1455.11127
Garcia, Stephan Ramon; Luca, Florian
2018
Repdigits as sums of four Fibonacci or Lucas numbers. Zbl 1453.11006
Normenyo, Benedict Vasco; Luca, Florian; Togb, Alain
2018
Primitive root biases for prime pairs. I: Existence and non-totality of biases. Zbl 1431.11112
Garcia, Stephan Ramon; Luca, Florian; Schaaff, Timothy
2018
On Pillai’s problem with Pell numbers and powers of 2. Zbl 1448.11042
Ouamar Hernane, Mohand; Luca, Florian; Rihane, Salah; Togbé, Alain
2018
Diophantine triples and $$k$$-generalized Fibonacci sequences. Zbl 1447.11054
Fuchs, Clemens; Hutle, Christoph; Luca, Florian; Szalay, László
2018
Random ordering in modulus of consecutive Hecke eigenvalues of primitive forms. Zbl 1444.11071
Bilu, Yuri F.; Deshouillers, Jean-Marc; Gun, Sanoli; Luca, Florian
2018
Markov equation with Fibonacci components. Zbl 1458.11056
Luca, Florian; Srinivasan, Anitha
2018
A Diophantine equation in $$k$$-Fibonacci numbers and repdigits. Zbl 1407.11026
Bravo, Jhon J.; Gómez, Carlos A.; Luca, Florian
2018
Arithmetic properties of coefficients of $$L$$-functions of elliptic curves. Zbl 1423.11169
Güloğlu, Ahmet M.; Luca, Florian; Yalçiner, Aynur
2018
Denominators of Bernoulli polynomials. Zbl 1415.11121
Bordellès, Olivier; Luca, Florian; Moree, Pieter; Shparlinski, Igor E.
2018
A variation on the theme of Nicomachus. Zbl 1410.11079
Luca, Florian; Polanco, Geremías; Zudilin, Wadim
2018
On the error term of a lattice counting problem. Zbl 1380.11085
Bordellès, Olivier; Luca, Florian; Shparlinski, Igor E.
2018
On a problem of Pillai with Fibonacci numbers and powers of 2. Zbl 1421.11017
Ddamulira, Mahadi; Luca, Florian; Rakotomalala, Mihaja
2017
On the $$X$$-coordinates of Pell equations which are Tribonacci numbers. Zbl 1410.11007
Luca, Florian; Montejano, Amanda; Szalay, Laszlo; Togbé, Alain
2017
On Pillai’s problem with tribonacci numbers and powers of 2. Zbl 1379.11013
Bravo, Jhon J.; Luca, Florian; Yazán, Karina
2017
$$p$$-adic quotient sets. Zbl 1428.11023
Garcia, Stephan Ramon; Hong, Yu Xuan; Luca, Florian; Pinsker, Elena; Sanna, Carlo; Schechter, Evan; Starr, Adam
2017
Powers of two as sums of three Pell numbers. Zbl 1433.11032
Bravo, Jhon J.; Faye, Bernadette; Luca, Florian
2017
On the $$x$$-coordinates of Pell equations which are Fibonacci numbers. II. Zbl 1420.11037
Kafle, Bir; Luca, Florian; Togbé, Alain
2017
On Diophantine quadruples of Fibonacci numbers. Zbl 1386.11065
Fujita, Yasutsugu; Luca, Florian
2017
On arithmetic lattices in the plane. Zbl 1358.11074
Fukshansky, Lenny; Guerzhoy, Pavel; Luca, Florian
2017
Diversity in parametric families of number fields. Zbl 1414.11150
Bilu, Yuri; Luca, Florian
2017
Only finitely many Tribonacci Diophantine triples exist. Zbl 1440.11042
Fuchs, Clemens; Hutle, Christoph; Irmak, Nurettin; Luca, Florian; Szalay, László
2017
Fibonacci factoriangular numbers. Zbl 1373.11011
Gómez Ruiz, Carlos Alexis; Luca, Florian
2017
The $$r$$th moment of the divisor function: an elementary approach. Zbl 1366.11103
Luca, Florian; Tóth, László
2017
Palindromes in several sequences. Zbl 1429.11018
Luca, Florian
2017
On the number of non-zero digits of integers in multi-base representations. Zbl 1399.11027
Bertók, Cs.; Hajdu, L.; Luca, F.; Sharma, D.
2017
Generalized incomplete poly-Bernoulli polynomials and generalized incomplete poly-Cauchy polynomials. Zbl 1409.11015
Komatsu, Takao; Luca, Florian
2017
Counting terms $$U_n$$ of third order linear recurrences with $$u_n = u^2 + nv^2$$. Zbl 1425.11024
Ciolan, Alexandru; Luca, Florian; Moree, Pieter
2017
Local behavior of the composition of the aliquot and co-totient functions. Zbl 1421.11076
Luca, Florian; Pomerance, Carl
2017
Number fields in fibers: the geometrically abelian case with rational critical values. Zbl 1413.12001
Bilu, Yuri; Luca, Florian
2017
Diophantine triples with values in the sequences of Fibonacci and Lucas numbers. Zbl 1417.11011
Luca, Florian; Munagi, Augustine O.
2017
Corrigendum to “Positive integers divisible by the product of their nonzero digits”, portugaliae math. 64 (2007), 1: 75 – 85. Zbl 1434.11195
De Koninck, Jean-Marie; Luca, Florian
2017
Lucas numbers with the Lehmer property. Zbl 1389.11043
2017
On polynomials whose roots have rational quotient of differences. Zbl 1435.11061
Luca, Florian
2017
On two functions arising in the study of the Euler and Carmichael quotients. Zbl 1425.11009
Luca, Florian; Sha, Min; Shparlinski, Igor E.
2017
Collinear CM-points. Zbl 1432.11061
Bilu, Yuri; Luca, Florian; Masser, David
2017
Pell numbers with the Lehmer property. Zbl 1439.11015
2017
Monotonic phinomial coefficients. Zbl 1437.11006
Luca, Florian; Stănică, Pantelimon
2017
Powers of two as sums of two $$k$$-Fibonacci numbers. Zbl 1389.11041
Bravo, Jhon J.; Gómez, Carlos A.; Luca, Florian
2016
On the $$x$$-coordinates of Pell equations which are rep-digits. Zbl 1389.11076
Dossavi-Yovo, Appolinaire; Luca, Florian; Togbé, Alain
2016
Functional graphs of polynomials over finite fields. Zbl 1327.05323
Konyagin, Sergei V.; Luca, Florian; Mans, Bernard; Mathieson, Luke; Sha, Min; Shparlinski, Igor E.
2016
Quotients of Fibonacci numbers. Zbl 1391.11027
Garcia, Stephan Ramon; Luca, Florian
2016
Rational products of singular moduli. Zbl 1400.11113
Bilu, Yuri; Luca, Florian; Pizarro-Madariaga, Amalia
2016
Fibonacci numbers which are products of two Pell numbers. Zbl 1400.11040
Ddamulira, Mahadi; Luca, Florian; Rakotomalala, Mihaja
2016
An explicit bound for the number of partitions into roots. Zbl 1343.05031
Luca, Florian; Ralaivaosaona, Dimbinaina
2016
Visual properties of generalized Kloosterman sums. Zbl 1338.11077
Burkhardt, Paula; Chan, Alice Zhuo-Yu; Currier, Gabriel; Garcia, Stephan Ramon; Luca, Florian; Suh, Hong
2016
On the Diophantine equation $$F_n + F_m=2^a$$. Zbl 1419.11024
Bravo, Jhon J.; Luca, Florian
2016
Sylvester’s theorem and the non-integrality of a certain binomial sum. Zbl 1400.11061
López-Aguayo, Daniel; Luca, Florian
2016
Multiplicative independence in $$k$$-generalized Fibonacci sequences. Zbl 1355.11010
Gómez Ruiz, Carlos Alexis; Luca, Florian
2016
Diophantine triples of Fibonacci numbers. Zbl 1359.11028
He, Bo; Luca, Florian; Togbé, Alain
2016
Cyclotomic coefficients: gaps and jumps. Zbl 1405.11032
Camburu, Oana-Maria; Ciolan, Emil-Alexandru; Luca, Florian; Moree, Pieter; Shparlinski, Igor E.
2016
A note on odd perfect numbers. Zbl 1400.11010
Dris, Jose Arnaldo B.; Luca, Florian
2016
Repdigits as Euler functions of Lucas numbers. Zbl 1389.11040
Bravo, Jhon J.; Faye, Bernadette; Luca, Florian; Tall, Amandou
2016
An elliptic sequence is not a sampled linear recurrence sequence. Zbl 1367.11020
Luca, F.; Ward, T.
2016
On a divisibility relation for Lucas sequences. Zbl 1402.11023
Bilu, Yuri F.; Komatsu, Takao; Luca, Florian; Pizarro-Madariaga, Amalia; Stănică, Pantelimon
2016
Carmichael numbers in the sequence $$(2^{n} k+1)_{n\geq 1}$$. Zbl 1400.11017
Cilleruelo, Javier; Luca, Florian; Pizarro-Madariaga, Amalia
2016
The distribution of self-Fibonacci divisors. Zbl 1390.11119
Luca, Florian; Tron, Emanuele
2015
Pell and Pell-Lucas numbers with only one distinct digit. Zbl 1349.11023
2015
On shifted primes with large prime factors and their products. Zbl 1370.11110
Luca, Florian; Menares, Ricardo; Pizarro-Madariaga, Amalia
2015
...and 362 more Documents
all top 5
#### Cited by 878 Authors
210 Luca, Florian 46 Shparlinski, Igor E. 27 Togbé, Alain 22 Pollack, Paul 21 Hajdu, Lajos 19 Bugeaud, Yann 16 Sanna, Carlo 14 Bravo, Jhon Jairo 14 Marques, Diego 14 Pomerance, Carl Bernard 14 Szalay, László 13 Komatsu, Takao 12 De Koninck, Jean-Marie 12 Dubickas, Artūras 12 Dujella, Andrej 12 Gómez, Carlos Alexis 12 Stănică, Pantelimon 11 Chen, Yonggao 11 Garaev, Moubariz Z. 10 Adamczewski, Boris 10 Ddamulira, Mahadi 10 Keskin, Refik 10 Pink, István 10 Ziegler, Volker 9 Bennett, Michael A. 9 Bilu, Yuri F. 9 Cilleruelo, Javier 9 Gómez Ruiz, Carlos Alexis 9 Siksek, Samir 9 Soydan, Gokhan 8 Banks, William D. 8 Fuchs, Clemens 8 Garcia, Stephan Ramon 8 Moree, Pieter 8 Shorey, Tarlok Nath 7 Bérczes, Attila 7 Heim, Bernhard Ernst 7 Neuhauser, Markus 7 Sha, Min 7 Tengely, Szabolcs 7 Tijdeman, Robert 7 Ulas, Maciej 6 Bazsó, András 6 Faye, Bernadette 6 Ford, Kevin B. 6 Grau, José María 6 Laishram, Shanta 6 Mignotte, Maurice 6 Panda, Gopal Krishna 6 Somer, Lawrence E. 6 Winterhof, Arne 5 Adhikari, Sukumar Das 5 Broughan, Kevin A. 5 Erduvan, Fatih 5 Irmak, Nurettin 5 Kátai, Imre 5 Křížek, Michal 5 Le, Maohua 5 Leonetti, Paolo 5 Miska, Piotr 5 Pappalardi, Francesco 5 Rout, Sudhansu Sekhar 4 Abu Muriefah, Fadwa S. 4 Bell, Jason P. 4 Berrizbeitia, Pedro 4 Bertók, Csanád 4 Bordellès, Olivier 4 Chattopadhyay, Jaitra 4 Chim, Kwok Chi 4 Coons, Michael 4 Drungilas, Paulius 4 Elsholtz, Christian 4 Hernández Hernández, Santos 4 Hu, Su 4 Kafle, Bir 4 Kihel, Omar 4 Oller-Marcén, Antonio M. 4 Pizarro-Madariaga, Amalia 4 Raggi-Cárdenas, Alberto Gerardo 4 Rihane, Salah Eddine 4 Roy, Bidisha 4 Sarkar, Subha 4 Shallit, Jeffrey O. 4 Sun, Xuegong 4 Thangadurai, Ravindrananathan 4 Trojovský, Pavel 4 Wu, Jie 4 Zhu, Huilin 3 Altassan, Alaa 3 Balasubramanian, Ramachandran 3 Berend, Daniel 3 Brandstätter, Nina 3 Bridy, Andrew 3 Bubboloni, Daniela 3 Chakraborty, Kalyan 3 Dai, Lixia 3 Deshouillers, Jean-Marc 3 Freiberg, Tristan 3 Fu, Ruiqin 3 Fujita, Yasutsugu ...and 778 more Authors
all top 5
#### Cited in 190 Serials
126 Journal of Number Theory 56 International Journal of Number Theory 28 Proceedings of the American Mathematical Society 25 Periodica Mathematica Hungarica 24 Mathematics of Computation 24 Monatshefte für Mathematik 22 Journal de Théorie des Nombres de Bordeaux 22 Journal of Integer Sequences 21 Bulletin of the Australian Mathematical Society 21 Indagationes Mathematicae. New Series 20 Integers 19 The Ramanujan Journal 18 Acta Arithmetica 15 Rocky Mountain Journal of Mathematics 15 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 14 Boletín de la Sociedad Matemática Mexicana. Third Series 13 Archiv der Mathematik 11 Lithuanian Mathematical Journal 11 Functiones et Approximatio. Commentarii Mathematici 11 Transactions of the American Mathematical Society 11 Research in Number Theory 10 Finite Fields and their Applications 10 Comptes Rendus. Mathématique. Académie des Sciences, Paris 9 Acta Mathematica Hungarica 8 Turkish Journal of Mathematics 8 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 7 Mathematical Proceedings of the Cambridge Philosophical Society 7 Mathematica Slovaca 7 Experimental Mathematics 7 Science China. Mathematics 6 Colloquium Mathematicum 6 Journal of Algebra 6 Mathematika 6 Proceedings of the Japan Academy. Series A 6 Journal of the Australian Mathematical Society 6 Bulletin of the Brazilian Mathematical Society. New Series 5 Discrete Mathematics 5 Proceedings of the Edinburgh Mathematical Society. Series II 5 European Journal of Combinatorics 5 Communications in Mathematics 5 Research in the Mathematical Sciences 4 Information Processing Letters 4 Glasgow Mathematical Journal 4 Journal of Combinatorial Theory. Series A 4 Mathematische Zeitschrift 4 Quaestiones Mathematicae 4 Journal of Inequalities and Applications 3 Indian Journal of Pure & Applied Mathematics 3 Israel Journal of Mathematics 3 Mathematical Notes 3 Ukrainian Mathematical Journal 3 Chaos, Solitons and Fractals 3 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 3 Canadian Mathematical Bulletin 3 Czechoslovak Mathematical Journal 3 Manuscripta Mathematica 3 Revista Matemática Iberoamericana 3 Journal of the Ramanujan Mathematical Society 3 Elemente der Mathematik 3 Glasnik Matematički. Serija III 3 Abstract and Applied Analysis 3 Acta Mathematica Sinica. English Series 3 Portugaliae Mathematica. Nova Série 3 Mediterranean Journal of Mathematics 3 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 3 European Journal of Mathematics 2 Discrete Applied Mathematics 2 Journal of Mathematical Analysis and Applications 2 Applied Mathematics and Computation 2 Archivum Mathematicum 2 Bulletin of the London Mathematical Society 2 Illinois Journal of Mathematics 2 Inventiones Mathematicae 2 Mathematische Annalen 2 Rendiconti del Circolo Matemàtico di Palermo. Serie II 2 Results in Mathematics 2 Advances in Applied Mathematics 2 Constructive Approximation 2 Forum Mathematicum 2 International Journal of Foundations of Computer Science 2 Designs, Codes and Cryptography 2 Geometric and Functional Analysis. GAFA 2 Aequationes Mathematicae 2 Bulletin of the American Mathematical Society. New Series 2 Expositiones Mathematicae 2 Applicable Algebra in Engineering, Communication and Computing 2 The Electronic Journal of Combinatorics 2 The New York Journal of Mathematics 2 Integral Transforms and Special Functions 2 Journal of Difference Equations and Applications 2 Annals of Combinatorics 2 LMS Journal of Computation and Mathematics 2 Journal of the European Mathematical Society (JEMS) 2 Communications of the Korean Mathematical Society 2 RAIRO. Theoretical Informatics and Applications 2 Acta et Commentationes Universitatis Tartuensis de Mathematica 2 Journal of High Energy Physics 2 Journal of Mathematical Cryptology 2 Algebra & Number Theory 2 Afrika Matematika ...and 90 more Serials
all top 5
#### Cited in 38 Fields
872 Number theory (11-XX) 60 Combinatorics (05-XX) 38 Algebraic geometry (14-XX) 26 Group theory and generalizations (20-XX) 23 Computer science (68-XX) 21 Information and communication theory, circuits (94-XX) 17 Dynamical systems and ergodic theory (37-XX) 13 Field theory and polynomials (12-XX) 9 Functions of a complex variable (30-XX) 8 Commutative algebra (13-XX) 8 $$K$$-theory (19-XX) 8 Special functions (33-XX) 8 Probability theory and stochastic processes (60-XX) 6 Systems theory; control (93-XX) 5 Linear and multilinear algebra; matrix theory (15-XX) 5 Real functions (26-XX) 5 Partial differential equations (35-XX) 5 Numerical analysis (65-XX) 4 Measure and integration (28-XX) 4 Difference and functional equations (39-XX) 4 Approximations and expansions (41-XX) 3 Sequences, series, summability (40-XX) 3 Operator theory (47-XX) 3 Geometry (51-XX) 2 Ordinary differential equations (34-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Convex and discrete geometry (52-XX) 2 Statistics (62-XX) 2 Mechanics of deformable solids (74-XX) 2 Quantum theory (81-XX) 2 Mathematics education (97-XX) 1 History and biography (01-XX) 1 Mathematical logic and foundations (03-XX) 1 Associative rings and algebras (16-XX) 1 Integral transforms, operational calculus (44-XX) 1 Functional analysis (46-XX) 1 Fluid mechanics (76-XX) 1 Relativity and gravitational theory (83-XX)
#### Wikidata Timeline
The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
|
2021-06-15 05:11:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.525269091129303, "perplexity": 9780.583742457366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00326.warc.gz"}
|
http://math.stackexchange.com/questions/267697/convergence-of-sum-n-2-infty-frac1n-alpha-ln-beta-n
|
# Convergence of $\sum_{n=2}^\infty \frac{1}{n^\alpha \ln^\beta (n)}$
Study the convergence of the following series: $$\sum_{n=2}^\infty \frac{1}{n^\alpha \cdot\ln^\beta(n)} \text{ where }\alpha,\beta \geq 0$$
Applying d'Alembert criterion I have that $$\lim_{n\to\infty} \frac{n^\alpha \ln^\beta(n)}{(n+1)^\alpha \ln^\beta (n+1)} = \lim_{n\to\infty} \left(\frac{n}{n+1}\right)^\alpha\left(\frac{\ln(n)}{\ln (n+1)}\right)^\beta = 1$$ so the nature of the series is inconclusive.
If $\alpha = \beta = 0$, then the series diverges, since $\sum_{n=2}^\infty 1 = \infty$.
Should I study the rest of the cases (i.e. if $\alpha = 0, \beta > 0$ the root test and the ratio test are also inconclusive). What is the best form to study the series?.
-
You can apply Cauchy's condensation test. Moving to $$\sum_{n = 2}^{\infty} \frac{2^n}{2^{n\alpha} n^{\beta}} = \sum_{n = 2}^{\infty} \left( 2^{1-\alpha} \right)^n \frac{1}{n^{\beta}},$$ the series surrenders to the ratio test when $\alpha \neq 1$. When $\alpha=1$, this is a well-known series.
Sorry for commenting in an old post but I think that by the ratio test if $\alpha > 1$ the series converges. – user50554 Jan 28 '13 at 18:37
Yeah, which is consistent with the ratio test as $\frac{\left(2^{1-\alpha}\right)^{n+1}\frac{1}{\left(n+1\right)^{\beta}}}{\left(2^{1-\alpha}\right)^{n}\frac{1}{n^{\beta}}} = 2^{1 - \alpha}\left(\frac{n}{n+1}\right)^{\beta} \to 2^{1 - \alpha} < 1$ if $\alpha > 1$. – levap Jan 28 '13 at 22:19
I think you should use the integral test. In order to establish the convergence/divergence of $$\int_2^{\infty} \frac{dx}{x^a\log^b{x}}$$ I suggest you to use the substitution $x=e^t$ which transforms the integral into $$\int_{\log 2}^{\infty} \frac{dt}{e^{(a-1)t}t^b}$$ which is easier. I think you should be able to conclude by yourself, now.
|
2015-10-04 13:04:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9302396178245544, "perplexity": 206.08032214649387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673632.3/warc/CC-MAIN-20151001215753-00226-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://aliquote.org/micro/2019-06-29-09-00-38/
|
Is there a style convention for Common Lisp recursive helper functions?. Interesting SO thread of GP in writing CL. #lisp
|
2019-07-21 16:48:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38690686225891113, "perplexity": 14839.626093158578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527089.77/warc/CC-MAIN-20190721164644-20190721190644-00321.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcds.2004.10.769
|
# American Institute of Mathematical Sciences
July 2004, 10(3): 769-786. doi: 10.3934/dcds.2004.10.769
## Non-existence and behaviour at infinity of solutions of some elliptic equations
1 School of Mathematics, University of Minnesota–Twin Cities, Minneapolis, MN 55455, United States
Received August 2002 Revised July 2003 Published January 2004
When $\alpha\le 2\beta$, we will prove the non-existence of solutions of the equation $\Delta v+\alpha e^v+\beta (x\cdot\nabla v)e^v=0$ in $R^2$ which satisfy $\gamma =\int_{R^2}e^vdx/(2\pi) <\infty$ and $|x|^2e^{v(x)}\le C_1$ in $R^2$ for some constant $C_1>0$. When $\alpha>2\beta$, we will prove that if $v$ is a solution of the above equation, then there exist constants $0<\tau\le 1$ and $a_1$ such that $v(x)=-\gamma \log |x|+a_1+O(|x|^{-\tau})$ as $|x|\to\infty$ where $\gamma=(\alpha -2\beta)\gamma$. We will also show that $\gamma$ satisfies $\gamma>2$ and $\gamma<\alpha$.
Citation: Shu-Yu Hsu. Non-existence and behaviour at infinity of solutions of some elliptic equations. Discrete & Continuous Dynamical Systems - A, 2004, 10 (3) : 769-786. doi: 10.3934/dcds.2004.10.769
[1] Ran Zhang, Shengqiang Liu. On the asymptotic behaviour of traveling wave solution for a discrete diffusive epidemic model. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1197-1204. doi: 10.3934/dcdsb.2020159 [2] Yukihiko Nakata. Existence of a period two solution of a delay differential equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1103-1110. doi: 10.3934/dcdss.2020392 [3] Izumi Takagi, Conghui Zhang. Existence and stability of patterns in a reaction-diffusion-ODE system with hysteresis in non-uniform media. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020400 [4] Julian Tugaut. Captivity of the solution to the granular media equation. Kinetic & Related Models, , () : -. doi: 10.3934/krm.2021002 [5] Jianhua Huang, Yanbin Tang, Ming Wang. Singular support of the global attractor for a damped BBM equation. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020345 [6] Feifei Cheng, Ji Li. Geometric singular perturbation analysis of Degasperis-Procesi equation with distributed delay. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 967-985. doi: 10.3934/dcds.2020305 [7] Mengting Fang, Yuanshi Wang, Mingshu Chen, Donald L. DeAngelis. Asymptotic population abundance of a two-patch system with asymmetric diffusion. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3411-3425. doi: 10.3934/dcds.2020031 [8] Mohammad Ghani, Jingyu Li, Kaijun Zhang. Asymptotic stability of traveling fronts to a chemotaxis model with nonlinear diffusion. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021017 [9] Zhihua Liu, Yayun Wu, Xiangming Zhang. Existence of periodic wave trains for an age-structured model with diffusion. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2021009 [10] Zhilei Liang, Jiangyu Shuai. Existence of strong solution for the Cauchy problem of fully compressible Navier-Stokes equations in two dimensions. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020348 [11] Thabet Abdeljawad, Mohammad Esmael Samei. Applying quantum calculus for the existence of solution of $q$-integro-differential equations with three criteria. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020440 [12] Ahmad El Hajj, Hassan Ibrahim, Vivian Rizik. $BV$ solution for a non-linear Hamilton-Jacobi system. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020405 [13] Yichen Zhang, Meiqiang Feng. A coupled $p$-Laplacian elliptic system: Existence, uniqueness and asymptotic behavior. Electronic Research Archive, 2020, 28 (4) : 1419-1438. doi: 10.3934/era.2020075 [14] Tong Yang, Seiji Ukai, Huijiang Zhao. Stationary solutions to the exterior problems for the Boltzmann equation, I. Existence. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 495-520. doi: 10.3934/dcds.2009.23.495 [15] Ryuji Kajikiya. Existence of nodal solutions for the sublinear Moore-Nehari differential equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1483-1506. doi: 10.3934/dcds.2020326 [16] Joel Kübler, Tobias Weth. Spectral asymptotics of radial solutions and nonradial bifurcation for the Hénon equation. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3629-3656. doi: 10.3934/dcds.2020032 [17] Christian Clason, Vu Huu Nhu, Arnd Rösch. Optimal control of a non-smooth quasilinear elliptic equation. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020052 [18] Hai-Feng Huo, Shi-Ke Hu, Hong Xiang. Traveling wave solution for a diffusion SEIR epidemic model with self-protection and treatment. Electronic Research Archive, , () : -. doi: 10.3934/era.2020118 [19] Karoline Disser. Global existence and uniqueness for a volume-surface reaction-nonlinear-diffusion system. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 321-330. doi: 10.3934/dcdss.2020326 [20] Qing Li, Yaping Wu. Existence and instability of some nontrivial steady states for the SKT competition model with large cross diffusion. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3657-3682. doi: 10.3934/dcds.2020051
2019 Impact Factor: 1.338
|
2021-01-15 14:30:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6825354695320129, "perplexity": 4910.157606041829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495901.0/warc/CC-MAIN-20210115134101-20210115164101-00118.warc.gz"}
|
https://math.gatech.edu/seminars-and-colloquia-by-series?series_tid=38&page=32
|
Seminars and Colloquia by Series
Monday, October 26, 2009 - 14:00 , Location: Skiles 269 , Shea Vela-Vick , Columbia University , Organizer: John Etnyre
To each three-component link in the 3-dimensional sphere we associate a characteristic map from the 3-torus to the 2-sphere, and establish a correspondence between the pairwise and Milnor triple linking numbers of the link and the Pontryagin invariants that classify its characteristic map up to homotopy. This can be viewed as a natural extension of the familiar fact that the linking number of a two-component link is the degree of its associated Gauss map from the 2-torus to the 2-sphere.In the case where the pairwise linking numbers are all zero, I will present an integral formula for the triple linking number analogous to the Gauss integral for the pairwise linking numbers. The integrand in this formula is geometrically natural in the sense that it is invariant under orientation-preserving rigid motions of the 3-sphere.
Monday, October 19, 2009 - 14:00 , Location: Skiles 269 , Inanc Baykur , Brandeis University , Organizer: John Etnyre
We will introduce new constructions of infinite families of smooth structures on small 4-manifolds and infinite families of smooth knottings of surfaces.
Monday, October 12, 2009 - 14:05 , Location: Skiles 269 , , UTexas Austin , , Organizer: Stavros Garoufalidis
The deformation variety is similar to the representation variety inthat it describes (generally incomplete) hyperbolic structures on3-manifolds with torus boundary components. However, the deformationvariety depends crucially on a triangulation of the manifold: theremay be entire components of the representation variety which can beobtained from the deformation variety with one triangulation but notanother, and it is unclear how to choose a "good" triangulation thatavoids these problems. I will describe the "extended deformationvariety", which deals with many situations that the deformationvariety cannot. In particular, given a manifold which admits someideal triangulation we can construct a triangulation such that we canrecover any irreducible representation (with some trivial exceptions)from the associated extended deformation variety.
Monday, October 5, 2009 - 14:00 , Location: - , - , - , Organizer: John Etnyre
Monday, September 28, 2009 - 14:00 , Location: Skiles 269 , Vera Vertesi , MSRI , Organizer: John Etnyre
Legendrian knots are knots that can be described only by their projections(without having to separately keep track of the over-under crossinginformation): The third coordinate is given as the slope of theprojections. Every knot can be put in Legendrian position in many ways. Inthis talk we present an ongoing project (with Etnyre and Ng) of thecomplete classification of Legendrian representations of twist knots.
Monday, September 21, 2009 - 14:00 , Location: Skiles 269 , Doug LaFountain , SUNY - Buffalo , Organizer: John Etnyre
The uniform thickness property (UTP) is a property of knots embeddedin the 3-sphere with the standard contact structure. The UTP was introduced byEtnyre and Honda, and has been useful in studying the Legendrian and transversalclassification of cabled knot types. We show that every iterated torus knotwhich contains at least one negative iteration in its cabling sequence satisfiesthe UTP. We also conjecture a complete UTP classification for iterated torusknots, and fibered knots in general.
Monday, September 14, 2009 - 15:00 , Location: Skiles 269 , , UC Berkeley , , Organizer: Stavros Garoufalidis
A closed hyperbolic 3-manifold $M$ determines a fundamental classin the algebraic K-group $K_3^{ind}(C)$. There is a regulator map$K_3^{ind}(C)\to C/4\Pi^2Z$, which evaluated on the fundamental classrecovers the volume and Chern-Simons invariant of $M$. The definition of theK-groups are very abstract, and one is interested in more concrete models.The extended Bloch is such a model. It is isomorphic to $K_3^{ind}(C)$ andhas several interesting properties: Elements are easy to produce; thefundamental class of a hyperbolic manifold can be constructed explicitly;the regulator is given explicitly in terms of a polylogarithm.
Monday, September 14, 2009 - 14:00 , Location: Skiles 269 , Dishant M. Pancholi , International Centre for Theoretical Physics, Trieste, Italy , Organizer: John Etnyre
After reviewing a few techniques from the theory of confoliation in dimension three we will discuss some generalizations and certain obstructions in extending these techniques to higher dimensions. We also will try to discuss a few questions regarding higher dimensional confoliations.
Monday, September 7, 2009 - 14:00 , Location: - , - , - , Organizer: John Etnyre
Monday, August 31, 2009 - 14:01 , Location: Skiles 269 , , Section de Mathématiques Université de Genève , , Organizer: Stavros Garoufalidis
Not yet!
|
2019-02-21 04:35:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001870155334473, "perplexity": 1481.8677348219692}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247499009.48/warc/CC-MAIN-20190221031117-20190221053117-00002.warc.gz"}
|
https://em.geosci.xyz/content/maxwell1_fundamentals/solving_maxwells_equations.html
|
# Solving Maxwell’s Equations¶
Purpose
Here, we provide a general overview on how to solve Maxwell’s equations in practice. The approaches used to solve specific problems are covered later in EM GeoSci.
The practice of solving Maxwell’s equations for an applied problem can be broken into three parts:
1. Defining the problem: here, Maxwell’s equations are modified, reformulated or approximated to suite a particular physical problem.
2. Setting boundary and initial conditions: these are invoked so that solutions to Maxwell’s equations are uniquely solved for a particular application.
3. Solving with analytic or numerical approaches: once the problem, boundary conditions and initial conditions have been defined, the final solution is obtained through analytic or numerical approaches.
## Defining the Problem¶
In “Overview of Maxwell’s Equations”, we presented general formulations for Maxwell’s equations. In most cases however, Maxwell’s equations can be modified, reformulated or approximated to simplify an applied problem. Failure to choose an appropriate formulation may result in the problem being difficult or impossible to solve using current means.
Examples (to be discussed in more detail throughout EM GeoSci):
\begin{split}\begin{align} \nabla \cdot \sigma \phi &= \nabla \cdot \mathbf{J_s}\\ \mathbf{E} =& - \nabla \phi \end{align}\end{split}
where $$\mathbf{J_s}$$ is an electrical current source and $$\phi$$ is a scalar potential.
\begin{split}\begin{align} \nabla\times \mu^{-1} \nabla\times\mathbf{E} + i\omega \sigma \mathbf{E} -& \omega^2 \varepsilon \mathbf{E} = - i\omega \mathbf{J_s}\\ \nabla\times\mathbf{E} + i\omega \mathbf{B} &= 0 \end{align}\end{split}
where $$\mathbf{J_s}$$ is an electrical current source. For some problems, we may be able to work in the quasi-static ($$\sigma \gg \omega \varepsilon$$) or wave ($$\sigma \ll \omega \varepsilon$$) regimes; allowing us to neglect terms involving $$\varepsilon$$. In many geological environments, the impact of the Earth’s magnetic properties is negligible (i.e. $$\mu\approx \mu_0$$). In this case, we can take $$\mu$$ out of the curl-curl system. In the case of a magnetic source, we would need to solve a different system.
\begin{split}\begin{align} \nabla\times \mu^{-1} \nabla\times\mathbf{e} + \sigma \frac{\partial \mathbf{e}}{\partial t} +& \varepsilon \frac{\partial^2 \mathbf{e}}{\partial t^2}= - \frac{\partial \mathbf{j_s}}{\partial t}\\ \nabla\times\mathbf{e} + \frac{\partial \mathbf{b}}{\partial t} &= 0 \end{align}\end{split}
where $$\mathbf{j_s}$$ is an electrical current source. This equation is the time-dependent equivalent to the one used in frequency domain electromagnetics. For some problems, we may be able to work in the quasi-static or wave regimes; allowing us to neglect terms involving $$\varepsilon$$. If the impact of the Earth’s magnetic properties is negligible (i.e. $$\mu\approx \mu_0$$), we can take $$\mu$$ out of the curl-curl system. In the case of a magnetic source, we would need to solve a different system.
## Boundary and Initial Conditions¶
### Boundary Conditions¶
Fig. 76 Illustration of domain and boundary.
Boundary conditions ensure that a the problem is well-posed; that is, it has a unique solution. This is necessary when using Maxwell’s equations to solve applied problems in electromagnetic geosciences. Differential equations corresponding to a physical problem are defined within a region, or “domain” (denoted by $$\Omega$$). To make the problem well-posed, boundary conditions are applied to the edges of this domain (denoted by $$\partial \Omega$$). There are three types of boundary conditions, which are listed below:
Dirichlet Boundary Conditions: Dirichlet boundary conditions are by far the most straightforward and easy to implement. Dirichlet conditions directly define the value of the solution on the boundary, i.e.:
$\mathbf{F(r)} \, \Big |_{\partial \Omega} = \mathbf{g(r)}$
where $$\mathbf{F}$$ is some vector field/flux defined within the domain, $$\mathbf{g}$$ is a spatially-dependent function and $$\mathbf{r} = (x,y,z)$$. In many cases, the Dirichlet condition is given as a constant value; such as, all fields go to zero at the boundary.
Neumann Boundary Conditions: Neumann boundary conditions are imposed by specifying the spatial derivatives of the solution on the boundary. Commonly, Neumann conditions define the directional derivatives normal to the surface of the domain, i.e.:
$\frac{\partial F_n}{\partial \mathbf{n}} \bigg |_{\partial \Omega} = \mathbf{g(r)}$
where $$\mathbf{n}$$ is the unit vector direction perpendicular to the domain boundary $$\partial \Omega$$, $$F_n = \mathbf{F \cdot \hat n}\;$$ is the component of a vector field/flux $$\mathbf{F}$$ along $$\mathbf{n}$$, $$\mathbf{g}$$ is a spatially-dependent function and $$\mathbf{r} = (x,y,z)$$. Physically, Neumann conditions are used to define the rate of flux through the boundary. This is frequently applied to problems which behave according to the heat equation.
Robin (Mixed) Boundary Conditions: Robin boundary conditions are a linear combination of Dirichlet and Neumann conditions, i.e.:
$\bigg [ a\mathbf{F(r)} + b\frac{\partial F_n}{\partial \mathbf{n}} \bigg ] \Bigg |_{\partial \Omega} = \mathbf{g(r)}$
where $$a$$ and $$b$$ are constants, $$\mathbf{n}$$ is the unit vector direction perpendicular to the domain boundary $$\partial \Omega$$, $$F_n = \mathbf{F \cdot \hat n}\;$$ is the component of a vector field/flux $$\mathbf{F}$$ along $$\mathbf{n}$$, $$\mathbf{g}$$ is a spatially-dependent function and $$\mathbf{r} = (x,y,z)$$. Robin conditions are used when the rate of flux leaving the domain is dependent on the value of the field at the boundary.
### Initial Conditions¶
Initial conditions, in addition to boundary conditions, are required to solve time-dependent problems. Because the solutions to problems in the physical sciences are causal, the fields and fluxes at a particular time depend on the fields and fluxes at earlier times. Generally, we set initial conditions to define the solution at $$t=0$$ and we are interested in the bahaviour of the fields and fluxes at $$t\geq 0$$. If the differential equation being solved has only first order derivatives in time, initial conditions may be given by:
$\mathbf{F}(\mathbf{r},t) \big |_{t=0} = \mathbf{F_0}(\mathbf{r})$
where $$\mathbf{F}$$ is a vector field/flux and $$\mathbf{F_0}$$ is the vector field/flux at $$t=0$$. This type of condition would be needed to solve the time-domain electromagnetic equation presented in “Defining the Problem”.
Multiple Variables and Higher Order Time-Derivatives
If the differential equation contains multiple variables and higher order time-derivatives, we cannot solve the problem by simply setting initial conditions on the fields/fluxes at $$t=0$$. Where $$k$$ is the highest order time-derivative found in the problem and $$n$$ is the number of time-dependent variables, we would require $$nk$$ total initial conditions to solve the problem. These initial conditions would take the form:
$\frac{\partial^k \mathbf{F}}{\partial t^k} \bigg |_{t=0} = \mathbf{g^k(r)}$
where $$\mathbf{F}$$ is the vector field/flux associated with variable $$n$$, and $$\mathbf{g^k}$$ is a time-dependent function defined throughout the entire domain for the $$k^{th}$$ derivative. An example of this is the time-dependent wave equation presented in “Defining the Problem”, which requires initial conditions on both $$\mathbf{e}$$ and its first-order time-derivative $$\partial \mathbf{e}/\partial t$$.
## Analytic and Numeric Solutions¶
Having formulated Maxwell’s equations appropriately, as well as implementing boundary conditions and initial conditions, we can now solve the problem. There are two ways in which meaningful solutions can be obtained: analytically and numerically.
### Analytic Solutions¶
Ideally, one would derive an analytic solution. The problem becomes even more tractable if the solution is a closed-form expression; can be evaluated using a finite number of simple operations. Analytic solutions are generally only possible if the problem is simplified or exhibits a sufficient degree of geometric symmetry. We prefer analytic solutions because they are rapid to compute and explicitly show how the solution depends on its input variables.
Some solutions may be called semi-analytic. Semi-analytic solutions generally require the numerical evaluation of one or more integral functions, infinite series and/or limits. In this case, the solution is not a closed form expression. However, semi-analytic solutions can be very useful in practice.
### Numerical Solutions¶
Numerical solutions are used to approximate the fields and fluxes to a desired level of accuracy. Numerical approaches are able to solve problems without relying on geometric symmetries. The process of obtaining a numerical solution can be broken down into three parts:
1. Discretizing the Domain
2. Defining Fields and Fluxes
3. Applying Computer Algorithms
A conceptual understanding of the aforementioned steps will be provided here. However, we will not present all the required background for solving these problems in practice; as it is extensive.
Discretizing the Domain:
In order to obtain a numerical solution, the domain is first discretized; i.e. subdivided into a collection of small volumes/regions. The collection of these volumes is called a ‘mesh’. The physical properties within each volume are considered constant. The size and shape of each volume depends on the geometry of the problem, the size of the domain and the quantity of available computer memory. In Fig. 77 a, we see a 1D discretization. The 1D discretization works well when, locally, the Earth displays a layered structure. For problems with irregular geometries, we may choose to use a 2D or 3D discretization (Fig. 77 b). As a rule, the finer the discretization (as the dimensions of the cells decrease), the better our numerical solution will approximate the true solution to our problem.
Fig. 77 Discretization of Earth’s structure. (a) 1D discretization. (b) 3D discretization.
Defining Fields, Fluxes and Potentials
Fig. 78 Definition of fields ($$\mathbf{E}$$), fluxes ($$\mathbf{B}$$) and potentials $$\phi$$ on a cubic cell.
The fields, fluxes and/or potentials pertaining to a particular problem are defined throughout the entire domain. Once the domain has been discretized however, evaluation of these quantities is only possible at a finite number of locations. The fields, fluxes and/or potentials being computed depend on the formulation of Maxwell’s equations. The locations of these quantities for each cell depend both on the problem and the corresponding interface conditions.
As an example, consider Fig. 78 where:
• the potential $$\phi$$ is defined on the cell nodes.
• cartesian components of the electric field $$\mathbf{E}$$ are defined on the cell edges.
• cartesian components of the magnetic flux density $$\mathbf{B}$$ are defined on the cell faces.
• physical properties $$\sigma$$ and $$\mu$$ are defined at the cell centers.
For problems involving $$\mathbf{E}$$ and $$\mathbf{B}$$, this approach is ideal because it naturally respects the interface conditions for electromagnetic fields. Recall from “Interface Conditions” that tangential components of the electric field and normal components of the magnetic flux are continuous are continuous across interfaces.
Applying Computer Algorithms:
As a final step, the numerical problem is commonly written as a linear system and solved using computer algorithms. The system can be formed using finite difference, finite volume or finite element methods. It generally takes the form:
$\mathbf{A(m)u=q_s}$
where $$\mathbf{u}$$ contains the fields and/or fluxes at discrete locations throughout the domain, $$\mathbf{q_s}$$ is a vector corresponding to the source term and $$\mathbf{A(m)}$$ is a linear operator that depends on the physical properties ($$\sigma,\mu,\varepsilon$$). Collectively, the physical properties defining each cell make up a physical property model $$\mathbf{m}$$. In electromagnetic geosciences, we are frequently interested in the “inverse problem”. That is, can we recover the physical property model $$\mathbf{m}$$ if $$\mathbf{u}$$ and $$\mathbf{q_s}$$ are known?
|
2018-12-16 10:46:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275142550468445, "perplexity": 387.0390541035426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827639.67/warc/CC-MAIN-20181216095437-20181216121437-00304.warc.gz"}
|
https://oar.princeton.edu/handle/88435/pr1vk0s?mode=full
|
# Two critical positions in zinc finger domains are heavily mutated in three human cancer types.
## Author(s): Munro, Daniel; Ghersi, Dario; Singh, Mona
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1vk0s
DC FieldValueLanguage
dc.contributor.authorMunro, Daniel-
dc.contributor.authorGhersi, Dario-
dc.contributor.authorSingh, Mona-
dc.date.accessioned2021-10-08T19:47:27Z-
dc.date.available2021-10-08T19:47:27Z-
dc.date.issued2018-06-28en_US
dc.identifier.citationMunro, Daniel, Ghersi, Dario, Singh, Mona. (2018). Two critical positions in zinc finger domains are heavily mutated in three human cancer types.. PLoS computational biology, 14 (6), e1006290 - ?. doi:10.1371/journal.pcbi.1006290en_US
dc.identifier.issn1553-734X-
dc.identifier.urihttp://arks.princeton.edu/ark:/88435/pr1vk0s-
dc.description.abstractA major goal of cancer genomics is to identify somatic mutations that play a role in tumor initiation or progression. Somatic mutations within transcription factors are of particular interest, as gene expression dysregulation is widespread in cancers. The substantial gene expression variation evident across tumors suggests that numerous regulatory factors are likely to be involved and that somatic mutations within them may not occur at high frequencies across patient cohorts, thereby complicating efforts to uncover which ones are cancer-relevant. Here we analyze somatic mutations within the largest family of human transcription factors, namely those that bind DNA via Cys2His2 zinc finger domains. Specifically, to hone in on important mutations within these genes, we aggregated somatic mutations across all of them by their positions within Cys2His2 zinc finger domains. Remarkably, we found that for three classes of cancers profiled by The Cancer Genome Atlas (TCGA)-Uterine Corpus Endometrial Carcinoma, Colon and Rectal Adenocarcinomas, and Skin Cutaneous Melanoma-two specific, functionally important positions within zinc finger domains are mutated significantly more often than expected by chance, with alterations in 18%, 10% and 43% of tumors, respectively. Numerous zinc finger genes are affected, with those containing Krüppel-associated box (KRAB) repressor domains preferentially targeted by these mutations. Further, the genes with these mutations also have high overall missense mutation rates, are expressed at levels comparable to those of known cancer genes, and together have biological process annotations that are consistent with roles in cancers. Altogether, we introduce evidence broadly implicating mutations within a diverse set of zinc finger proteins as relevant for cancer, and propose that they contribute to the widespread transcriptional dysregulation observed in cancer cells.en_US
dc.format.extente1006290 - ?en_US
dc.languageengen_US
dc.language.isoen_USen_US
dc.relation.ispartofPLoS computational biologyen_US
dc.rightsFinal published version. This is an open access article.en_US
dc.titleTwo critical positions in zinc finger domains are heavily mutated in three human cancer types.en_US
dc.typeJournal Articleen_US
dc.identifier.doidoi:10.1371/journal.pcbi.1006290-
dc.identifier.eissn1553-7358-
pu.type.symplectichttp://www.symplectic.co.uk/publications/atom-terms/1.0/journal-articleen_US
Files in This Item:
File Description SizeFormat
|
2021-12-03 09:42:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42978063225746155, "perplexity": 11369.157288088712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00268.warc.gz"}
|
http://billchen.org/publications/1990_9635_algorithm/1990_9635_algorithm.html
|
Citations of
William Y.C. Chen, A general bijective algorithm for trees, Proc. Natl. Acad. Sci. USA., 87 (1990) 9635-9639.
1. F. Bergeron, G. Labelle and P. Leroux, Combinatorial species and tree-like structures, No. 67. Cambridge University Press, 1998.
2. F. Bergeron, G. Labelle and P. Leroux, Introduction to the theory of species of structures, 2008.
3. O. Bernardi, B. Duplantier and P. Nadeau, A bijection between well-labelled positive paths and matchings, Séminaire Lotharingien de Combinatoire 63(2010) B63e.
4. S. Caminiti, E.G. Fusco and R. Petreschi, A bijective code for k-trees with linear time encoding and decoding, combinatorics, algorithms, probabilistic and experimental methodologies, Springer Berlin Heidelberg, 4614(2007) 408-420.
5. S. Caminiti, E.G. Fusco and R. Petreschi, Bijective linear time coding and decoding for k-trees, Theory Comput. Syst. 46 (2010) 284-300.
6. S. Caminiti and R. Petreschi, Parallel algorithms for encoding and decoding blob code, International Workshop on Algorithms and Computation, Springer, 2010.
7. S. Caminiti and R. Petreschi, Unified parallel encoding and decoding algorithms for Dandelion-like codes, J. Parallel Distrib. Comput. 70(2010) 1119-1127.
8. W.Y.C. Chen, Context-free grammars, differential operators and formal power series, Theor. Comput. Sci. 117(1993) 113-129.
9. W.Y.C. Chen, The theory of compositionals, Discrete Math. 122(1993) 59-87.
10. W.Y.C. Chen, A coding algorithm for Rényi trees, J. Combin. Theory Ser. A 63(1993) 11-25.
11. W.Y.C. Chen, E.Y.P. Deng and L.L.M. Yang, Riordan paths and derangements, Discrete Math. 308(2008) 2222-2227.
12. W.Y.C. Chen, E. Deutsch and S. Elizalde, Old and young leaves on plane trees, European J. Combin. 27(2006) 414-427.
13. W.Y.C. Chen, L.W. Shapiro and L.L.M. Yang, Parity reversing involutions on plane trees and 2-Motzkin paths, European J. Combin. 27(2006) 283-289.
14. W.Y.C. Chen and L.L.M. Yang, Parity Reversing Involutions on Plane Trees.
15. R.X.F. Chen and C.M. Reidys, Narayana polynomials and some generalizations, arXiv:1411.2530.
16. R.X.F. Chen and C.M. Reidys, A note on the Harer-Zagier formula and the Lehman-Walsh formula, arXiv:1510.05038.
17. L. Clark, Multiplicities of integer arrays, Integers 10(2010) 187-199.
18. N. Dershowitz and S. Zaks, More patterns in trees: up and down, young and old, odd and even, SIAM J. Discrete Math. 23(2009) 447-465.
19. T. Došlić and D. Veljan, Secondary structures, plane trees and Motzkin numbers, Math. Commun. 12(2007) 163-169.
20. R. Ehrenborg and M. Méndez, Schröder parenthesizations and chordates, J. Combin. Theory Ser. A 67(1994) 127-139.
21. I.M. Gessel, B.E. Sagan and Y.N. Yeh, Enumeration of trees by inversions, J. Graph Theory 19(1995) 435-459.
22. M. Jani, R.G. Rieper and M. Zeleke, Enumeration of k-trees and applications, Ann. Comb. 6(2002) 375-382.
23. Y. Jin and C. Liu, The enumeration of labelled spanning trees of ${K_{m,n}}$, Australas. J. Combin. 28(2003) 73-80.
24. A.N. Kirillov, On some quadratic algebras I 1/2: combinatorics of dunkl and gaudin elements, Schubert, Grothendieck, Fuss-Catalan, Universal Tutte and Reduced Polynomials, SIGMA 12 (2016) #P172 .
25. M. Klazar, On trees and noncrossing partitions, Discrete Appl. Math. 82(1998) 263-269.
26. C.P. Lenart, Combinatorial models for certain structures in algebraic topology and formal group theory, University of Manchester, 1996.
27. N.Y. Li and T. Mansour, An identity involving Narayana numbers, European J. Combin. 29(2008) 672-675.
28. F. Liu, Hook length polynomials for plane forests of a certain type, Ann. Comb. 13(2009) 315-322.
29. L. Lv and S.X.M. Pang, A decomposition algorithm for noncrossing trees, Electron. J. Combin. 21(2014) #P1.5.
30. T. Mansour and Y. Sun, Dyck paths and partial Bell polynomials, Australas. J. Combin. 42(2008) 285-297.
31. M.A .Méndez, Koszul duality for monoids and the operad of enriched rooted trees, Adv. in Appl. Math. 44(2010) 261-297.
32. M.A. Méndez and J.C. Liendo, An antipode formula for the natural Hopf algebra of a set operad, Adv. in Appl. Math. 53(2014) 112-140.
33. I.O. Okoth, Combinatorics of oriented trees and tree-like structures, Stellenbosch University, 2015.
34. J. Rukavicka, A note on divisors of multinomial coefficients, Arch. Math. 104(2015) 531-537.
35. W.R. Schmitt and M.S. Waterman, Linear trees and RNA secondary structure, Discrete Appl. Math. 51(1994) 317-323.
36. Y. Sun, The Star of David rule, Linear Algebra Appl. 429(2008) 1954-1961.
37. Y. Sun, Potential polynomials and Motzkin paths, Discrete Math. 309(2009) 2640-2648.
38. Z. Toroczkai, Topological classification of binary trees using the Horton-Strahler index, Phys. Rev. E 65(2002) 016130:1-016130:10.
39. B. Vance, Counting ordered trees by permuting their parts, Amer. Math. Monthly 113(2006) 329-335.
back to homepage
|
2018-10-17 17:12:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45008260011672974, "perplexity": 12121.798031861661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511203.16/warc/CC-MAIN-20181017153433-20181017174933-00527.warc.gz"}
|
https://lakschool.com/index.php/en/math/real-functions/non-rational-functions
|
Math Real functions Non-rational functions
# Non-rational functions
In non-rational functions , besides the rational ones, there are also additional arithmetic operations (for example square roots, logharithms, etc.).
You should know the following non-rational functions:
Square root functions
Trigonometric functions
Exponential functions
Logarithmic functions
### Examples
Other examples of non-rational functions are:
• $f(x)=5\cdot\sin(x)$
• $f(x)=\sqrt{x}$
• $f(x)=12\cdot2^{x-1}$
|
2021-06-21 01:38:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8261062502861023, "perplexity": 7602.483834738822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488259200.84/warc/CC-MAIN-20210620235118-20210621025118-00169.warc.gz"}
|
http://math.stackexchange.com/questions/108495/category-of-trees-as-sub-category-of-category-of-graphs/108573
|
# Category of Trees as sub-category of Category of Graphs
A tree (like a binary search tree) is a direct graph with some limitations (no cycles, connected). How can I express the category of trees as "sub-category" of a graphs? There is a way? I'm not sure the term "sub-category" is correct.
-
What do you mean by "express"? You can just say that $\mathcal{T}$ is a category with objects being trees and morphisms being homomorphisms of trees (graphs -- it's a full subcategory of the category $\mathcal{G}$ of graphs). – Damian Sobota Feb 12 '12 at 13:27
"define" is better than "express" – tyranitar Feb 12 '12 at 13:41
So I wrote you an answer. What do you need more? – Damian Sobota Feb 12 '12 at 13:55
I think the term you're looking for is "full subcategory". If you have a category $C$, and a set $d$ of objects in $C$, then the full subcategory of $C$ defined by $d$ is the category $D$ whose objects are the elements of $d$, and whose morphisms are all the morphisms of $C$ whose domain and codomain are in $d$. Thus, you can simply say:
"Define the category Tree as the full subcategory of Gph whose objects are trees."
-
That's exactly what I said in a comment. – Damian Sobota Feb 12 '12 at 21:14
@DamianSobota: you could make that a full answer rather than a comment. – Mitch Feb 13 '12 at 2:34
Defining what a tree is is a subtle thing and depends on what you want to use the trees for. Let us assume that you are interested in the notion of an "undirected, rooted, finite non-planar tree". The definition 'a connected undirected graph with no cycles, and with a chosen leaf as root' captures that notion and gives rise to a category of trees as the full subcategory of the category of undirected graphs spanned by these trees. However, the definition 'a contractible 1-dimensional compact connected and simply connected space with a chosen boundary point' also captures the the same notion and thus gives rise to another category of trees, the full subcategory of the category of topological spaces spanned by those trees.
Another possibile definition is 'a finite poset with a smallest element and such that for every element the down-set determined by it is linearly ordered'. It too captures the same notion of "tree" as above and yields yet another category of trees, the full subcategory of the category of posets spanned by those trees. There is also the dendroidal category $\Omega$ whose objects are trees and can be defined in at least three different ways.
More possible definitions can be found in http://ncatlab.org/nlab/show/tree.
The point is that all of these categories are radically different. In particular the notion of subtree is highly sensitive to which of these categories you consider.
All this just goes to show that much care is needed with such formulations, even if you wish your trees form a subcategory of graphs, depending on your choice of axiomatization of the tree concept you might get very different categories.
-
|
2016-06-25 03:35:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7788295745849609, "perplexity": 193.68415172669222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00185-ip-10-164-35-72.ec2.internal.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-5-section-5-3-polynomials-and-polynomial-functions-exercise-set-page-280/78
|
## Intermediate Algebra (6th Edition)
Published by Pearson
# Chapter 5 - Section 5.3 - Polynomials and Polynomial Functions - Exercise Set: 78
#### Answer
$-2x^{3}+8x^{2}-19x-2$
#### Work Step by Step
We are asked to subtract $(9x+8)$ from the sum of $(3x^{2}-2x-x^{3}+2)$ and $(5x^{2}-8x-x^{3}+4)$. This is equivalent to the expression $(3x^{2}-2x-x^{3}+2)+(5x^{2}-8x-x^{3}+4)-(9x+8)$. We can subtract the third term from the first two terms by adding the opposite of the third term to the first two. $(3x^{2}-2x-x^{3}+2)+(5x^{2}-8x-x^{3}+4)+(-9x-8)$ Next, we can combine like terms. $(-x^{3}-x^{3})+(3x^{2}+5x^{2})+(-2x-8x-9x)+(2+4-8)=-2x^{3}+8x^{2}-19x-2$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2017-09-20 18:40:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6312001943588257, "perplexity": 451.7132018222031}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687428.60/warc/CC-MAIN-20170920175850-20170920195850-00194.warc.gz"}
|
http://www.usaco.org/current/data/sol_prob2_silver_open22.html
|
(Analysis by Danny Mittal)
A starting point for this problem is to notice that if $s$ and $t$ are equal when restricted to some subset $X$ of letters, then they must also be equal when restricted to any subset $Y$ of $X$. In particular, this means that if $s$ and $t$ are equal when restricted to some subset $X$ of letters, they are also equal when restricted to any two letters in $X$.
Now consider the case where $s$ and $t$ are not equal when restricted to $X$. Again let $s'$ and $t'$ be $s$ and $t$ restricted to $X$. One possibility is that $s'$ and $t'$ are of different lengths, which means that $s$ and $t$ have differing amounts of the letters in $X$.
The other possibility is that $s'$ and $t'$ have different letters at some position. In this case, consider the first position at which $s'$ and $t'$ differ, and let $x$ and $y$ be the two letters that $s'$ and $t'$ have at this position. Since $s'$ and $t'$ are completely the same up to this position, if we remove all letters other than $x$ and $y$ from $s'$ and $t'$ to create new strings $s''$ and $t''$, the portion before this position will become the same (smaller) portion at the beginning of $s''$ and $t''$, and so the $x$ and $y$ at the same position in $s'$ and $t'$ will still be in the same position in $s''$ and $t''$. This means that $s''$ and $t''$ are not the same, and so, since $s''$ and $t''$ are actually just $s$ and $t$ restricted to the two letters $x$ and $y$, we can conclude that $s$ and $t$ are not the same when restricted to $x$ and $y$.
We have therefore shown that if $s$ and $t$ are not equal when restricted to $X$, then either they do not have the same amount of letters in $X$, or they are not the same when restricted to some pair of letters in $X$. However, we already know that if $s$ and $t$ are equal when restricted to $X$, then they must be the same when restricted to any pair of letters in $X$, and in this case they clearly must also have the same amount of letters in $X$.
This means that $s$ and $t$ being equal when restricted to $X$ is actually equivalent to having the same amount of letters in $X$ and being equal when restricted to any pair of letters in $X$.
We can use this fact to write our algorithm. We need to be able to quickly compute, for a given subset of letters $X$, whether $s$ and $t$ have the same amount of letters in $X$, and whether $s$ and $t$ are the same when restricted to any pair of letters in $X$.
We can make checking the first condition easy by precomputing the frequencies of each letter in each of $s$ and $t$. This takes linear time. Then, for a given $X$, we simply add up the frequencies of all letters in $X$ for each of $s$ and $t$ and check if the sums are equal.
We can make checking the second condition easy by just precomputing the answer for each pair of letters. For each pair, we can find the answer in linear time by simply reducing $s$ and $t$ to the strings $s'$ and $t'$ that only have the letters in the pair, then checking whether $s'$ and $t'$ are equal. Then, for a given $X$, we simply loop through all pairs of letters in $X$ and check our precomputed answers.
The overall runtime becomes $\mathcal O(\text{number of pairs} \cdot (|s| + |t| + Q))$ due to the linear time precomputation for each pair and then checking each pair in constant time for each query. Since there are only $18$ letters we need to consider, there are only $\frac {18 \cdot 17} 2 = 153$ pairs, which is small enough for this algorithm to be reasonably efficient.
Java code:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
public class SubsetEquality {
public static void main(String[] args) throws IOException {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
char[] s = in.readLine().toCharArray();
char[] t = in.readLine().toCharArray();
int[] freqsS = new int[26];
int[] freqsT = new int[26];
for (char x = 'a'; x <= 'z'; x++) {
for (char letter : s) {
if (letter == x) {
freqsS[x - 'a']++;
}
}
for (char letter : t) {
if (letter == x) {
freqsT[x - 'a']++;
}
}
}
boolean[][] compatible = new boolean[26][26];
for (char y = 'a'; y <= 'z'; y++) {
for (char x = 'a'; x < y; x++) {
StringBuilder sRestricted = new StringBuilder();
StringBuilder tRestricted = new StringBuilder();
for (char letter : s) {
if (letter == x || letter == y) {
sRestricted.append(letter);
}
}
for (char letter : t) {
if (letter == x || letter == y) {
tRestricted.append(letter);
}
}
compatible[x - 'a'][y - 'a'] = sRestricted.toString().equals(tRestricted.toString());
}
}
StringBuilder out = new StringBuilder();
for (int q = Integer.parseInt(in.readLine()); q > 0; q--) {
char[] subset = in.readLine().toCharArray();
char answer = 'Y';
int sSum = 0;
int tSum = 0;
for (char x : subset) {
sSum += freqsS[x - 'a'];
tSum += freqsT[x - 'a'];
}
if (sSum != tSum) {
answer = 'N';
}
for (int j = 0; j < subset.length; j++) {
for (int k = j + 1; k < subset.length; k++) {
if (!compatible[subset[j] - 'a'][subset[k] - 'a']) {
answer = 'N';
}
}
}
out.append(answer);
}
System.out.println(out);
}
}
|
2022-07-06 19:18:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8356215953826904, "perplexity": 270.9369808559214}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104676086.90/warc/CC-MAIN-20220706182237-20220706212237-00235.warc.gz"}
|
https://www.adamsautoadvice.com/tag/chevy-volt/page/2/
|
## Chevy Volt on display at Inner Harbor
A few weeks ago I was walking around the Inner Harbor. When I was walking by the National Aquarium I saw something interesting… a Chevy Volt! The Chevy Volt is GM's media-darling electric car that also has a regular gas engine. It can go for 40 miles on the electric engine, but then you can use the gas engine to go further. It is great place because of the high foot traffic. (Of course, I was the only person interested enough to stop and look at the car.) If you want to check-out the Chevy Volt, go down to the Inner Harbor. I am not sure if it is still there though!
|
2019-04-23 20:47:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8065189123153687, "perplexity": 2950.3759228532963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613603.65/warc/CC-MAIN-20190423194825-20190423220825-00038.warc.gz"}
|
https://applying-maths-book.com/chapter-8/matricesQM-C.html
|
Continuous basis sets#
# import all python add-ons etc that will be needed later on
%matplotlib inline
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
init_printing() # allows printing of SymPy results in typeset maths format
plt.rcParams.update({'font.size': 14}) # set font size for plots
Example#
Suppose a wavefunction $$R$$ represents the radial part of the ground state of a hydrogen atom then a continuous basis set is needed because the wavefunction depends on spatial coordinates, and in particular on the radial distance from the nucleus, $$r$$. At this point, $$R$$ is just the idea of the wavefunction as no basis or representation has yet been determined. The wavefunction can now be defined by left multiplying with a bra; the result is the value of the function at $$r$$,
$\displaystyle r|R\rangle = R(r)$
If $$R$$ were a cosine, the expression would be $$\langle r | \cos\rangle = \cos(r)$$ where $$\cos$$ is the operator that converts, or maps $$r$$ into $$\cos(r)$$. This is not the way we usually think of functions and it seems to be ‘back to front’. It may be useful therefore to think of $$\langle r| R\rangle = R(r)$$ as extracting the coefficient of $$R$$ at the point $$r$$, which is the value of the function itself, $$R(r)$$. The equivalent process with a discrete basis set is equation (32), which extracts the coefficient $$a$$ from $$\varphi$$.
If a basis set is defined as the set of an orthogonal polynomial $$P_n$$ then this basis is $$\begin{bmatrix}P_0 & P_1& P_2 &\cdots & P_\infty )\end{bmatrix}$$ and the bra is
$\displaystyle \langle r| = \begin{bmatrix}P_0(r)& P_1(r)& P_2(r) &\cdots & P_\infty (r)\end{bmatrix}$
where the $$P_0(r)$$ etc. are functions of $$r$$. For example, if $$P$$ are the Legendre polynomials, then
$\displaystyle \langle r|=\begin{bmatrix}1 &\displaystyle \frac{3x^2 -1}{2} &\displaystyle\frac{5x^3 -3x}{2} & \cdots \end{bmatrix}$
and the terms are the polynomials. The corresponding ket is
$\displaystyle |P_n〉=\begin{bmatrix}0 &0 &\cdots & 1 &\cdots\end{bmatrix}$
where the $$1$$ is in the $$n^{th}$$ position for polynomial $$P_n$$. For example, the product
$\displaystyle \langle r|P_2\rangle = P_2(r) = \frac{3x^2 - 1}{2}$
and is the value of the function at $$r$$, and this is consistent with the notion that $$P_2$$ operates on $$r$$ to produce the function $$P_2(r)$$.
It would have been just as easy to choose a basis in momentum $$p$$ which is $$[p]$$ rather than $$r$$, then the wavefunction would be $$\langle p|R\rangle = R(p)$$, which is now a function of $$p$$ the momentum.
To represent the whole wavefunction $$\psi$$ with its radial and angular parts then it is necessary to define a different basis set to encompass all these three coordinates, $$(r, \theta, \varphi)$$ which can be represented as $$(r, \theta, \varphi)$$ where there are three indices to each basis vector but $$|(r, \theta, \varphi)\rangle$$ is still an infinite length column vector and therefore, $$\langle (r, \theta, \varphi)|\psi\rangle = \psi(r, \theta, \varphi)$$ extracts the value at $$r, \theta, \varphi$$.
At this point, however, it becomes unnecessary and rather complicated to continue with the bra-ket form and it is simpler to revert to normal functions but still holding onto the idea of basis sets. The basis set could comprise almost any set of functions provided that they can be made orthogonal over the range of values needed, alternatively the known orthogonal polynomials, Legendre, Hermite, Chebychev, etc. could be used. A function $$f(x)$$ can be expanded as a linear combination of orthogonal functions and if these from a set of wavefunctions $$\psi$$ then
$\displaystyle f(x)= c_0\psi_0(x) +c_1\psi_1(x) +\cdots +\tag{33}$
with coefficients $$c_n$$. The wavefunction need not be specified yet, but whatever it is it must be orthogonal given different quantum numbers. This is exactly what is done in the general Fourier series described in Chapter 9. There it is shown that the coefficients are
$\displaystyle c_n=\int\psi_n(x)f(x)dx \tag{34}$
If the function $$f$$ is represented as the ket $$| f\rangle$$, suppose that left multiplying by the bra $$\langle n |$$ will extract the coefficient $$c_n$$ in the same manner as for a discrete basis set. However, in a continuous basis the bra-ket represents an integral, thus $$\langle n | f\rangle \equiv \int \psi_n(x)f(x)dx$$ . Equation (34) provides a method of calculating the coefficients of the expansion provided $$f$$ has the same range as the wavefunction. To illustrate this, particle in a box wavefunctions are used to form the target function
$\displaystyle f(x) = 64 - (2 - x)^6$
where the length of the box $$L = 4$$, or $$0 \le x \le 4$$. The calculation is in the algorithm below. The series $$w(x)$$ approximates $$f(x)$$ in the basis set of the $$\psi$$ as
$w(x)= \psi_0(x)\int \psi_0(x)f(x)dx +\psi_1(x)\int \psi_1(x)f(x)dx+\cdots +$
The first eight terms ($$n=0\to 7$$) are shown in figure 5 where thee reproduction of the function $$64 - (2 - x)^6$$ is approximated by weighted particle in a box functions
$\displaystyle \psi_n(x)=\sqrt{\frac{2}{L}}\sin\left(n\pi\frac{x}{L} \right)$
The code below takes no account of the fact that the integral with $$n=0,2,4\cdots$$ is zero because of the property of the sine so that very few terms produce a quite good approximation.
Figure 5. Comparison of a series made up of weighted sine functions equation (8.33) and the target function $$64 - (2 - x)^6$$. A far better fit is obtained if more terms are added.
Why this method works can be appreciated by realizing that at larger values of $$n$$ the sine function oscillates more rapidly and so allows for the rapid rising and falling part of the curves. The coefficients automatically adjust the proportion of each sine wave to describe the target equation. Other target functions can be tried quite easily by, changing $$f$$, however, the less the function looks like a sine wave, such as $$\exp(-x)$$, the greater will be the number of terms that are needed to produce a good description of the function, $$\gt 100$$ in that case.
# Algorithm to calculate eqn 33
num = 8 # number of terms in series fig 8.5
L = 4
m = 100
wf = np.zeros(m,dtype = float) # array to hold calculated wavefunction
x = np.linspace(0,L,m) # make m x values 0 to L
psi = lambda x,n: np.sqrt(2/L)*np.sin(n*np.pi*x/L)
f = lambda x:64-(2-x)**6; # target function
q = lambda x,n: psi(x,n)*f(x) # function to integrate using quad()
for n in range(num): # sum over numer of terms
coef = quad( q ,0, L, args = (n))[0] # calculate coefficients c eqn (8.34)
wf[:]= wf[:] + psi(x[:],n)*coef # add up over all x
#plt.plot(x,wf,color='red') # plot fig 5
#plt.plot(x,f(x) ,color='green')
#plt.show()
|
2022-09-26 10:02:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9093632102012634, "perplexity": 295.4213190284628}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334855.91/warc/CC-MAIN-20220926082131-20220926112131-00046.warc.gz"}
|
http://math.stackexchange.com/questions/407272/finding-a-direct-basis-for-tangent-space-of-piece-with-boundary-of-an-oriented-m
|
# Finding a direct basis for tangent space of piece with boundary of an oriented manifold.
I have the following definition (from Hubbard's vector calculus book) for an oriented boundary of piece with boundary of an oriented manifold:
Let $M$ be a $k$ dimensional manifold oriented by $\Omega$ and $P$ a piece with boundary of $M$. Let $x$ be a point of the smooth boundary $\partial^{ \ S}_MP$ and let $\vec{V}_{\text{out}}\in T_xM$ be an outward pointing bector. Then the function $\Omega^\partial : \mathcal{B}(T_x\partial P)\to\left\{+1,-1\right\}$ given by $$\Omega_x^\partial(\vec{v}_1,...,\vec{v}_{k-1}) = \Omega_x(\vec{V}_{\text{out}},\vec{v}_1,...,\vec{v}_{k-1})$$ defines an orientation on the smooth boundary $\partial_M^{ \ S}P,$ where $\vec{v}_1,...,\vec{v}_{k-1}$ is an ordered basis of $T_x\partial_M^{ \ S}P$.
I'm working on a problem that asks me to find a basis for the $T_x\partial P$ that is direct to a certain orientation (given by an elementary 3-form). My question is this:
When I choose a basis for $T_x\partial P$, does this basis also need to lie in $T_xM$? Also, are there restrictions to how I should choose $\vec{V}_{\text{out}}$? In other words, does $\vec{V}_{\text{out}}$ need only lie in $T_xM$ and not in $T_x\partial P$?
-
Yes, $T_x\partial P$ is a hyperplane in $T_xM$. $\vec V_{\text{out}}$ is by definition the outward-pointing normal to $\partial P$. This means that if $\vec v_1,\dots,\vec v_{k-1}$ are chosen as a basis for $T_x\partial P$, then $\vec V_{\text{out}},\vec v_1,\dots,\vec v_{k-1}$ will give you a basis for $T_xM$. The whole point of this orientation stuff is that when you pick $\vec v_1,\dots,\vec v_{k-1}$ so that $\vec V_{\text{out}},\vec v_1,\dots,\vec v_{k-1}$ gives you a positively-oriented basis for $T_xM$, then you win: You have achieved a positively-oriented basis for $T_x\partial P$.
|
2015-07-31 16:10:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.936255931854248, "perplexity": 88.100975900996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988308.23/warc/CC-MAIN-20150728002308-00017-ip-10-236-191-2.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/62120/euler-summation-and-its-transformation
|
Euler summation and its transformation
The following results:
For any function $f \in C^1[a,b]$ and any $q \in \mathbb{N}$, $$\sum_{a<k \leq b, (k,q)=1} f(k)=\frac{\varphi(q)}{q} \int_a^b f(x) dx + O(\tau(q) (\sup_{x \in [a,b]} |f(x)|+\int_a^b |f'(x)| dx)).$$
And for any function $f \in C^1[a,b]$, $$\sum_{a<k \leq b}\frac{\varphi(k)}{k} f(k)=\frac{1}{\zeta(2)} \int_a^b f(x) dx + O(\log{b} (\sup_{x \in [a,b]} |f(x)|+\int_a^b |f'(x)| dx)).$$
These two looks like some kind of transformation related to Euler summation, but I have no idea. Could anyone point to me where can I read about things like these?
-
Where did you read about them in the first place? – Gerry Myerson Sep 6 '11 at 3:00
|
2016-07-26 14:24:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802363276481628, "perplexity": 170.02127096125798}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824994.73/warc/CC-MAIN-20160723071024-00234-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:1027.39003
|
zbMATH — the first resource for mathematics
Oscillation of first order neutral delay difference equations. (English) Zbl 1027.39003
Consider the neutral delay difference equation of the form $\Delta(y_n+ h_n y_{n-k})+\delta q_n f(y_{n-1}),\quad n= 0,1,\dots,\tag{$$*$$}$ where $$\delta= \pm 1$$, $$(h_n)$$ and $$(q_n)$$ are positive real sequences and $$f: \mathbb{R}\to\mathbb{R}$$ is a continuous function such that $$uf(u)> 0$$ for $$u\neq 0$$. Under some additional suppositions all solutions to $$(*)$$ are oscillatory.
MSC:
39A11 Stability of difference equations (MSC2000)
Full Text:
|
2021-10-23 05:58:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5194739103317261, "perplexity": 583.526362036807}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585561.4/warc/CC-MAIN-20211023033857-20211023063857-00638.warc.gz"}
|
https://socratic.org/questions/how-solve-it
|
# How solve it?
## Momentum physics On a particle of 2 kg there is a net force of 200 NEWTONS calculate A) the momentum developed by force in 10 seconds B) the final speed of the particle THANK YOU
Feb 10, 2017
The impulse-momentum theorem states that:
"The change in momentum of an object is equal to the impulse applied to it.
This is also an equivalent statement of Newton's second law of motion.
$\text{Impulse"="Applied Force " Fxx"Time } \Delta t$
$\text{Change in momentum"="Mass " mxx"Change in velocity } \Delta v$
Equating the two we get
$F \times \Delta t = m \times \Delta v$
(A) Plugging in given values we get
$\text{Momentum developed } = 200 \times 10 = 2000$ $\text{N" cdot "s}$
B) Change of speed of the particle$= \frac{2000}{2} = 1000 {\text{ m"cdot"s}}^{-} 1$
${\text{Final speed" ="Initial Speed"+1000" m"cdot"s}}^{-} 1$
|
2019-05-20 14:27:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6934038400650024, "perplexity": 582.0424573485975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256040.41/warc/CC-MAIN-20190520142005-20190520164005-00425.warc.gz"}
|
http://math.stackexchange.com/questions/6340/how-do-you-show-that-an-lp-entire-holomorphic-on-the-complex-plane-function?answertab=oldest
|
# How do you show that an $L^p$ entire (holomorphic on the complex plane) function is $0$?
Just to clarify, I want to show that:
If $f$ is entire and $\int_{\mathbb{C}} |f|^p dxdy <\infty$, then $f=0$.
I think I can show that this is the case for $p=2$, but I'm not sure about other values of $p$...
-
Use Hölder's inequality and Cauchy's integral formula to show the function and its derivatives all vanishes at zero.
-
+1, but what about 0<p<1? – Jonas Meyer Oct 8 '10 at 19:49
Right! Thank you, I didn't think about using Hölder's.
In short,
$$|f(0)| = \left|\int_{|z|=R} \frac{f(z)dz}{z} \right| < \left(\int_{|z|=R} |f(z)|^p |dz|\right)^{1/p} \left(\int_{|z|=R} |z|^{-q} |dz|\right)^{1/q}$$
And $\left(\int_{|z|=R} |z|^{-q} |dz|\right)^{1/q} = R^{-1+1/q} (2\pi)^{1/q} = R^{-1/p} (2\pi)^{1/q}$
$$\int_0^\infty \left((2\pi)^{1/q} |f(0)| R^{1/p}\right)^p dR < || f ||_p^p <\infty$$
And the only way for this to fly is for $f(0)=0$, for $p\le 1$.
-
If $f: \RR \to \RR$ is an analytic unbounded Lebesgue integrable function, then it must in particular satisfy this property (because it is continuous): $\forall \e,M > 0 \; \exists x_0 : |x| > |x_0| \Rightarrow \lbg(\{x \in \RR : |f(x)| > M\}) < \e,$ where $\lbg$ is the usual Lebesgue measure.
Then you just use that $f$ is analytic iff $\forall \text{ compact } K \subset \RR \; \exists C > 0 : x \in K, n \in \NN \Rightarrow \left\vert\frac{\partial^n f}{\partial x^n}(x)\right\vert \leq C^{n+1}n!$
But it is easy to see that this is impossible since $|f'|$ will have to be arbitrarily large - or in other words that you can go far enough out to the right on the x-axis that $|f|$ has to go from being larger than some big $M$ to smaller than some small $m$ on a very small interval.
This is missing quite a few details, but it should be pretty easy to fill out.
-
|
2015-12-01 06:23:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9491541385650635, "perplexity": 101.68564202652334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464536.35/warc/CC-MAIN-20151124205424-00034-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/42153-series-again.html
|
1. ## Series, again.
Hi again. I really need full solutions of the following, because it seems that my ways of solving are wrong or just too short.
1) Find the sum of the series. Posted it before, but I just didn't understand how to continue mr fantastic's solution and also I don't know anything about gamma-function, which Krizalid used in his solution. Anwser is 1-ln2, by the way.
$\displaystyle \sum_{n=1}^{\infty}\frac{6^n}{n(n+1)x^n}$ x=12
2) Check if series is convergent. I know that it does not converge but can't prove it.
$\displaystyle \sum_{n=3}^{\infty}\frac{n+1}{\sqrt[3]{n^4}\ln^4(n+1)}$
3) Find all the values of x for which the series converges. By my way of solving itm the answer is $\displaystyle x\in[-3;3)$
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{x^3n+4}{27n+\sin (2n)}\bigg)^n$
2. Originally Posted by Rist
Hi again. I really need full solutions of the following, because it seems that my ways of solving are wrong or just too short.
1) Find the sum of the series. Posted it before, but I just didn't understand how to continue mr fantastic's solution and also I don't know anything about gamma-function, which Krizalid used in his solution. Anwser is 1-ln2, by the way.
$\displaystyle \sum_{n=1}^{\infty}\frac{6^n}{n(n+1)x^n}$
2) Check if series is convergent. I know that it does not converge but can't prove it.
$\displaystyle \sum_{n=3}^{\infty}\frac{n+1}{\sqrt[3]{n^4}\ln^4(n+1)}$
3) Find all the values of x for which the series converges. By my way of solving itm the answer is $\displaystyle x\in[-3;3)$
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{x^3n+4}{27n+\sin (2n)}\bigg)^n$
I will help with the second and third, but first tell me what you hae done?
The hint for the second is the limit comparison test
and the third's hint is Root test all the way.
3. Hello
Originally Posted by Rist
Hi again. I really need full solutions of the following, because it seems that my ways of solving are wrong or just too short.
1) Find the sum of the series. Posted it before, but I just didn't understand how to continue mr fantastic's solution and also I don't know anything about gamma-function, which Krizalid used in his solution. Anwser is 1-ln2, by the way.
$\displaystyle \sum_{n=1}^{\infty}\frac{6^n}{n(n+1)x^n}$
$\displaystyle \frac{6^n}{n(n+1)x^n}=\left(\frac 6x\right)^n \frac{1}{n(n+1)}=\left(\frac 6x\right)^n \cdot \left(\frac 1n-\frac{1}{n+1}\right)$
The sum is now :
$\displaystyle S=\sum_{n=1}^{\infty} \left(\frac 6x\right)^n \cdot \frac 1n-\sum_{n=1}^{\infty} \left(\frac 6x\right)^n \cdot \frac{1}{n+1}$
--------------------------
Changing the indice of the second one, we get :
$\displaystyle \sum_{n=1}^{\infty} \left(\frac 6x\right)^n \cdot \frac{1}{n+1}=\sum_{n=2}^{\infty} \left(\frac 6x\right)^{n-1} \cdot \frac 1n$.
For the first one, we're going to rewrite in order to start from the same n.
$\displaystyle \sum_{\color{red}n=1}^{\infty} \left(\frac 6x\right)^n \cdot \frac 1n=\frac 6x \cdot \frac 11 +\sum_{\color{red}n=2}^{\infty} \left(\frac 6x\right)^n \cdot \frac 1n$$\displaystyle =\frac 6x+\sum_{n=2}^{\infty} \left(\frac 6x\right)^n \cdot \frac 1n$
--------------------------
$\displaystyle S=\frac 6x+\sum_{n=2}^{\infty} \left(\frac 6x\right)^n \cdot \frac 1n-\sum_{n=2}^{\infty} \left(\frac 6x\right)^{n-1} \cdot \frac 1n$
$\displaystyle S=\frac 6x+\sum_{n=2}^{\infty} \frac 1n \left(\left(\frac 6x\right)^n-\left(\frac 6x\right)^{n-1}\right)$
$\displaystyle S=\frac 6x+\sum_{n=2}^{\infty} \frac 1n \cdot \left(\frac 6x\right)^n \cdot \left(1-\frac x6\right)$
$\displaystyle S=\frac 6x+\left(1-\frac x6\right) \cdot \sum_{n=2}^{\infty} \frac 1n \cdot \left(\frac 6x\right)^n$
Then, recognize the power series :
$\displaystyle -\ln(1-t)=\sum_{n=1}^{\infty} \frac{t^n}{n}=t+\sum_{\color{red}n=2}^{\infty} \frac{t^n}{n}$
$\displaystyle \implies \sum_{\color{red}n=2}^{\infty} \frac{t^n}{n}=-\ln(1-t)-t$
Here, $\displaystyle t=\frac 6x$
So :
$\displaystyle S=\frac 6x+\left(1-\frac x6\right) \cdot \left(-\ln \left(1-\frac 6x\right)-\frac 6x \right)$
$\displaystyle S=\frac 6x-\ln \left(1-\frac 6x\right)-\frac 6x+\frac x6 \cdot \ln \left(1-\frac 6x\right)+1$
$\displaystyle S=-\ln \left(1-\frac 6x\right)+\frac x6 \cdot \ln \left(1-\frac 6x\right)+1$
$\displaystyle S=\ln \left(1-\frac 6x\right) \cdot \left(\frac x6-1\right)+1$
Check it, because I'm quite tired and not really sure
Edit : I just saw that you wanted x=12...
$\displaystyle S(12)=\ln \left(1-\frac 12\right) \cdot (2-1)+1=\ln \left(\frac 12\right)+1=1-\ln 2 \quad \blacksquare$
!!!!!
4. 2nd:
$\displaystyle \sum_{n=3}^{\infty}\frac{n+1}{\sqrt[3]{n^4}\ln^4(n+1)}\sim\sum_{n=3}^{\infty}\frac{n}{n^ \frac{4}{3}\ln^4n}=\sum_{n=3}^{\infty}\frac{1}{n^\ frac{1}{3}\ln^4n}$
I don't know whether it more or less than $\displaystyle \frac{1}{n}$
3rd:
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{x^3n+4}{27n+\sin (2n)}\bigg)^n$
For $\displaystyle x\neq0$:
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{x^3n+4}{27n+\sin (2n)}\bigg)^n\sim\sum_{n=2}^{\infty}n^4\bigg(\frac {x^3n}{27n}\bigg)^n$
$\displaystyle \lim_{x\to+\infty}\sqrt[n]{|a_{n}|}=\lim_{x\to+\infty}\bigg|n^\frac{4}{n}*\f rac{x^3n}{27n}\bigg|=\bigg|\frac{x^3}{27}\bigg|$
$\displaystyle -3 < x < 3$
For x=3:
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{27n+4}{27n+\sin( 2n)}\bigg)^n\sim\sum_{n=2}^{\infty}n^4$
This one divergent
For x=-3:
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{-27n+4}{27n+\sin(2n)}\bigg)^n\sim\sum_{n=2}^{\infty }n^4\bigg(\frac{4}{27n}\bigg)^n$
Convergent.
For x=0:
The same as for x=-3.
Answer is $\displaystyle x\in[-3;3)$
5. Originally Posted by Rist
2nd:
$\displaystyle \sum_{n=3}^{\infty}\frac{n+1}{\sqrt[3]{n^4}\ln^4(n+1)}\sim\sum_{n=3}^{\infty}\frac{n}{n^ \frac{4}{3}\ln^4n}=\sum_{n=3}^{\infty}\frac{1}{n^\ frac{1}{3}\ln^4n}$
I don't know whether it more or less than $\displaystyle \frac{1}{n}$
Bertrand Series
At the bottom of the page, there is the result
6. Hi
Otherwise you can show it :
$\displaystyle \frac{\frac{1}{n^{\frac{1}{3}}\ln^4n}}{\frac{1}{n} }=\frac{n^{\frac{2}{3}}}{\ln^4n}\underset{\infty}{ \to}\infty$
This implies that for $\displaystyle n$ sufficiently large $\displaystyle \frac{\frac{1}{n^{\frac{1}{3}}\ln^4n}}{\frac{1}{n} }>1$ thus $\displaystyle \frac{1}{n^{\frac{1}{3}}\ln^4n}>\frac{1}{n}$.
7. Originally Posted by Rist
(...) and also I don't know anything about gamma-function, which Krizalid used in his solution. Anwser is 1-ln2, by the way.
$\displaystyle \sum_{n=1}^{\infty}\frac{6^n}{n(n+1)x^n}$
Let's see another method:
We'll use the Maclaurin expansion for $\displaystyle \ln(1-x)$ and the integral parameter $\displaystyle \frac1{n+1}=\int_0^1x^n\,dx.$ Put these together:
\displaystyle \begin{aligned} \sum\limits_{n\,=\,1}^{\infty }{\frac{1}{n(n+1)2^{n}}}&=\sum\limits_{n\,=\,1}^{\ infty }{\Bigg\{ \frac{1}{n2^{n}}\int_{0}^{1}{x^{n}\,dx} \Bigg\}} \\ & =\int_{0}^{1}{\left\{ \sum\limits_{n\,=\,1}^{\infty }{\frac{1}{n}\bigg( \frac{x}{2} \bigg)^{n}} \right\}\,dx} \\ & =-\int_{0}^{1}{\ln \bigg( 1-\frac{x}{2} \bigg)\,dx}, \end{aligned}
where the last integral can be calculated by standard techniques.
8. What about the 3rd one?
9. Originally Posted by Rist
For the last one we see that the root test gives
$\displaystyle |x|<3$
For $\displaystyle x=3$ try the n-th term test, actually try it for both.
10. $\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{x^3n+4}{27n+\sin (2n)}\bigg)^n\sim\sum_{n=2}^{\infty}n^4\bigg(\frac {4}{27n}\bigg)^n$
It's for all negative x. Is it correct? Because if it is, the answer changes to $\displaystyle x\in(-\infty;3)$
11. Originally Posted by Rist
$\displaystyle \sum_{n=2}^{\infty}n^4\bigg(\frac{x^3n+4}{27n+\sin (2n)}\bigg)^n\sim\sum_{n=2}^{\infty}n^4\bigg(\frac {4}{27n}\bigg)^n$
It's for all negative x. Is it correct? Because if it is, the answer changes to $\displaystyle x\in(-\infty;3)$
You messed up on the asymptotic equivalence, you inexplicably dropped the n in the numerator in the quantity raised to the nth power.
|
2018-04-26 01:05:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986765384674072, "perplexity": 4884.940844242107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948029.93/warc/CC-MAIN-20180425232612-20180426012612-00259.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/geometry/CLONE-68e52840-b25a-488c-a775-8f1d0bdf0669/chapter-3-section-3-2-corresponding-parts-of-congruent-triangles-exercises-page-138/29
|
## Elementary Geometry for College Students (6th Edition)
First, we need to prove $\triangle FED\cong\triangle GED$ by SSS. Then we can deduce $\angle DEF\cong\angle DEG$. Then prove $\angle DEF$ and $\angle DEG$ must be right $\angle$s. That means $\overline{DE}\bot\overline{FG}$
*PLANNING: First, we need to prove $\triangle FED\cong\triangle GED$. Then we can deduce $\angle DEF\cong\angle DEG$. So $\angle DEF$ and $\angle DEG$ must be right $\angle$s. That means $\overline{DE}\bot\overline{FG}$ 1. $E$ is the midpoint of $\overline{FG}$ (Given) 2. $\overline{EF}\cong\overline{EG}$ (The midpoint of a line divides it into 2 congruent lines) 3. $\overline{DF}\cong\overline{DG}$ (Given) 4. $\overline{DE}\cong\overline{DE}$ (Identity) So now we have all 3 sides of $\triangle FED$ are congruent with 3 corresponding sides of $\triangle GED$. Therefore, 5. $\triangle FED\cong\triangle GED$ (SSS) 6. $\angle DEF\cong\angle DEG$ (CPCTC) However, we see that $\angle DEF+\angle DEG=\angle FEG=180^o$ (since $\overline{FG}$ is a line) Therefore, the value of each angle must be $90^o$. So, 7. $\angle DEF$ and $\angle DEG$ are both right $\angle$s. 8. $\overline{DE}\bot\overline{FG}$ (if a line intersects another one and creates 2 right angles, then those 2 lines are perpendicular with each other)
|
2022-06-29 13:30:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817052245140076, "perplexity": 277.56901955525984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00631.warc.gz"}
|
https://cs.stackexchange.com/questions/109585/is-l-langle-m-rangle-mid-lm-subseteq-hp-in-core
|
# Is $L=\{\langle M\rangle\mid L(M)\subseteq HP\}\in coRE$?
My intuition is that $$L\notin coRE$$, but I haven't managed to prove that $$HP \le L$$, as previously I only saw reductions from $$HP$$ or from $$\overline{HP}$$ with $$f$$ such that $$f((\langle M\rangle,x))=\langle M_x\rangle$$, while $$M_x$$ performs some simulation of $$M$$ on $$x$$.
(The answer eluded me for some time, so I started writing a question here. After I found the surprisingly simple answer, I decided to post it (Q&A-style) anyway.)
We would show that $$HP \le L$$.
Let $$f:\Sigma^*\rightarrow \Sigma^*$$ be a function such that for any $$x\in \Sigma^*$$: $$f(x)=\langle M_x\rangle$$ while $$M_x$$ is a TM that accepts $$x$$ and rejects every other word.
$$f$$ is computable, as it only requires writing the encoding of a very simple TM.
Now, for any $$x\in \Sigma^*$$:
• $$L(M_x)=\{x\}$$
• If $$x\in HP$$, then $$\{x\}\subseteq HP$$, and so $$\langle M_x\rangle \in L$$.
• Otherwise, $$x\notin HP$$, and then $$\{x\}\not\subseteq HP$$, and so $$\langle M_x\rangle \notin L$$.
Therefore, indeed $$HP \le L$$.
Thus, it must be that $$L\notin coRE$$, because otherwise the reduction would imply that $$HP\in coRE$$, which is false.
|
2019-11-14 12:30:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9730525016784668, "perplexity": 136.02370067739494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668416.11/warc/CC-MAIN-20191114104329-20191114132329-00190.warc.gz"}
|
https://math.meta.stackexchange.com/questions/32052/gray-or-yellow-background-for-questions-and-answers-using-and
|
# Gray or yellow background for questions and answers using > and >!
Many times I have asked myself why there is no longer a grey or other colour background as there was a few months ago using the > command. Using a smartphone you can see the color of the background. The visual aspect is important. You can only see the background when you set the >! command, which makes the question visible when you hover the mouse pointer. Is there any possibility to restore it in all communities?
Just an example without gray background.
Just an example without gray background and formula: $$a^2+b^2=c^2$$.
• There's a question announcing this on Meta SE. Pretty much everyone who isn't the designer thinks it's a bad idea. They said they wanted to make it more like (git?) – Matt Samuel Jul 4 '20 at 21:30
• @MattSamuel Hi and thank you very much for your information. It is very ugly without a background. It is a visual impact: your eyes immediately point to the background of the question and then read the rest. At least that happens for me. – Sebastiano Jul 4 '20 at 21:32
• – Matt Samuel Jul 4 '20 at 21:52
• @MattSamuel I give you my sincere "grazie assai". I have written a comment to Aaron Shekey♦ on your link. – Sebastiano Jul 4 '20 at 22:01
• I don't really think that (specific-question) really fits this question. The purpose of this tag is described in the tag-info. – Martin Sleziak Jul 4 '20 at 23:39
• There are browser add-ons (like Stylus for Firefox -- don't install Stylish, which was the original version but was taken over by marketers) that allow one to modify specific HTML fragments on pages from specific domains. I'm not very good with it, and it depends of the original HTML being regular enough that you can reliably locate the section you want modified, but if you get no love from the SE staff you might be able to fix it in your own browser with one of these. – JonathanZ supports MonicaC Jul 4 '20 at 23:55
|
2021-01-19 09:44:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3355043828487396, "perplexity": 1205.9744729495087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518201.29/warc/CC-MAIN-20210119072933-20210119102933-00175.warc.gz"}
|
https://bauripalash.github.io/mewlbook/id_assign.html
|
# Identifiers and Assignment
In mewl, identifiers just look like mew numbers, so be careful when reading/writing mewl programs.
Some backstory:
While I was designing mewl identifiers, I was confused about identifiers, I wanted something crazy but usable. So I just decided to go with mew, but to distinguish numbers from identifiers, I needed something special . Now, please continue.
### Identifiers/Variables
In mewl, identifiers look something like this -> ~mew , ~mewmew etc.
So basically, Identifiers follow the same syntax as mew numbers but with a leading ~ (tilde) character.
~mew , ~mewmew , ~mewmewmewmewmew , these all are identifiers.
## Assignment
Assignments are little awkward, it'd be easy to understand with examples:
[=mew [+ mew mew]]
This expression assigns 2 to the variable ~mew and this expression [:: ~mew] would print the value of variable ~mew (which is 2 )
If you want to assign something to variable, write the identifier with a leading = (equal sign) without any space(s).
In short, =mew tells the interpreter to evaluate the following expression(s) and store it in variable ~mew.
For example,
[=mewmew [* mewmew mewmew]] would store 9 in a variable which can be accessed via ~mewmew.
I know, it is confusing.
## More examples:
• [=mewmewmew [+ mew mew [* mewmew mewmew]]] would store 6 to a variable which to be accessed via ~mewmewmew
• Another one
[=mewmewmewmewmew [' mew mew mewmew]]
[:: ~mewmewmewmewmew]
it would print 112 to stdout
[Info Note]: Assiging something using =mewmewmewmewmew wouldn't change the meaning of mewmewmewmewmew as a mew number which still is 5
|
2023-03-28 04:28:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3841630518436432, "perplexity": 12521.748286629962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00523.warc.gz"}
|
http://starlink.eao.hawaii.edu/devdocs/sun211.htx/sun211ss548.html
|
### ZoomMap
#### Description:
The ZoomMap class implements a Mapping which performs a " zoom" transformation by multiplying all coordinate values by the same scale factor (the inverse transformation is performed by dividing by this scale factor). The number of coordinate values representing each point is unchanged.
#### Inheritance
The ZoomMap class inherits from the Mapping class.
#### Attributes
In addition to those attributes common to all Mappings, every ZoomMap also has the following attributes:
• Zoom: ZoomMap scale factor
#### Functions
The ZoomMap class does not define any new functions beyond those which are applicable to all Mappings.
|
2022-01-22 11:05:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44469767808914185, "perplexity": 3468.8014603581123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303845.33/warc/CC-MAIN-20220122103819-20220122133819-00090.warc.gz"}
|
http://accesspediatrics.mhmedical.com/content.aspx?bookid=1429§ionid=84706534
|
Chapter 36
### 1. Enteric duplication cyst
###### Figure 36-1
Enteric duplication cyst.
Longitudinal sonographic imaging of the right upper quadrant of an infant shows an oval cyst just inferior to the liver margin. The wall consists of an outer hypoechoic layer (arrow) and in inner hyperechoic layer. This is the muscular rim sign or double wall sign. There are foci of echogenic debris adherent to the inner wall of the cyst.
### 2. Duplication cyst
###### Figure 36-2
Duplication cyst.
There is echogenic solid material (arrow), likely clotted blood, in the dependent portion of this duodenal duplication cyst. Fine echoes due to particulate debris floating in the cyst fluid are also present.
### 3. Tubular duplication of the small intestine
###### Figure 36-3
Tubular duplication of the small intestine.
99mTc scintigraphy (anterior image) demonstrates tortuous linear uptake in the mid and lower portions of the abdomen (arrows). There is normal accumulation in the stomach wall and bladder lumen.
### 4. Duplication cyst
###### Figure 36-4
Duplication cyst.
An enhancing, well-defined wall (arrow) surrounds this duplication cyst of the ileum. There is no air or ingested contrast within the cyst lumen.
### 5. Duodenal duplication cyst
###### Figure 36-5
Duodenal duplication cyst.
A. A coronal MRCP image of a 6-day-old infant shows a large upper abdominal cyst (C). There is no bile duct dilation. A normal gallbladder (arrow) is present superior to the cyst. B. The cyst (C) is hyperintense on this T2-weighted axial image. The gallbladder (arrow) and liver are normal in appearance. The stomach is located to the leftof the cyst. There are small hyperintense cysts in the superior aspects of the kidneys. C. There is faint echogenic debris within the cyst (C) on this longitudinal sonographic image. The gallbladder (arrow) is normal.
### 6. Meckel diverticulum
###### Figure 36-6
Meckel diverticulum.
There is a focus of abnormal 99mTc accumulation in the lower abdomen (arrow) on this 30-minute anterior image of a 4-year-old child with hematochezia. Note normal activity in the stomach wall and bladder lumen.
### 7. Meckel diverticulum
###### Figure 36-7
Meckel diverticulum.
A. An anterior image obtained 5 minutes ...
Sign in to your MyAccess profile while you are actively authenticated on this site via your institution (you will be able to verify this by looking at the top right corner of the screen - if you see your institution's name, you are authenticated). Once logged in to your MyAccess profile, you will be able to access your institution's subscription for 90 days from any location. You must be logged in while authenticated at least once every 90 days to maintain this remote access.
Ok
If your institution subscribes to this resource, and you don't have a MyAccess profile, please contact your library's reference desk for information on how to gain access to this resource from off-campus.
## Subscription Options
### AccessPediatrics Full Site: One-Year Subscription
Connect to the full suite of AccessPediatrics content and resources including 20+ textbooks such as Rudolph’s Pediatrics and The Pediatric Practice series, high-quality procedural videos, images, and animations, interactive board review, an integrated pediatric drug database, and more.
|
2017-02-27 06:42:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1743764728307724, "perplexity": 13904.552465475237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00190-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2847869/equality-of-upper-and-lower-lebesgue-integrals
|
# Equality of Upper and Lower Lebesgue Integrals
I am stuck on the following problem listed below from Tao's "Introduction to Measure Theory".
Exercise 11: Let $f:\mathbb{R}^d \to [0, \infty]$ be measurable, bounded, and vanishing outside of a set of finite measure. Show that the lower and upper Lebesgue integrals of f agree. (Hint: use Exercise 4.)
Here is exercise 4 for reference.
Exercise 4: Let $f:\mathbb{R}^d \to [0, \infty]$. Show that f is a bounded unsigned measurable function iff $f$ is the uniform limit of bounded simple functions.
What I have thought of so far: Since $f$ is measurable, bounded, and finitely supported, there is a sequence of bounded simple functions $\{\phi_n\}_{n = 1}^{\infty}$ such that $\phi_n \to f$ uniformly as $n \to \infty$. It would suffice to show that $$\lim_{n \to \infty}\underline{\int_{\mathbb{R^d}}} \phi_n(x) dx \to \underline{\int_{\mathbb{R^d}}} f(x) dx$$ and $$\lim_{n \to \infty}\overline{\int_{\mathbb{R^d}}} \phi_n(x) dx \to \overline{\int_{\mathbb{R^d}}} f(x) dx$$
Would this be the right approach? Usually, I would try to use one of the convergence theorems (dominated, monotone, bounded convergence), but they have not been discussed in the book and I assumed there was an simpler way to approach this problem.
|
2019-05-24 18:14:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.934653639793396, "perplexity": 101.86200404914898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257699.34/warc/CC-MAIN-20190524164533-20190524190533-00144.warc.gz"}
|
https://sciencing.com/calculate-triangle-one-side-given-8464316.html
|
# How to Calculate the Area of Triangle When One Side Is Given
••• NIKITA GINDEA/iStock/GettyImages
Print
Geometry is the study of shapes and figures that take up a given space. Geometric problems try to identify the size and scope of those shapes by solving mathematical equations. Geometry problems have two types of information: "givens" and "unknowns." The givens represent the information in the problem that is given to you. The unknowns are the pieces of the equation you must solve. It is possible to find the area of a triangle with only one side length given. However, to solve the problem, you also need to know two of the interior angles.
#### TL;DR (Too Long; Didn't Read)
To calculate the area of a triangle given one side and two angles, solve for another side using the Law of Sines, then find the area with the formula: area = 1/2 × b × c × sin(A).
## Find Third Angle
Determine the third angle of the triangle. For example, the sample problem has a triangle where side B is 10 units. Both angle A and angle B are 50 degrees. Solve for angle C. Math law states that the angles of a triangle add up to 180 degrees, therefore
\text{Angle} A + \text{Angle} B + \text{Angle} C = 180.
Insert the given angles into the equation.
50 + 50 + C = 180
Solve for C by adding the first two angles and subtracting from 180.
180 - 100 = 80
Angle C is 80 degrees.
## Set up Rule of Sines
Use the sine rule to re-write the equation. The sine rule is a mathematical rule that aids in solving unknown angles and lengths. It states:
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}
In the equation the small a, b and c represent the lengths, while the capital A, B and C represent the internal angles of the triangle. Because all portions of the equation equal each other, you can use any two portions. Use the portion for the side you were given. In the sample problem this is side B, 10 units.
Following the laws of math re-write the equation as:
c = \frac{b \sin C}{\sin B}
The small c represents the side you are solving for. The capital C is moved to the numerator on the opposite side of the equation because according to the laws of math you must isolate c in order to solve for it. When moving a denominator, it goes to the numerator so you can later multiply it.
## Solve Rule of Sines
Insert the givens into your new equation.
c = \frac{10 × \sin (100)}{\sin (50)}
Place this into your geometry calculator to return a result of:
c = 12.86
## Find Triangle Area
Solve for the area of the triangle. To find the area of a triangle you need two side lengths which you have now obtained. One equation for the area of a triangle is
\text{area} = \frac{1}{2} × b × c × \sin(A)
The "b" and "c" represent two sides and A is the angle between them.
Therefore:
\begin{aligned} \text{area} &= 0.5 × 10 × 12.86 × \sin(50) \\ &= 49.26 \text{ units}^2 \end{aligned}
Dont Go!
We Have More Great Sciencing Articles!
|
2021-12-01 16:13:26
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9450648427009583, "perplexity": 1122.429254710122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.6/warc/CC-MAIN-20211201143545-20211201173545-00487.warc.gz"}
|
https://pypi.org/project/graphql-compiler/
|
Turn complex GraphQL queries into optimized database queries.
# graphql-compiler
Turn complex GraphQL queries into optimized database queries.
pip install graphql-compiler
## Quick Overview
Through the GraphQL compiler, users can write powerful queries that uncover deep relationships in the data while not having to worry about the underlying database query language. The GraphQL compiler turns read-only queries written in GraphQL syntax to different query languages.
Furthermore, the GraphQL compiler validates queries through the use of a GraphQL schema that specifies the underlying schema of the database. We can currently autogenerate a GraphQL schema by introspecting an OrientDB database, (see End to End Example).
In the near future, we plan to add schema autogeneration from SQLAlchemy metadata as well.
For a more detailed overview and getting started guide, please see our blog post.
## Features
• Databases and Query Languages: We currently support a single database, OrientDB version 2.2.28+, and two query languages that OrientDB supports: the OrientDB dialect of gremlin, and OrientDB's own custom SQL-like query language that we refer to as MATCH, after the name of its graph traversal operator. With OrientDB, MATCH should be the preferred choice for most users, since it tends to run faster than gremlin, and has other desirable properties. See the Execution model section for more details.
Support for relational databases including PostgreSQL, MySQL, SQLite, and Microsoft SQL Server is a work in progress. A subset of compiler features are available for these databases. See the SQL section for more details.
• GraphQL Language Features: We prioritized and implemented a subset of all functionality supported by the GraphQL language. We hope to add more functionality over time.
## End-to-End Example
Even though this example specifically targets an OrientDB database, it is meant as a generic end-to-end example of how to use the GraphQL compiler.
from graphql.utils.schema_printer import print_schema
from graphql_compiler import (
get_graphql_schema_from_orientdb_schema_data, graphql_to_match
)
from graphql_compiler.schema_generation.orientdb.utils import ORIENTDB_SCHEMA_RECORDS_QUERY
# Step 1: Get schema metadata from hypothetical Animals database.
client = your_function_that_returns_an_orientdb_client()
schema_records = client.command(ORIENTDB_SCHEMA_RECORDS_QUERY)
schema_data = [record.oRecordData for record in schema_records]
# Step 2: Generate GraphQL schema from metadata.
schema, type_equivalence_hints = get_graphql_schema_from_orientdb_schema_data(schema_data)
print(print_schema(schema))
# schema {
# query: RootSchemaQuery
# }
#
# directive @filter(op_name: String!, value: [String!]!) on FIELD | INLINE_FRAGMENT
#
# directive @tag(tag_name: String!) on FIELD
#
# directive @output(out_name: String!) on FIELD
#
# directive @output_source on FIELD
#
# directive @optional on FIELD
#
# directive @recurse(depth: Int!) on FIELD
#
# directive @fold on FIELD
#
# type Animal {
# name: String
# net_worth: Int
# limbs: Int
# }
#
# type RootSchemaQuery{
# Animal: [Animal]
# }
# Step 3: Write GraphQL query that returns the names of all animals with a certain net worth.
# Note that we prefix net_worth with '$' and surround it with quotes to indicate it's a parameter. graphql_query = ''' { Animal { name @output(out_name: "animal_name") net_worth @filter(op_name: "=", value: ["$net_worth"])
}
}
'''
parameters = {
'net_worth': '100',
}
# Step 4: Use autogenerated GraphQL schema to compile query into the target database language.
compilation_result = graphql_to_match(schema, graphql_query, parameters, type_equivalence_hints)
print(compilation_result.query)
# SELECT Animal___1.name AS animal_name
# FROM ( MATCH { class: Animal, where: ((net_worth = decimal("100"))), as: Animal___1 }
# RETURN $matches) ## Definitions • Vertex field: A field corresponding to a vertex in the graph. In the below example, Animal and out_Entity_Related are vertex fields. The Animal field is the field at which querying starts, and is therefore the root vertex field. In any scope, fields with the prefix out_ denote vertex fields connected by an outbound edge, whereas ones with the prefix in_ denote vertex fields connected by an inbound edge. { Animal { name @output(out_name: "name") out_Entity_Related { ... on Species { description @output(out_name: "description") } } } } • Property field: A field corresponding to a property of a vertex in the graph. In the above example, the name and description fields are property fields. In any given scope, property fields must appear before vertex fields. • Result set: An assignment of vertices in the graph to scopes (locations) in the query. As the database processes the query, new result sets may be created (e.g. when traversing edges), and result sets may be discarded when they do not satisfy filters or type coercions. After all parts of the query are processed by the database, all remaining result sets are used to form the query result, by taking their values at all properties marked for output. • Scope: The part of a query between any pair of curly braces. The compiler infers the type of each scope. For example, in the above query, the scope beginning with Animal { is of type Animal, the one beginning with out_Entity_Related { is of type Entity, and the one beginning with ... on Species { is of type Species. • Type coercion: An operation that produces a new scope of narrower type than the scope in which it exists. Any result sets that cannot satisfy the narrower type are filtered out and not returned. In the above query, ... on Species is a type coercion which takes its enclosing scope of type Entity, and coerces it into a narrower scope of type Species. This is possible since Entity is an interface, and Species is a type that implements the Entity interface. ## Directives ### @optional Without this directive, when a query includes a vertex field, any results matching that query must be able to produce a value for that vertex field. Applied to a vertex field, this directive prevents result sets that are unable to produce a value for that field from being discarded, and allowed to continue processing the remainder of the query. #### Example Use { Animal { name @output(out_name: "name") out_Animal_ParentOf @optional { name @output(out_name: "child_name") } } } For each Animal: • if it is a parent of another animal, at least one row containing the parent and child animal's names, in the name and child_name columns respectively; • if it is not a parent of another animal, a row with its name in the name column, and a null value in the child_name column. #### Constraints and Rules • @optional can only be applied to vertex fields, except the root vertex field. • It is allowed to expand vertex fields within an @optional scope. However, doing so is currently associated with a performance penalty in MATCH. For more detail, see: Expanding @optional vertex fields. • @recurse, @fold, or @output_source may not be used at the same vertex field as @optional. • @output_source and @fold may not be used anywhere within a scope marked @optional. If a given result set is unable to produce a value for a vertex field marked @optional, any fields marked @output within that vertex field return the null value. When filtering (via @filter) or type coercion (via e.g. ... on Animal) are applied at or within a vertex field marked @optional, the @optional is given precedence: • If a given result set cannot produce a value for the optional vertex field, it is preserved: the @optional directive is applied first, and no filtering or type coercion can happen. • If a given result set is able to produce a value for the optional vertex field, the @optional does not apply, and that value is then checked against the filtering or type coercion. These subsequent operations may then cause the result set to be discarded if it does not match. For example, suppose we have two Person vertices with names Albert and Betty such that there is a Person_Knows edge from Albert to Betty. Then the following query: { Person { out_Person_Knows @optional { name @filter(op_name: "=", value: ["$name"])
}
name @output(out_name: "person_name")
}
}
with runtime parameter
{
"name": "Charles"
}
would output an empty list because the Person_Knows edge from Albert to Betty satisfies the @optional directive, but Betty doesn't match the filter checking for a node with name Charles.
However, if no such Person_Knows edge existed from Albert, then the output would be
{
name: 'Albert'
}
because no such edge can satisfy the @optional directive, and no filtering happens.
### @output
Denotes that the value of a property field should be included in the output. Its out_name argument specifies the name of the column in which the output value should be returned.
#### Example Use
{
Animal {
name @output(out_name: "animal_name")
}
}
This query returns the name of each Animal in the graph, in a column named animal_name.
#### Constraints and Rules
• @output can only be applied to property fields.
• The value provided for out_name may only consist of upper or lower case letters (A-Z, a-z), or underscores (_).
• The value provided for out_name cannot be prefixed with ___ (three underscores). This namespace is reserved for compiler internal use.
• For any given query, all out_name values must be unique. In other words, output columns must have unique names.
If the property field marked @output exists within a scope marked @optional, result sets that are unable to assign a value to the optional scope return the value null as the output of that property field.
### @fold
Applying @fold on a scope "folds" all outputs from within that scope: rather than appearing on separate rows in the query result, the folded outputs are coalesced into lists starting at the scope marked @fold.
#### Example Use
{
Animal {
name @output(out_name: "animal_name")
out_Animal_ParentOf @fold {
name @output(out_name: "child_names")
}
}
}
Each returned row has two columns: animal_name with the name of each Animal in the graph, and child_names with a list of the names of all children of the Animal named animal_name. If a given Animal has no children, its child_names list is empty.
#### Constraints and Rules
• @fold can only be applied to vertex fields, except the root vertex field.
• May not exist at the same vertex field as @recurse, @optional, or @output_source.
• Any scope that is either marked with @fold or is nested within a @fold marked scope, may expand at most one vertex field.
• There must be at least one @output field within a @fold scope.
• All @output fields within a @fold traversal must be present at the innermost scope. It is invalid to expand vertex fields within a @fold after encountering an @output directive.
• @tag, @recurse, @optional, @output_source and @fold may not be used anywhere within a scope marked @fold.
• Use of type coercions or @filter at or within the vertex field marked @fold is allowed. Only data that satisfies the given type coercions and filters is returned by the @fold.
• If the compiler is able to prove that the type coercion in the @fold scope is actually a no-op, it may optimize it away. See the Optional type_equivalence_hints compilation parameter section for more details.
#### Example
The following GraphQL is not allowed and will produce a GraphQLCompilationError. This query is invalid for two separate reasons:
• It expands vertex fields after an @output directive (outputting animal_name)
• The in_Animal_ParentOf scope, which is within a scope marked @fold, expands two vertex fields instead of at most one.
{
Animal {
out_Animal_ParentOf @fold {
name @output(out_name: "animal_name")
in_Animal_ParentOf {
out_Animal_OfSpecies {
uuid @output(out_name: "species_id")
}
out_Entity_Related {
... on Animal {
name @output(out_name: "relative_name")
}
}
}
}
}
}
The following is a valid use of @fold:
{
Animal {
out_Animal_ParentOf @fold {
in_Animal_ParentOf {
in_Animal_ParentOf {
out_Entity_Related {
... on Animal {
name @output(out_name: "final_name")
}
}
}
}
}
}
}
### @tag
The @tag directive enables filtering based on values encountered elsewhere in the same query. Applied on a property field, it assigns a name to the value of that property field, allowing that value to then be used as part of a @filter directive.
To supply a tagged value to a @filter directive, place the tag name (prefixed with a % symbol) in the @filter's value array. See Passing parameters for more details.
#### Example Use
{
Animal {
name @tag(tag_name: "parent_name")
out_Animal_ParentOf {
name @filter(op_name: "<", value: ["%parent_name"])
@output(out_name: "child_name")
}
}
}
Each row returned by this query contains, in the child_name column, the name of an Animal that is the child of another Animal, and has a name that is lexicographically smaller than the name of its parent.
#### Constraints and Rules
• @tag can only be applied to property fields.
• The value provided for tag_name may only consist of upper or lower case letters (A-Z, a-z), or underscores (_).
• For any given query, all tag_name values must be unique.
• Cannot be applied to property fields within a scope marked @fold.
• Using a @tag and a @filter that references the tag within the same vertex is allowed, so long as the two do not appear on the exact same property field.
### @filter
Allows filtering of the data to be returned, based on any of a set of filtering operations. Conceptually, it is the GraphQL equivalent of the SQL WHERE keyword.
See Supported filtering operations for details on the various types of filtering that the compiler currently supports. These operations are currently hardcoded in the compiler; in the future, we may enable the addition of custom filtering operations via compiler plugins.
Multiple @filter directives may be applied to the same field at once. Conceptually, it is as if the different @filter directives were joined by SQL AND keywords.
Using a @tag and a @filter that references the tag within the same vertex is allowed, so long as the two do not appear on the exact same property field.
#### Passing Parameters
The @filter directive accepts two types of parameters: runtime parameters and tagged parameters.
Runtime parameters are represented with a $ prefix (e.g. $foo), and denote parameters whose values will be known at runtime. The compiler will compile the GraphQL query leaving a spot for the value to fill at runtime. After compilation, the user will have to supply values for all runtime parameters, and their values will be inserted into the final query before it can be executed against the database.
Consider the following query:
{
Animal {
name @output(out_name: "animal_name")
color @filter(op_name: "=", value: ["$animal_color"]) } } It returns one row for every Animal vertex that has a color equal to $animal_color. Each row contains the animal's name in a column named animal_name. The parameter $animal_color is a runtime parameter -- the user must pass in a value (e.g. {"animal_color": "blue"}) that will be inserted into the query before querying the database. Tagged parameters are represented with a % prefix (e.g. %foo) and denote parameters whose values are derived from a property field encountered elsewhere in the query. If the user marks a property field with a @tag directive and a suitable name, that value becomes available to use as a tagged parameter in all subsequent @filter directives. Consider the following query: { Animal { name @tag(out_name: "parent_name") out_Animal_ParentOf { name @filter(op_name: "has_substring", value: ["%parent_name"]) @output(out_name: "child_name") } } } It returns the names of animals that contain their parent's name as a substring of their own. The database captures the value of the parent animal's name as the parent_name tag, and this value is then used as the %parent_name tagged parameter in the child animal's @filter. We considered and rejected the idea of allowing literal values (e.g. 123) as @filter parameters, for several reasons: • The GraphQL type of the @filter directive's value field cannot reasonably encompass all the different types of arguments that people might supply. Even counting scalar types only, there's already ID, Int, Float, Boolean, String, Date, DateTime... -- way too many to include. • Literal values would be used when the parameter's value is known to be fixed. We can just as easily accomplish the same thing by using a runtime parameter with a fixed value. That approach has the added benefit of potentially reducing the number of different queries that have to be compiled: two queries with different literal values would have to be compiled twice, whereas using two different sets of runtime arguments only requires the compilation of one query. • We were concerned about the potential for accidental misuse of literal values. SQL systems have supported stored procedures and parameterized queries for decades, and yet ad-hoc SQL query construction via simple string interpolation is still a serious problem and is the source of many SQL injection vulnerabilities. We felt that disallowing literal values in the query will drastically reduce both the use and the risks of unsafe string interpolation, at an acceptable cost. #### Constraints and Rules • The value provided for op_name may only consist of upper or lower case letters (A-Z, a-z), or underscores (_). • Values provided in the value list must start with either $ (denoting a runtime parameter) or % (denoting a tagged parameter), followed by exclusively upper or lower case letters (A-Z, a-z) or underscores (_).
• The @tag directives corresponding to any tagged parameters in a given @filter query must be applied to fields that appear either at the same vertex as the one with the @filter, or strictly before the field with the @filter directive.
• "Can't compare apples and oranges" -- the GraphQL type of the parameters supplied to the @filter must match the GraphQL types the compiler infers based on the field the @filter is applied to.
• If the @tag corresponding to a tagged parameter originates from within a vertex field marked @optional, the emitted code for the @filter checks if the @optional field was assigned a value. If no value was assigned to the @optional field, comparisons against the tagged parameter from within that field return True.
• For example, assuming %from_optional originates from an @optional scope, when no value is assigned to the @optional field:
• using @filter(op_name: "=", value: ["%from_optional"]) is equivalent to not having the filter at all;
• using @filter(op_name: "between", value: ["$lower", "%from_optional"]) is equivalent to @filter(op_name: ">=", value: ["$lower"]).
• Using a @tag and a @filter that references the tag within the same vertex is allowed, so long as the two do not appear on the exact same property field.
### @recurse
Applied to a vertex field, specifies that the edge connecting that vertex field to the current vertex should be visited repeatedly, up to depth times. The recursion always starts at depth = 0, i.e. the current vertex -- see the below sections for a more thorough explanation.
#### Example Use
Say the user wants to fetch the names of the children and grandchildren of each Animal. That could be accomplished by running the following two queries and concatenating their results:
{
Animal {
name @output(out_name: "ancestor")
out_Animal_ParentOf {
name @output(out_name: "descendant")
}
}
}
{
Animal {
name @output(out_name: "ancestor")
out_Animal_ParentOf {
out_Animal_ParentOf {
name @output(out_name: "descendant")
}
}
}
}
If the user then wanted to also add great-grandchildren to the descendants output, that would require yet another query, and so on. Instead of concatenating the results of multiple queries, the user can simply use the @recurse directive. The following query returns the child and grandchild descendants:
{
Animal {
name @output(out_name: "ancestor")
out_Animal_ParentOf {
out_Animal_ParentOf @recurse(depth: 1) {
name @output(out_name: "descendant")
}
}
}
}
Each row returned by this query contains the name of an Animal in the ancestor column and the name of its child or grandchild in the descendant column. The out_Animal_ParentOf vertex field marked @recurse is already enclosed within another out_Animal_ParentOf vertex field, so the recursion starts at the "child" level (the out_Animal_ParentOf not marked with @recurse). Therefore, the descendant column contains the names of an ancestor's children (from depth = 0 of the recursion) and the names of its grandchildren (from depth = 1).
Recursion using this directive is possible since the types of the enclosing scope and the recursion scope work out: the @recurse directive is applied to a vertex field of type Animal and its vertex field is enclosed within a scope of type Animal. Additional cases where recursion is allowed are described in detail below.
The descendant column cannot have the name of the ancestor animal since the @recurse is already within one out_Animal_ParentOf and not at the root Animal vertex field. Similarly, it cannot have descendants that are more than two steps removed (e.g., great-grandchildren), since the depth parameter of @recurse is set to 1.
Now, let's see what happens when we eliminate the outer out_Animal_ParentOf vertex field and simply have the @recurse applied on the out_Animal_ParentOf in the root vertex field scope:
{
Animal {
name @output(out_name: "ancestor")
out_Animal_ParentOf @recurse(depth: 1) {
name @output(out_name: "self_or_descendant")
}
}
}
In this case, when the recursion starts at depth = 0, the Animal within the recursion scope will be the same Animal at the root vertex field, and therefore, in the depth = 0 step of the recursion, the value of the self_or_descendant field will be equal to the value of the ancestor field.
#### Constraints and Rules
• "The types must work out" -- when applied within a scope of type A, to a vertex field of type B, at least one of the following must be true:
• A is a GraphQL union;
• B is a GraphQL interface, and A is a type that implements that interface;
• A and B are the same type.
• @recurse can only be applied to vertex fields other than the root vertex field of a query.
• Cannot be used within a scope marked @optional or @fold.
• The depth parameter of the recursion must always have a value greater than or equal to 1. Using depth = 1 produces the current vertex and its neighboring vertices along the specified edge.
• Type coercions and @filter directives within a scope marked @recurse do not limit the recursion depth. Conceptually, recursion to the specified depth happens first, and then type coercions and @filter directives eliminate some of the locations reached by the recursion.
• As demonstrated by the examples above, the recursion always starts at depth 0, so the recursion scope always includes the vertex at the scope that encloses the vertex field marked @recurse.
### @output_source
See the Completeness of returned results section for a description of the directive and examples.
#### Constraints and Rules
• May exist at most once in any given GraphQL query.
• Can exist only on a vertex field, and only on the last vertex field used in the query.
• Cannot be used within a scope marked @optional or @fold.
## Supported filtering operations
### Comparison operators
Supported comparison operators:
• Equal to: =
• Not equal to: !=
• Greater than: >
• Less than: <
• Greater than or equal to: >=
• Less than or equal to: <=
#### Example Use
##### Equal to (=):
{
Species {
name @filter(op_name: "=", value: ["$species_name"]) uuid @output(out_name: "species_uuid") } } This returns one row for every Species whose name is equal to the value of the $species_name parameter. Each row contains the uuid of the Species in a column named species_uuid.
##### Greater than or equal to (>=):
{
Animal {
name @output(out_name: "name")
birthday @output(out_name: "birthday")
@filter(op_name: ">=", value: ["$point_in_time"]) } } This returns one row for every Animal vertex that was born after or on a $point_in_time. Each row contains the animal's name and birthday in columns named name and birthday, respectively.
#### Constraints and Rules
• All comparison operators must be on a property field.
### name_or_alias
Allows you to filter on vertices which contain the exact string $wanted_name_or_alias in their name or alias fields. #### Example Use { Animal @filter(op_name: "name_or_alias", value: ["$wanted_name_or_alias"]) {
name @output(out_name: "name")
}
}
This returns one row for every Animal vertex whose name and/or alias is equal to $wanted_name_or_alias. Each row contains the animal's name in a column named name. The value provided for $wanted_name_or_alias must be the full name and/or alias of the Animal. Substrings will not be matched.
#### Constraints and Rules
• Must be on a vertex field that has name and alias properties.
### between
#### Example Use
{
Animal {
name @output(out_name: "name")
birthday @filter(op_name: "between", value: ["$lower", "$upper"])
@output(out_name: "birthday")
}
}
This returns:
• One row for every Animal vertex whose birthday is in between $lower and $upper dates (inclusive). Each row contains the animal's name in a column named name.
#### Constraints and Rules
• Must be on a property field.
• The lower and upper bounds represent an inclusive interval, which means that the output may contain values that match them exactly.
### in_collection
#### Example Use
{
Animal {
name @output(out_name: "animal_name")
color @output(out_name: "color")
@filter(op_name: "in_collection", value: ["$colors"]) } } This returns one row for every Animal vertex which has a color contained in a list of colors. Each row contains the Animal's name and color in columns named animal_name and color, respectively. #### Constraints and Rules • Must be on a property field that is not of list type. ### not_in_collection #### Example Use { Animal { name @output(out_name: "animal_name") color @output(out_name: "color") @filter(op_name: "not_in_collection", value: ["$colors"])
}
}
This returns one row for every Animal vertex which has a color not contained in a list of colors. Each row contains the Animal's name and color in columns named animal_name and color, respectively.
#### Constraints and Rules
• Must be on a property field that is not of list type.
### has_substring
#### Example Use
{
Animal {
name @filter(op_name: "has_substring", value: ["$substring"]) @output(out_name: "animal_name") } } This returns one row for every Animal vertex whose name contains the value supplied for the $substring parameter. Each row contains the matching Animal's name in a column named animal_name.
#### Constraints and Rules
• Must be on a property field of string type.
### contains
#### Example Use
{
Animal {
alias @filter(op_name: "contains", value: ["$wanted"]) name @output(out_name: "animal_name") } } This returns one row for every Animal vertex whose list of aliases contains the value supplied for the $wanted parameter. Each row contains the matching Animal's name in a column named animal_name.
#### Constraints and Rules
• Must be on a property field of list type.
### not_contains
#### Example Use
{
Animal {
alias @filter(op_name: "not_contains", value: ["$wanted"]) name @output(out_name: "animal_name") } } This returns one row for every Animal vertex whose list of aliases does not contain the value supplied for the $wanted parameter. Each row contains the matching Animal's name in a column named animal_name.
#### Constraints and Rules
• Must be on a property field of list type.
### intersects
#### Example Use
{
Animal {
alias @filter(op_name: "intersects", value: ["$wanted"]) name @output(out_name: "animal_name") } } This returns one row for every Animal vertex whose list of aliases has a non-empty intersection with the list of values supplied for the $wanted parameter. Each row contains the matching Animal's name in a column named animal_name.
#### Constraints and Rules
• Must be on a property field of list type.
### has_edge_degree
#### Example Use
{
Animal {
name @output(out_name: "animal_name")
out_Animal_ParentOf @filter(op_name: "has_edge_degree", value: ["$child_count"]) @optional { uuid } } } This returns one row for every Animal vertex that has exactly $child_count children (i.e. where the out_Animal_ParentOf edge appears exactly $child_count times). Each row contains the matching Animal's name, in a column named animal_name. The uuid field within the out_Animal_ParentOf vertex field is added simply to satisfy the GraphQL syntax rule that requires at least one field to exist within any {}. Since this field is not marked with any directive, it has no effect on the query. N.B.: Please note the @optional directive on the vertex field being filtered above. If in your use case you expect to set $child_count to 0, you must also mark that vertex field @optional. Recall that absence of @optional implies that at least one such edge must exist. If the has_edge_degree filter is used with a parameter set to 0, that requires the edge to not exist. Therefore, if the @optional is not present in this situation, no valid result sets can be produced, and the resulting query will return no results.
#### Constraints and Rules
• Must be on a vertex field that is not the root vertex of the query.
• Tagged values are not supported as parameters for this filter.
• If the runtime parameter for this operator can be 0, it is strongly recommended to also apply @optional to the vertex field being filtered (see N.B. above for details).
## Type coercions
Type coercions are operations that create a new scope whose type is different than the type of the enclosing scope of the coercion -- they coerce the enclosing scope into a different type. Type coercions are represented with GraphQL inline fragments.
#### Example Use
{
Species {
name @output(out_name: "species_name")
out_Species_Eats {
... on Food {
name @output(out_name: "food_name")
}
}
}
}
Here, the out_Species_Eats vertex field is of the Union__Food__FoodOrSpecies__Species union type. To proceed with the query, the user must choose which of the types in the Union__Food__FoodOrSpecies__Species union to use. In this example, ... on Food indicates that the Food type was chosen, and any vertices at that scope that are not of type Food are filtered out and discarded.
{
Species {
name @output(out_name: "species_name")
out_Entity_Related {
... on Species {
name @output(out_name: "food_name")
}
}
}
}
In this query, the out_Entity_Related is of Entity type. However, the query only wants to return results where the related entity is a Species, which ... on Species ensures is the case.
## Meta fields
### __typename
The compiler supports the standard GraphQL meta field __typename, which returns the runtime type of the scope where the field is found. Assuming the GraphQL schema matches the database's schema, the runtime type will always be a subtype of (or exactly equal to) the static type of the scope determined by the GraphQL type system. Below, we provide an example query in which the runtime type is a subtype of the static type, but is not equal to it.
The __typename field is treated as a property field of type String, and supports all directives that can be applied to any other property field.
#### Example Use
{
Entity {
__typename @output(out_name: "entity_type")
name @output(out_name: "entity_name")
}
}
This query returns one row for each Entity vertex. The scope in which __typename appears is of static type Entity. However, Animal is a type of Entity, as are Species, Food, and others. Vertices of all subtypes of Entity will therefore be returned, and the entity_type column that outputs the __typename field will show their runtime type: Animal, Species, Food, etc.
### _x_count
The _x_count meta field is a non-standard meta field defined by the GraphQL compiler that makes it possible to interact with the number of elements in a scope marked @fold. By applying directives like @output and @filter to this meta field, queries can output the number of elements captured in the @fold and filter down results to select only those with the desired fold sizes.
We use the _x_ prefix to signify that this is an extension meta field introduced by the compiler, and not part of the canonical set of GraphQL meta fields defined by the GraphQL specification. We do not use the GraphQL standard double-underscore (__) prefix for meta fields, since all names with that prefix are explicitly reserved and prohibited from being used in directives, fields, or any other artifacts.
#### Adding the _x_count meta field to your schema
Since the _x_count meta field is not currently part of the GraphQL standard, it has to be explicitly added to all interfaces and types in your schema. There are two ways to do this.
The preferred way to do this is to use the EXTENDED_META_FIELD_DEFINITIONS constant as a starting point for building your interfaces' and types' field descriptions:
from graphql import GraphQLInt, GraphQLField, GraphQLObjectType, GraphQLString
from graphql_compiler import EXTENDED_META_FIELD_DEFINITIONS
fields = EXTENDED_META_FIELD_DEFINITIONS.copy()
fields.update({
'foo': GraphQLField(GraphQLString),
'bar': GraphQLField(GraphQLInt),
# etc.
})
graphql_type = GraphQLObjectType('MyType', fields)
# etc.
If you are not able to programmatically define the schema, and instead simply have a pre-made GraphQL schema object that you are able to mutate, the alternative approach is via the insert_meta_fields_into_existing_schema() helper function defined by the compiler:
# assuming that existing_schema is your GraphQL schema object
insert_meta_fields_into_existing_schema(existing_schema)
# existing_schema was mutated in-place and all custom meta-fields were added
#### Example Use
{
Animal {
name @output(out_name: "name")
out_Animal_ParentOf @fold {
_x_count @output(out_name: "number_of_children")
name @output(out_name: "child_names")
}
}
}
This query returns one row for each Animal vertex. Each row contains its name, and the number and names of its children. While the output type of the child_names selection is a list of strings, the output type of the number_of_children selection is an integer.
{
Animal {
name @output(out_name: "name")
out_Animal_ParentOf @fold {
_x_count @filter(op_name: ">=", value: ["$min_children"]) @output(out_name: "number_of_children") name @filter(op_name: "has_substring", value: ["$substr"])
@output(out_name: "child_names")
}
}
}
Here, we've modified the above query to add two more filtering constraints to the returned rows:
• child Animal vertices must contain the value of $substr as a substring in their name, and • Animal vertices must have at least $min_children children that satisfy the above filter.
Importantly, any filtering on _x_count is applied after any other filters and type coercions that are present in the @fold in question. This order of operations matters a lot: selecting Animal vertices with 3+ children, then filtering the children based on their names is not the same as filtering the children first, and then selecting Animal vertices that have 3+ children that matched the earlier filter.
#### Constraints and Rules
• The _x_count field is only allowed to appear within a vertex field marked @fold.
• Filtering on _x_count is always applied after any other filters and type coercions present in that @fold.
• Filtering or outputting the value of the _x_count field must always be done at the innermost scope of the @fold. It is invalid to expand vertex fields within a @fold after filtering or outputting the value of the _x_count meta field.
#### How is filtering on _x_count different from @filter with has_edge_degree?
The has_edge_degree filter allows filtering based on the number of edges of a particular type. There are situations in which filtering with has_edge_degree and filtering using = on _x_count produce equivalent queries. Here is one such pair of queries:
{
Species {
name @output(out_name: "name")
in_Animal_OfSpecies @filter(op_name: "has_edge_degree", value: ["$num_animals"]) { uuid } } } and { Species { name @output(out_name: "name") in_Animal_OfSpecies @fold { _x_count @filter(op_name: "=", value: ["$num_animals"])
}
}
}
In both of these queries, we ask for the names of the Species vertices that have precisely $num_animals members. However, we have expressed this question in two different ways: once as a property of the Species vertex ("the degree of the in_Animal_OfSpecies is $num_animals"), and once as a property of the list of Animal vertices produced by the @fold ("the number of elements in the @fold is $num_animals"). When we add additional filtering within the Animal vertices of the in_Animal_OfSpecies vertex field, this distinction becomes very important. Compare the following two queries: { Species { name @output(out_name: "name") in_Animal_OfSpecies @filter(op_name: "has_edge_degree", value: ["$num_animals"]) {
out_Animal_LivesIn {
name @filter(op_name: "=", value: ["$location"]) } } } } versus { Species { name @output(out_name: "name") in_Animal_OfSpecies @fold { out_Animal_LivesIn { _x_count @filter(op_name: "=", value: ["$num_animals"])
name @filter(op_name: "=", value: ["$location"]) } } } } In the first, for the purposes of the has_edge_degree filtering, the location where the animals live is irrelevant: the has_edge_degree only makes sure that the Species vertex has the correct number of edges of type in_Animal_OfSpecies, and that's it. In contrast, the second query ensures that only Species vertices that have $num_animals animals that live in the selected location are returned -- the location matters since the @filter on the _x_count field applies to the number of elements in the @fold scope.
## The GraphQL schema
This section assumes that the reader is familiar with the way schemas work in the reference implementation of GraphQL.
The GraphQL schema used with the compiler must contain the custom directives and custom Date and DateTime scalar types defined by the compiler:
directive @recurse(depth: Int!) on FIELD
directive @filter(value: [String!]!, op_name: String!) on FIELD | INLINE_FRAGMENT
directive @tag(tag_name: String!) on FIELD
directive @output(out_name: String!) on FIELD
directive @output_source on FIELD
directive @optional on FIELD
directive @fold on FIELD
scalar DateTime
scalar Date
If constructing the schema programmatically, one can simply import the the Python object representations of the custom directives and the custom types:
from graphql_compiler import DIRECTIVES # the list of custom directives
from graphql_compiler import GraphQLDate, GraphQLDateTime # the custom types
Since the GraphQL and OrientDB type systems have different rules, there is no one-size-fits-all solution to writing the GraphQL schema for a given database schema. However, the following rules of thumb are useful to keep in mind:
• Generally, represent OrientDB abstract classes as GraphQL interfaces. In GraphQL's type system, GraphQL interfaces cannot inherit from other GraphQL interfaces.
• Generally, represent OrientDB non-abstract classes as GraphQL types, listing the GraphQL interfaces that they implement. In GraphQL's type system, GraphQL types cannot inherit from other GraphQL types.
• Inheritance relationships between two OrientDB non-abstract classes, or between two OrientDB abstract classes, introduce some difficulties in GraphQL. When modelling your data in OrientDB, it's best to avoid such inheritance if possible.
• If it is impossible to avoid having two non-abstract OrientDB classes A and B such that B inherits from A, you have two options:
• You may choose to represent the A OrientDB class as a GraphQL interface, which the GraphQL type corresponding to B can implement. In this case, the GraphQL schema preserves the inheritance relationship between A and B, but sacrifices the representation of any inheritance relationships A may have with any OrientDB superclasses.
• You may choose to represent both A and B as GraphQL types. The tradeoff in this case is exactly the opposite from the previous case: the GraphQL schema sacrifices the inheritance relationship between A and B, but preserves the inheritance relationships of A with its superclasses. In this case, it is recommended to create a GraphQL union type A | B, and to use that GraphQL union type for any vertex fields that in OrientDB would be of type A.
• If it is impossible to avoid having two abstract OrientDB classes A and B such that B inherits from A, you similarly have two options:
• You may choose to represent B as a GraphQL type that can implement the GraphQL interface corresponding to A. This makes the GraphQL schema preserve the inheritance relationship between A and B, but sacrifices the ability for other GraphQL types to inherit from B.
• You may choose to represent both A and B as GraphQL interfaces, sacrificing the schema's representation of the inheritance between A and B, but allowing GraphQL types to inherit from both A and B. If necessary, you can then create a GraphQL union type A | B and use it for any vertex fields that in OrientDB would be of type A.
• It is legal to fully omit classes and fields that are not representable in GraphQL. The compiler currently does not support OrientDB's EmbeddedMap type nor embedded non-primitive typed fields, so such fields can simply be omitted in the GraphQL representation of their classes. Alternatively, the entire OrientDB class and all edges that may point to it may be omitted entirely from the GraphQL schema.
## Execution model
Since the GraphQL compiler can target multiple different query languages, each with its own behaviors and limitations, the execution model must also be defined as a function of the compilation target language. While we strive to minimize the differences between compilation targets, some differences are unavoidable.
The compiler abides by the following principles:
• When the database is queried with a compiled query string, its response must always be in the form of a list of results.
• The precise format of each such result is defined by each compilation target separately.
• gremlin, MATCH and SQL return data in a tabular format, where each result is a row of the table, and fields marked for output are columns.
• However, future compilation targets may have a different format. For example, each result may appear in the nested tree format used by the standard GraphQL specification.
• Each such result must satisfy all directives and types in its corresponding GraphQL query.
• The returned list of results is not guaranteed to be complete!
• In other words, there may have been additional result sets that satisfy all directives and types in the corresponding GraphQL query, but were not returned by the database.
• However, compilation target implementations are encouraged to return complete results if at all practical. The MATCH compilation target is guaranteed to produce complete results.
### Completeness of returned results
To explain the completeness of returned results in more detail, assume the database contains the following example graph:
a ---->_ x
|____ /|
_|_/
/ |____
/ \/
b ----> y
Let a, b, x, y be the values of the name property field of four vertices. Let the vertices named a and b be of type S, and let x and y be of type T. Let vertex a be connected to both x and y via directed edges of type E. Similarly, let vertex b also be connected to both x and y via directed edges of type E.
Consider the GraphQL query:
{
S {
name @output(out_name: "s_name")
out_E {
name @output(out_name: "t_name")
}
}
}
Between the data in the database and the query's structure, it is clear that combining any of a or b with any of x or y would produce a valid result. Therefore, the complete result list, shown here in JSON format, would be:
[
{"s_name": "a", "t_name": "x"},
{"s_name": "a", "t_name": "y"},
{"s_name": "b", "t_name": "x"},
{"s_name": "b", "t_name": "y"},
]
This is precisely what the MATCH compilation target is guaranteed to produce. The remainder of this section is only applicable to the gremlin compilation target. If using MATCH, all of the queries listed in the remainder of this section will produce the same, complete result list.
Since the gremlin compilation target does not guarantee a complete result list, querying the database using a query string generated by the gremlin compilation target will produce only a partial result list resembling the following:
[
{"s_name": "a", "t_name": "x"},
{"s_name": "b", "t_name": "x"},
]
Due to limitations in the underlying query language, gremlin will by default produce at most one result for each of the starting locations in the query. The above GraphQL query started at the type S, so each s_name in the returned result list is therefore distinct. Furthermore, there is no guarantee (and no way to know ahead of time) whether x or y will be returned as the t_name value in each result, as they are both valid results.
Users may apply the @output_source directive on the last scope of the query to alter this behavior:
{
S {
name @output(out_name: "s_name")
out_E @output_source {
name @output(out_name: "t_name")
}
}
}
Rather than producing at most one result for each S, the query will now produce at most one result for each distinct value that can be found at out_E, where the directive is applied:
[
{"s_name": "a", "t_name": "x"},
{"s_name": "a", "t_name": "y"},
]
Conceptually, applying the @output_source directive makes it as if the query were written in the opposite order:
{
T {
name @output(out_name: "t_name")
in_E {
name @output(out_name: "s_name")
}
}
}
## SQL
The following table outlines GraphQL compiler features, and their support (if any) by various relational database flavors:
Feature/Dialect Required Edges @filter @output @recurse @fold @optional @output_source
PostgreSQL No Limited, intersects, has_edge_degree, and name_or_alias filter unsupported Limited, __typename output metafield unsupported No No No No
SQLite No Limited, intersects, has_edge_degree, and name_or_alias filter unsupported Limited, __typename output metafield unsupported No No No No
Microsoft SQL Server No Limited, intersects, has_edge_degree, and name_or_alias filter unsupported Limited, __typename output metafield unsupported No No No No
MySQL No Limited, intersects, has_edge_degree, and name_or_alias filter unsupported Limited, __typename output metafield unsupported No No No No
MariaDB No Limited, intersects, has_edge_degree, and name_or_alias filter unsupported Limited, __typename output metafield unsupported No No No No
### Configuring SQLAlchemy
Relational databases are supported by compiling to SQLAlchemy core as an intermediate language, and then relying on SQLAlchemy's compilation of the dialect specific SQL string to query the target database.
For the SQL backend, GraphQL types are assumed to have a SQL table of the same name, and with the same properties. For example, a schema type
type Animal {
name: String
}
is expected to correspond to a SQLAlchemy table object of the same name, case insensitive. For this schema type this could look like:
from sqlalchemy import MetaData, Table, Column, String
# table for GraphQL type Animal
animal_table = Table(
'animal', # name of table matches type name from schema
Column('name', String(length=12)), # Animal.name GraphQL field has corresponding 'name' column
)
If a table of the schema type name does not exist, an exception will be raised at compile time. See Configuring the SQL Database to Match the GraphQL Schema for a possible option to resolve such naming discrepancies.
### End-To-End SQL Example
An end-to-end example including relevant GraphQL schema and SQLAlchemy engine preparation follows.
This is intended to show the setup steps for the SQL backend of the GraphQL compiler, and does not represent best practices for configuring and running SQLAlchemy in a production system.
from graphql import parse
from graphql.utils.build_ast_schema import build_ast_schema
from sqlalchemy import MetaData, Table, Column, String, create_engine
from graphql_compiler import graphql_to_sql
# Step 1: Configure a GraphQL schema (note that this can also be done programmatically)
schema_text = '''
schema {
query: RootSchemaQuery
}
# IMPORTANT NOTE: all compiler directives are expected here, but not shown to keep the example brief
directive @filter(op_name: String!, value: [String!]!) on FIELD | INLINE_FRAGMENT
# < more directives here, see the GraphQL schema section of this README for more details. >
directive @output(out_name: String!) on FIELD
type Animal {
name: String
}
'''
schema = build_ast_schema(parse(schema_text))
# Step 2: For all GraphQL types, bind all corresponding SQLAlchemy Tables to a single SQLAlchemy
# metadata instance, using the expected naming detailed above.
# See https://docs.sqlalchemy.org/en/latest/core/metadata.html for more details on this step.
animal_table = Table(
'animal', # name of table matches type name from schema
# Animal.name schema field has corresponding 'name' column in animal table
Column('name', String(length=12)),
)
# Step 3: Prepare a SQLAlchemy engine to query the target relational database.
# See https://docs.sqlalchemy.org/en/latest/core/engines.html for more detail on this step.
engine = create_engine('<connection string>')
# Step 4: Wrap the SQLAlchemy metadata and dialect as a SqlMetadata GraphQL compiler object
# Step 5: Prepare and compile a GraphQL query against the schema
graphql_query = '''
{
Animal {
name @output(out_name: "animal_name")
@filter(op_name: "in_collection", value: ["$names"]) } } ''' parameters = { 'names': ['animal name 1', 'animal name 2'], } compilation_result = graphql_to_sql(schema, graphql_query, parameters, sql_metadata) # Step 6: Execute compiled query against a SQLAlchemy engine/connection. # See https://docs.sqlalchemy.org/en/latest/core/connections.html for more details. query = compilation_result.query query_results = [dict(result_proxy) for result_proxy in engine.execute(query)] ### Configuring the SQL Database to Match the GraphQL Schema For simplicity, the SQL backend expects an exact match between SQLAlchemy Tables and GraphQL types, and between SQLAlchemy Columns and GraphQL fields. What if the table name or column name in the database doesn't conform to these rules? Eventually the plan is to make this aspect of the SQL backend more configurable. In the near-term, a possible way to address this is by using SQL views. For example, suppose there is a table in the database called animal_table and it has a column called animal_name. If the desired schema has type type Animal { name: String } Then this could be exposed via a view like: CREATE VIEW animal AS SELECT animal_name AS name FROM animal_table At this point, the animal view can be used in the SQLAlchemy Table for the purposes of compiling. ## Miscellaneous ### Pretty-Printing GraphQL Queries To pretty-print GraphQL queries, use the included pretty-printer: python -m graphql_compiler.tool <input_file.graphql >output_file.graphql It's modeled after Python's json.tool, reading from stdin and writing to stdout. ### Expanding @optional vertex fields Including an optional statement in GraphQL has no performance issues on its own, but if you continue expanding vertex fields within an optional scope, there may be significant performance implications. Going forward, we will refer to two different kinds of @optional directives. • A "simple" optional is a vertex with an @optional directive that does not expand any vertex fields within it. For example: { Animal { name @output(out_name: "name") in_Animal_ParentOf @optional { name @output(out_name: "parent_name") } } } OrientDB MATCH currently allows the last step in any traversal to be optional. Therefore, the equivalent MATCH traversal for the above GraphQL is as follows: SELECT Animal___1.name as name, Animal__in_Animal_ParentOf___1.name as parent_name FROM ( MATCH { class: Animal, as: Animal___1 }.in('Animal_ParentOf') { as: Animal__in_Animal_ParentOf___1 } RETURN$matches
)
• A "compound" optional is a vertex with an @optional directive which does expand vertex fields within it. For example:
{
Animal {
name @output(out_name: "name")
in_Animal_ParentOf @optional {
name @output(out_name: "parent_name")
in_Animal_ParentOf {
name @output(out_name: "grandparent_name")
}
}
}
}
Currently, this cannot represented by a simple MATCH query. Specifically, the following is NOT a valid MATCH statement, because the optional traversal follows another edge:
-- NOT A VALID QUERY
SELECT
Animal___1.name as name,
Animal__in_Animal_ParentOf___1.name as parent_name
FROM (
MATCH {
class: Animal,
as: Animal___1
}.in('Animal_ParentOf') {
optional: true,
as: Animal__in_Animal_ParentOf___1
}.in('Animal_ParentOf') {
as: Animal__in_Animal_ParentOf__in_Animal_ParentOf___1
}
RETURN $matches ) Instead, we represent a compound optional by taking an union (UNIONALL) of two distinct MATCH queries. For instance, the GraphQL query above can be represented as follows: SELECT EXPAND($final_match)
LET
$match1 = ( SELECT Animal___1.name AS name FROM ( MATCH { class: Animal, as: Animal___1, where: ( (in_Animal_ParentOf IS null) OR (in_Animal_ParentOf.size() = 0) ), } ) ),$match2 = (
SELECT
Animal___1.name AS name,
Animal__in_Animal_ParentOf___1.name AS parent_name
FROM (
MATCH {
class: Animal,
as: Animal___1
}.in('Animal_ParentOf') {
as: Animal__in_Animal_ParentOf___1
}.in('Animal_ParentOf') {
as: Animal__in_Animal_ParentOf__in_Animal_ParentOf___1
}
)
),
$final_match = UNIONALL($match1, $match2) In the first case where the optional edge is not followed, we have to explicitly filter out all vertices where the edge could have been followed. This is to eliminate duplicates between the two MATCH selections. The previous example is not exactly how we implement compound optionals (we also have SELECT statements within $match1 and \$match2), but it illustrates the the general idea.
#### Performance Penalty
If we have many compound optionals in the given GraphQL, the above procedure results in the union of a large number of MATCH queries. Specifically, for n compound optionals, we generate 2n different MATCH queries. For each of the 2n subsets S of the n optional edges:
• We remove the @optional restriction for each traversal in S.
• For each traverse t in the complement of S, we entirely discard t along with all the vertices and directives within it, and we add a filter on the previous traverse to ensure that the edge corresponding to t does not exist.
Therefore, we get a performance penalty that grows exponentially with the number of compound optional edges. This is important to keep in mind when writing queries with many optional directives.
If some of those compound optionals contain @optional vertex fields of their own, the performance penalty grows since we have to account for all possible subsets of @optional statements that can be satisfied simultaneously.
### Optional type_equivalence_hints parameter
This compilation parameter is a workaround for the limitations of the GraphQL and Gremlin type systems:
• GraphQL does not allow type to inherit from another type, only to implement an interface.
• Gremlin does not have first-class support for inheritance at all.
Assume the following GraphQL schema:
type Animal {
name: String
}
type Cat {
name: String
}
type Dog {
name: String
}
union AnimalCatDog = Animal | Cat | Dog
type Foo {
}
An appropriate type_equivalence_hints value here would be { Animal: AnimalCatDog }. This lets the compiler know that the AnimalCatDog union type is implicitly equivalent to the Animal type, as there are no other types that inherit from Animal in the database schema. This allows the compiler to perform accurate type coercions in Gremlin, as well as optimize away type coercions across edges of union type if the coercion is coercing to the union's equivalent type.
Setting type_equivalence_hints = { Animal: AnimalCatDog } during compilation would enable the use of a @fold on the adjacent_animal vertex field of Foo:
{
Foo {
... on Animal {
name @output(out_name: "name")
}
}
}
}
### SchemaGraph
When building a GraphQL schema from the database metadata, we first build a SchemaGraph from the metadata and then, from the SchemaGraph, build the GraphQL schema. The SchemaGraph is also a representation of the underlying database schema, but it has three main advantages that make it a more powerful schema introspection tool:
1. It's able to store and expose a schema's index information. The interface for accessing index information is provisional though and might change in the near future.
2. Its classes are allowed to inherit from non-abstract classes.
3. It exposes many utility functions, such as get_subclass_set, that make it easier to explore the schema.
See below for a mock example of how to build and use the SchemaGraph:
from graphql_compiler.schema_generation.orientdb.schema_graph_builder import (
get_orientdb_schema_graph
)
from graphql_compiler.schema_generation.orientdb.utils import (
ORIENTDB_INDEX_RECORDS_QUERY, ORIENTDB_SCHEMA_RECORDS_QUERY
)
# Get schema metadata from hypothetical Animals database.
client = your_function_that_returns_an_orientdb_client()
schema_records = client.command(ORIENTDB_SCHEMA_RECORDS_QUERY)
schema_data = [record.oRecordData for record in schema_records]
# Get index data.
index_records = client.command(ORIENTDB_INDEX_RECORDS_QUERY)
index_query_data = [record.oRecordData for record in index_records]
# Build SchemaGraph.
schema_graph = get_orientdb_schema_graph(schema_data, index_query_data)
# Get all the subclasses of a class.
print(schema_graph.get_subclass_set('Animal'))
# {'Animal', 'Dog'}
# Get all the outgoing edge classes of a vertex class.
print(schema_graph.get_vertex_schema_element_or_raise('Animal').out_connections)
# {'Animal_Eats', 'Animal_FedAt', 'Animal_LivesIn'}
# Get the vertex classes allowed as the destination vertex of an edge class.
print(schema_graph.get_edge_schema_element_or_raise('Animal_Eats').out_connections)
# {'Fruit', 'Food'}
# Get the superclass of all classes allowed as the destination vertex of an edge class.
print(schema_graph.get_edge_schema_element_or_raise('Animal_Eats').base_out_connection)
# Food
# Get the unique indexes defined on a class.
print(schema_graph.get_unique_indexes_for_class('Animal'))
# [IndexDefinition(name='uuid', 'base_classname'='Animal', fields={'uuid'}, unique=True, ordered=False, ignore_nulls=False)]
In the future, we plan to add SchemaGraph generation from SQLAlchemy metadata. We also plan to add a mechanism where one can query a SchemaGraph using GraphQL queries.
### Cypher query parameters
RedisGraph doesn't support query parameters, so we perform manual parameter interpolation in the graphql_to_redisgraph_cypher function. However, for Neo4j, we can use Neo4j's client to do parameter interpolation on its own so that we don't reinvent the wheel.
The function insert_arguments_into_query does so based on the query language, which isn't fine-grained enough here-- for Cypher backends, we only want to insert parameters if the backend is RedisGraph, but not if it's Neo4j.
Instead, the correct approach for Neo4j Cypher is as follows, given a Neo4j Python client called neo4j_client:
compilation_result = compile_graphql_to_cypher(
schema, graphql_query, type_equivalence_hints=type_equivalence_hints)
with neo4j_client.driver.session() as session:
result = session.run(compilation_result.query, parameters)
## Amending Parsed Custom Scalar Types
Information about the description, serialization and parsing of custom scalar type objects is lost when a GraphQL schema is parsed from a string. This causes issues when working with custom scalar type objects. In order to avoid these issues, one can use the code snippet below to amend the definitions of the custom scalar types used by the compiler.
from graphql_compiler.schema import CUSTOM_SCALAR_TYPES
from graphql_compiler.schema_generation.utils import amend_custom_scalar_types
amend_custom_scalar_types(your_schema, CUSTOM_SCALAR_TYPES)
## FAQ
Q: Do you really use GraphQL, or do you just use GraphQL-like syntax?
A: We really use GraphQL. Any query that the compiler will accept is entirely valid GraphQL, and we actually use the Python port of the GraphQL core library for parsing and type checking. However, since the database queries produced by compiling GraphQL are subject to the limitations of the database system they run on, our execution model is somewhat different compared to the one described in the standard GraphQL specification. See the Execution model section for more details.
Q: Does this project come with a GraphQL server implementation?
A: No -- there are many existing frameworks for running a web server. We simply built a tool that takes GraphQL query strings (and their parameters) and returns a query string you can use with your database. The compiler does not execute the query string against the database, nor does it deserialize the results. Therefore, it is agnostic to the choice of server framework and database client library used.
Q: Do you plan to support other databases / more GraphQL features in the future?
A: We'd love to, and we could really use your help! Please consider contributing to this project by opening issues, opening pull requests, or participating in discussions.
Q: I think I found a bug, what do I do?
A: Please check if an issue has already been created for the bug, and open a new one if not. Make sure to describe the bug in as much detail as possible, including any stack traces or error messages you may have seen, which database you're using, and what query you compiled.
Q: I think I found a security vulnerability, what do I do?
A: Please reach out to us at graphql-compiler-maintainer@kensho.com so we can triage the issue and take appropriate action.
Licensed under the Apache 2.0 License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Copyright 2017-present Kensho Technologies, LLC. The present date is determined by the timestamp of the most recent commit in the repository.
## Project details
Uploaded source
Uploaded py2 py3
|
2022-11-29 17:41:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17787998914718628, "perplexity": 5280.732678769557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00671.warc.gz"}
|
https://gaseri.org/en/tutorials/gromacs/5-umbrella/
|
# GROMACS Tutorial 5 -- Methane-Methane PMF from Window Sampling
In this tutorial we'll be using window sampling, also sometimes known as umbrella sampling, to extract the methane-methane potential of mean force. In tutorial 3 we got the PMF the direct way by simulating several methanes and getting the radial distribution function. It's not always possible to sample several solutes like this.
We are going to take two methanes and restrain them at different distances using a harmonic potential. The different distances are "windows". In a regular simulation, like the one we did in tutorial 3, some of these windows are rarely sampled. Umbrella sampling solves this by forcing the molecules to stay within a certain range of a set distance.
At the end, we'll use the GROMACS implementation of weighted histogram analysis (WHAM) to reconstruct the PMF. An article on the Alchemistry wiki discusses WHAM in the context of alchemical changes. Here our reaction coordinate is the distance between the two methanes.
## Setup
### Create box
Once again we'll be reusing methane.pdb and topol.top from our previous tutorials. Insert two methanes into a box using gmx insert-molecules and then solvate the box using gmx solvate. The box needs to be cubic and at least 3.1 nm in each direction for this tutorial.
### Create index file
We also need to create an index file with the two groups we are interested in restraining with our umbrella potential. Create an index file using gmx index creating a group containing just one carbon from one of the methanes and naming them CA and CB respectively.
$gmx make_ndx -f conf.gro Then, assuming the residue CH4 is in group 2: > splitres 2 Now I assume the last two groups are 6 and 7 which are the two methane molecules: > 6 & a C > 7 & a C Now name the groups: > name 8 CA > name 9 CB > q There are other ways to get to the same place with gmx make_ndx. The point is, you need to get each methane carbon in its own index group and name them CA and CB. ### Parameter files We're pretty much reusing the parameter files from the first few tutorials, except we'll be adding a section on center-of-mass (COM) pulling. The pull code is how we'll keep our methanes a specified distance apart. There are probably a few different ways to set this up, but for this system, we'll manually specify each distance we want for the two methanes. The parameter files for each step are found here. Just like in the free energy tutorial, these files are templates with a keyword that will be replaced in a bash script. This is because we have to run a full set of simulations for each window and need to specify the distance in each one. Here's an explanation of the new parameters that are used in each file: parameter value explanation pull yes Use the pull code. pull-ngroups 2 We have two groups that we're pulling. pull-group1-name CA We specified this in the index file. For us, this will be the carbon of one of the methanes, although we probably could have chosen the entire methane. If we did that it would have been pulled along the COM of the entire molecule. pull-group2-name CB The carbon of the other methane. pull-ncoords 1 We are pulling along only one coordinate. pull-coord1-geometry distance We're going to pull along the vector connecting our two groups. pull-coord1-type umbrella Use an umbrella (harmonic) potential for this coordinate. pull-coord1-groups 1 2 For this pull coordinate these are the two groups (defined below) that will be pulled. You can actually have more thane one pull coordinate and so do pulling across different sets of molecules, but that's not applicable here. pull-coord1-k 5000.0 The force constant used in the umbrella potential in kJ/(mol nm). pull-coord1-init WINDOW This is the distance we want our two groups to be apart. I've put this keyword here that I'll replace in our bash script for each window. pull-coord1-rate 0.0 We don't want the groups to move along the coordinate any, so this is 0. pull-coord1-start no We're manually specifying the distance for each window, so we do not want to add the center of mass distance to the calculation. The parameter files are set up for a 100 ps NVT equilibration, then a 1 ns NPT equilibration, and lastly a 5 ns production run. We are planning on the methanes getting to the correct distances when the umbrella potential is applied during the equilibrations. For some other systems, you may have to be more methodical in how you generate your initial configurations for each window. ## Simulation For the simulations, we're going to use a bash script to replace the WINDOW keyword in our mdp files, very similar to what we did in the free energy simulation. Here is the script: #!/bin/bash set -e for ((i = 0 ; i < 27 ; i++)); do x=$(echo "0.05*$(($i+1))" | bc);
sed 's/WINDOW/'$x'/g' mdp/min.mdp > grompp.mdp gmx grompp -o min.$i.tpr -pp min.$i.top -po min.$i.mdp -n index.ndx
gmx mdrun -s min.$i.tpr -o min.$i.trr -x min.$i.xtc -c min.$i.gro -e min.$i.edr -g min.$i.log -pf pullf-min.$i -px pullx-min.$i
sed 's/WINDOW/'$x'/g' mdp/min2.mdp > grompp.mdp gmx grompp -o min2.$i.tpr -c min.$i.gro -pp min2.$i.top -po min2.$i.mdp -maxwarn 1 -n index.ndx gmx mdrun -s min2.$i.tpr -o min2.$i.trr -x min2.$i.xtc -c min2.$i.gro -e min2.$i.edr -g min2.$i.log -pf pullf-min2.$i -px pullx-min2.$i sed 's/WINDOW/'$x'/g' mdp/eql.mdp > grompp.mdp
gmx grompp -o eql.$i.tpr -c min2.$i.gro -pp eql.$i.top -po eql.$i.mdp -n index.ndx
gmx mdrun -s eql.$i.tpr -o eql.$i.trr -x eql.$i.xtc -c eql.$i.gro -e eql.$i.edr -g eql.$i.log -pf pullf-eql.$i -px pullx-eql.$i
sed 's/WINDOW/'$x'/g' mdp/eql2.mdp > grompp.mdp gmx grompp -o eql2.$i.tpr -c eql.$i.gro -pp eql2.$i.top -po eql2.$i.mdp -n index.ndx gmx mdrun -s eql2.$i.tpr -o eql2.$i.trr -x eql2.$i.xtc -c eql2.$i.gro -e eql2.$i.edr -g eql2.$i.log -pf pullf-eql2.$i -px pullx-eql2.$i sed 's/WINDOW/'$x'/g' mdp/prd.mdp > grompp.mdp
gmx grompp -o prd.$i.tpr -c eql2.$i.gro -pp prd.$i.top -po prd.$i.mdp -n index.ndx
gmx mdrun -s prd.$i.tpr -o prd.$i.trr -x prd.$i.xtc -c prd.$i.gro -e prd.$i.edr -g prd.$i.log -pf pullf-prd.$i -px pullx-prd.$i
done
We are simulating 26 windows from 0.05 to around 1.3 nm in distance. Notice that I've added -pf and -px flags for the pulling force and pulling distance for each step. This is because with -deffnm GROMACS will try to write both to the same file. Also, I've specified the index file with -n since gmx grompp needs to get the groups we specified with the pull parameters. Note that I am using a little trick with bc in order to do math with floating-point numbers in bash.
## Analysis
We're going to use gmx wham to get the PMF. The program takes a file containing a list of the .tpr files and another file containing a list of the .xvg files containing the force as arguments.
To create these two files do:
$ls prd.*.tpr > tpr.dat$ ls pullf-prd.*.xvg > pullf.dat
Then you can run gmx wham:
$gmx wham -it tpr.dat -f pullf.dat After running gmx wham you'll get the potential of mean force in a file named profile.xvg. If you were to plot this right away, it should look like this: We would expect the interaction to go to zero at longer distances. Because we used a 3-dimensional biasing potential, however, we need to include a correction. Imagine one of the methanes as the reference point. The other methane is allowed to sample all around that point at distance r, covering the surface of some sphere with radius r. This adds extra configurational space to our sampling, decreasing the entropy. This extra entropic contribution to our PMF needs to be removed. Recall that the Gibbs free energy in the isothermal isobaric ensemble is -kTln(W) where W is the partition function. In the case of our methane dancing around the surface of a sphere, W is proportional to the surface area of that sphere. So, a correction of 2kTln(r) needs to be added. Additionally, we need to shift the plot up such that its tail goes to zero. I found adding about 77 worked for my particular system, but yours may be different. To plot this in gnuplot do the following in a gnuplot terminal: > plot 'profile.xvg' u 1:($2+2*8.314e-3*298.15*log(\$1)+77) w l
Your PMF should now look like this:
Comparing this with the PMF from tutorial 3 we can see that they are nearly identical:
One difference is that with the direct method we never get near as close as with window sampling. Two methanes will not just naturally want to be near each other, which is why we have to add the umbrella potentials to keep them there.
The other output is histo.xvg which is helpful in determining if there is enough overlap between windows. Here is a plot of each histogram for this simulation:
Clearly, our windows are overlapping sufficiently. If they were not, we might have to choose a smaller window size or pick specific spots that were missing to simulate.
## Summary
In this tutorial, we used GROMACS COM pull code to do window sampling on two methanes in water. From there we used gmx wham to extract the potential of mean force.
Author: Wes Barnett, Vedran Miletić
|
2023-01-27 15:49:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3102991580963135, "perplexity": 5909.688538577158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00194.warc.gz"}
|
http://codeforces.com/blog/entry/70915?locale=en
|
### MikeMirzayanov's blog
By MikeMirzayanov, 11 months ago, translation,
Hello!
ICPC Southern and Volga Russian Regional Contest (Northern Eurasia) 2019 has ended on October 15. There were 76 teams onsite in Saratov, most of them were invited because of their result on the qualification stage.
In this contest I play a role of Cheif Judge and the jury teams consists of ex-participants of ICPC from Saratov and jury members from other cities. Many thanks to all of them! I hope you will like the problems!
I invite ICPC teams and individual participants of Codeforces competitions to take part! Sure, the contest will be unrated.
• +163
» 11 months ago, # | -66 is this rated for div3?
• » » 11 months ago, # ^ | +12 "Sure, the contest will be unrated."
» 11 months ago, # | -9 Thank you for your efforts. Good luck to everyone!
» 11 months ago, # | 0 how to solve E?
• » » 11 months ago, # ^ | ← Rev. 2 → +22 It's kind of 2-SAT.Let's iterate over all pairs of necklaces. For each pair $p$ and $q$, look at the similarities of $p, q$ and $p, rev(q)$. If this pair can only exist when both are in the same direction, draw a white edge between them; if this pair can only exist when one of them is reversed, draw a black edge between them; if this pair can't exist either way, report a failure. After this reduction it is fairly similar to e.g. 553C - Любовные треугольники.We want to split the resulting graph into two parts $S$ and $T$, such that there are only black edges between $S$ and $T$, and only white edges within $S$ and $T$; also, we want to minimize the size of $T$ (i.e. the number of reversed necklaces). First check if there is a cycle with odd number of black edges, if there is, report a failure. Otherwise, we have a bunch of "bipartite" connected components. Put the smaller half of each component into $T$ and we have our answer.
• » » 11 months ago, # ^ | +22 E had very odd bounds as it can be solved in quadratic time.For every pair of necklaces, we either get that Neither or both necklaces must be flipped (xor must be 0) Exactly one of the necklaces must be flipped (xor must be 1) The pair of necklaces imposes no conditions No amount of flipping makes them similar If 4 happens even once, answer impossible. Now knowing the orientation of one necklace in a connected component (with conditions as edges) determines the orientations of the rest. Just take the orientation that minimises flipped necklaces.
• » » » 11 months ago, # ^ | +7 Isn't it still O(N^2*M) for constructing the graph? For every pair of strings you need to iterate over them to count the similarity. After that it's just the coloring described by you. Is there a way of making the graph in a better complexity?
• » » » » 11 months ago, # ^ | +4 You can compare the pairs of strings by simply taking their xor. If you store them as bitmasks, the comparison takes, in some sense, $O(1)$ time. I assume this is what he means.
• » » » » » 11 months ago, # ^ | 0 Thanks! I figured it out eventually. I find the bounds kind of odd now too.
• » » 11 months ago, # ^ | 0 oh I'll check it. Thank you so much -is-this-fft- mango_lassi
» 11 months ago, # | -6 How to approach B?
• » » 11 months ago, # ^ | +14 you need at least k/2 buses and at most k buses since k<=8000 iterate on each k to find the optimal one
• » » » 11 months ago, # ^ | ← Rev. 2 → +3 Only elaborating this approach, since I find the problem quite interesting. Suppose we have the counts of all teams in a non-descending/sorted order. An important observation is that in an optimal solution, the teams that go alone in the bus forms a suffix. This can be proven with an exchange argument. Now, what is the best way to pair up teams in a prefix? We want to pair up teams so that the maximum sum of counts is minimised. Consider the largest team in the prefix. It is best to pair it up with the smallest team (again, can prove with an exchange argument). So we remove the first and last of the prefix and repeat the procedure. Finally, we iterate on each prefix of even length (say $2i$) — obtain the maximum of the remaining suffix, and obtain the maximum sum of pairs of elements that are equally distant from the beginning and end of the prefix. Here, $s$ is the maximum of the two, and $r$ is $i + (n - 2i)$. And for all such $i$, we take the minimum of $s\cdot r$.
• » » » » 11 months ago, # ^ | 0 Can you provide a link to the code? Thanks for the help.
• » » » » » 11 months ago, # ^ | 0
» 11 months ago, # | -16 What is test case 2 in J ?
• » » 11 months ago, # ^ | 0 there were a lot of test cases
• » » » 11 months ago, # ^ | 0 Thanks, test cases are visible now
» 11 months ago, # | ← Rev. 2 → -21 -
» 11 months ago, # | 0 Can anyone help me in explaining how to solve problem N (Wires) ? My solution is failing for test-case 7 :( Not able to figure it out
• » » 11 months ago, # ^ | +1 You delete a bridge.
• » » » 11 months ago, # ^ | 0 I thought that I am already taking care of that. Looks like I am having a big or something. Thanks for that :)
• » » 11 months ago, # ^ | +21 Iterate over components, run dfs.Then you can always delete the last vertex in dfs (Try to think why it is true). So connect any vertex from next component with parent of last vertex.You don't need to find bridges or something like this.
• » » » 11 months ago, # ^ | 0 Could you hack my solution?I add all the edges to connect the point, which is relabeled,using dsu and keep the sum of edge for each point.I check all the edges,if its Unicom block haven't been visit and one of its end is size of 1,I push it and its size-one point to a stack.If there is still some Unicom blocks haven't been visit, which means it's a circle.I got any of a edge with one of its point.I connect the 2~n element in the stack to element 1.I also WA on test 7.I have test many case and submit 10 times.I wonder whether my solution bugged or the SPJ bugged.Thanks a lot.
• » » » 11 months ago, # ^ | ← Rev. 3 → 0 oh yeah! I am using algorithm to find all articulation bridges and made it complex. I see why choosing last node of the simple DFS works. I ll keep that in mind. Thanks for that :)Edit: Removing the code to find articulation bridges and just using last edge of DFS gave me an AC :)
• » » » » 10 months ago, # ^ | 0 Hi dude, can you explain me why your greedy approach is correct?
• » » » » » 10 months ago, # ^ | 0 Never mind, I just realized that!
» 11 months ago, # | 0 According to the contests tab, it is "open hacking phase" now? Is there any actual hacking that can be done now? I.e. is there a reasonable chance that the test set (which was probably the same at the subregional) is not complete?
» 11 months ago, # | +40 Should we wait for editorial or not?
» 11 months ago, # | +4 How to solve problem C?
• » » 11 months ago, # ^ | ← Rev. 3 → +1 For C, What I did was, start by sorting queries by their r value. then i created a segment tree which tells the maximum possible answer with some l value for fixed r. segment tree was range update and point maximum query segment tree which also tells the index of maximum. now the main problem was how to store answer in segment tree so for fixed right index =x the index x would have -k + sum of segments which lie in (x-1,x) and index x-1 would have -2*k + sum of segments which lie in(x-2,x) and so on. we can do this possible by iterating over r and updating the range (1,r) with -k everytime.Heres my code- https://www.ideone.com/g2j16q
• » » 11 months ago, # ^ | 0 https://pastebin.com/UNyjNnuuHere hope this helps
• » » 11 months ago, # ^ | -8 Key observation is that to reach row_index rb from ra all the elements lying between will be of same type(if element at ra is even then all elements will be even and vice versa)..same thing applies for column too... I am limited by my formatting skills otherwise I could have explained better
• » » » 11 months ago, # ^ | 0 I think you posted this in the wrong announcement.
• » » » » 11 months ago, # ^ | 0 Oops...nvm thanks for pointing it out
» 11 months ago, # | 0 How to solve problem N:Wires?
• » » 11 months ago, # ^ | +1 If the number of connected components in the graph is $c$, then the answer is $c-1$.Just connect all the other components to the first component. To join the component:1) If it is a tree, change any leaf-vertex's edge2) otherwise, change any edge that belongs to a cycle
• » » » 11 months ago, # ^ | 0 I was also trying to do this....but I couldn't hand the case31 23 45 6My code was connecting 1 with 3 or 4 which resulted in 2 becoming isolated.. How can I handle this situation
• » » » » 11 months ago, # ^ | 0 Vertices can be isolated.We just need to make sure the edges are connected.
• » » » » » 11 months ago, # ^ | ← Rev. 3 → 0 Oww!!!!of course...I guess I was overthinking after getting WA on test case 6...I was thinking about the vertices being isolated [Even sample case does that :|]....
• » » » 11 months ago, # ^ | ← Rev. 2 → 0 UPD : Got it!! I wasn't considering multiple edges between two nodesIf the component is not a tree then if I change either any leaf node edge or cycle node edge, then will there be any problem63618975 Here I am trying to do that , but getting WA
» 11 months ago, # | +24 How to solve D, I, M?
• » » 11 months ago, # ^ | ← Rev. 2 → +8 I: my solution was the weirdest solution I've ever came up withFirst to make it simpler we'll take the complement of the set, so we'll order by lesser size and greater sum and the restriction goes from SET <= K to SUM — SET >= K.Now, let's try and get TLE with a bruteforce solution like bf(i, remaining that need to be taken, current sum) + ordering the values before bruteforce.Here comes the trick, if we compress a tree to not have any internal nodes with degree 2 then it has O(N) leafs and O(N) nodes, using this fact we can compress the segments that we don't have a choice but to choose the values by using a binary search to remove that segment.This isn't enough since it might have values mixed in that are smaller than what should be the last value, so we do a binary search to find the largest value that doesn't use all M things and then do a special iteration to get some value that's exactly what the last should be.Complexity is something like O(N * logN * logV) or something like that lol.
• » » » 11 months ago, # ^ | ← Rev. 4 → +23 Alternative solution for I:Let's suppose we fix $K$ as the number of people in the subset $S$ and the people are sorted in increasing order of awkwardness. Obviously, the first set that we should output is $v_1, ..., v_K$. The main idea is how to transition from this set to potentially other sets while iterating through them in order of total awkwardness.It turns out that we can do that by two operations. Consider a pointer that initially points to $K$ and represents to the position of the $K$-th person in the set. We now consider two operations: - increase the position of the pointer by $1$ - fix the position of the pointer and grab a pointer to the previous person in the set.Basically, for set $1, 3, 5$ we can do these operations: $1, 2, 3 \rightarrow 1, 2, 4 \rightarrow 1, 2, 5 \rightarrow 1, 3, 5$.Essentially, we start increasing the second pointer only when we fixed the value for the first pointer.Two aspects here are essential: that the sequence of operations is unique for a given target set (we don't end up at a target set in two different ways), and that the intermediary sets are of increasing total awkwardness (the cost of transitioning is always positive). Therefore, the state space with these two operations is actually a rooted tree, and edges are always of positive weight. In fact, all edge costs are of form $v_{i + 1} - v_i$. That means that we can traverse the tree in order by simulating Dijkstra algorithm on this search tree.In order to reach the good complexity, we need to keep for each node in the tree a state of form: (maximum remaining sum, where current pointer points to, where is the fixed element to its right, how many elements are before it). Then we can transition from a state $(s, pos, npos, idx)$ to $(s - v_{pos+1} + v_{pos}, pos + 1, npos, idx)$ if $pos + 1 < npos$ (incrementing) or $(s - v_{idx + 1} + v_{idx}, idx, pos, idx - 1)$ if $idx > 0$ (fixing + incrementing). Each state determines the values $ptr_0 = 0, ptr_1 = 1, ..., ptr_{idx - 1} = idx - 1, ptr_{idx} = pos$, and the positions greater than $idx$ are determined by its path in the search tree.The rest is just reconstruction bonanza.Total complexity is $O(k \log{k})$.SolutionHow to solve D?
• » » » » 11 months ago, # ^ | 0 Thank you but the solution can't be viewed.
• » » » » » 11 months ago, # ^ | 0 Fixed.
• » » » » 11 months ago, # ^ | +47 solution of D:First we can imagine all the persons as 2-dimensional point $(L,R)$ with color $c$. If a person who located at $(x,y)$ is unhappy, then all person who located in the upper-left of $(y,x)$ should be in the same color(think about when two intervals don't intersect). So you can enumerate all the persons and see:if he is unhappy, what the color will be in the upper-left rectangle or we don't know the color yet.we change the problem to this:give some points with color(or don't have any color yet),and some upper-left rectangles with color(or don't have any color yet). you should choose rectangles as many as possible and make sure: 1.give colors to all rectangles and points which don't have colors yet. 2.all points in a chosen rectangle should have the same color with the rectangle.we can run a dp in $O(n^2k)$. for the first step,sort all rectangles according to its right boundary(from right to left).then define $dp[i][j][k]$ as "after considering the first $i$ rectangles, all useful points whose y-coordinate is larger than $j$ are already colored the $k-th$ color,how many rectangles you can choose at most?". some explanation about it: 1.after discretization, there are only $O(n)$ y-coordinates; 2.definition of useful points:you can imagine all the chosen rectangles, the figure of them is something like “stairs”. Obiviously only the points in the left of the leftmost stair are useful(because the rectangles are sorted).there are more details: 1.precalculate the color/number of points in all rectangles(I use bitset) 2.color the rectangle. and in the process of dp you need the information precalculated in the first step.the dp details are a little bit complicated but easy to think.At first I'm afraid this solution is not efficient enough, but after coding carefully, my solution runs in only $140ms$
• » » » » » 11 months ago, # ^ | +34 How to solve K? qwq
• » » » » » » 11 months ago, # ^ | ← Rev. 4 → +32 Suppose that we fix a way of distributing the ordinary projectors to seminars. Then there's a way to distribute the HD projectors iff for any integer $t,$ the time $t+0.5$ is not contained within more than $x$ intervals corresponding to seminars or lectures which do not already have a projector. (condition 1)It follows that we don't actually care about the exact intervals spanned by lectures. For example, $[1,4), [2,3)$ is indistinguishable from $[1,2),[2,3),[3,4),[2,3)$ or $[1,3),[2,4).$ Thus, we want to create some sort of flow graph with the seminars (not the lectures) as edges.The way of distributing the ordinary projectors must satisfy the following condition: Suppose that at time $t+0.5,$ exactly $X$ lectures and $Y$ seminars are happening. Then $X\le x, X+Y\le x+y,$ and at most $\min(y,x+y-X-Y)$ ordinary projectors are not being used at time $t+0.5$. (condition 2)Now let's construct a graph for the ordinary projectors. Create a vertex for each (compressed) time in increasing order from left to right. The source (left of all other vertices) has an edge to the first time and the last time has an edge to the sink (right of all other vertices). There will be edges for each seminar with capacity one and edges between every two adjacent times with capacities determined by (condition 2). All edges are directed from left to right. Then there will exist a distribution satisfying (condition 2) iff the max flow from the source to the sink is equal to $y.$If (condition 2) is satisfied, then (condition 1) will be satisfied as well, so after distributing the ordinary projectors we can distribute the HD projectors greedily. (Nice problem!)Code
• » » » » » 11 months ago, # ^ | ← Rev. 4 → +26 I have another interesting solution for D. First we can sort all persons by $r$ (departure time) in increasing order. For the $i$-th persion, consider who he will meet during the conference. We can define it as a set. $S(i)$ means the set of persons that the $i$-th persion will meet. For simplicity, we let $i \in S(i)$.According to the colors, there are 4 types of sets. The set contains at least two different nonzero colors. The colors in the set are the same, a nonzero color. The set contains two different colors, one is 0 and the other is nonzero. The colors in the set are all 0. When $S(i)$ is type 1, person $i$ is impossibe to be upset. When $S(i)$ is type 2, person $i$ must be upset.So we can preprocess these two types of sets and ignore them in the following solution.So the problem can be changed to this: we can select some sets (to make the corresponding persons upset), and assign a color to each set (For set of type 3, we can only assign the existing color). And this assignment is valid if and only if it doesn't exist that two sets are assigned different color but intersect with each other.Since $r$ (departure time) for everyone is increasing, we can describe $S(i)$ in a simple way. We can define $p(i)$ as the smallest $x$ that satisifies $r_x \geq l_i$. Then for all $p(i) \leq x \leq i, x \in S(i)$. And for those $x$ greater than $i$, $x \in S(i)$ if and only if $p(x) \leq i$. Hence, $S(i)= \{x|p(i)\leq x \leq i \vee p(x) \leq i \}$$S(i)$ consists of two parts, the first part of which is $\{x|p(i)\leq x \leq i \}$, a consecutive interval; the second part is $\{x|x>i,p(x) \leq i \}$, which may be some isolated points.Then we consider when will two sets $S(i)$ and $S(j)$ intersect. Suppose $i < j$, there are 3 types of intersection. The first part of $S(i)$ intersects with the first part of $S(j)$, if and only if $p(j)\leq i$. The second part of $S(i)$ intersects with the first part of $S(j)$, if and only if $\exists x, p(j)\leq x \leq j \wedge p(x) \leq i$. The second part of $S(i)$ intersects with the second part of $S(j)$, if and only if $\exists x, x>j \wedge p(x) \leq i$. (It is impossible that the first part of $S(i)$ intersects with the second part of $S(j)$ since $i < j$.)We can combine the conditions for the second and third types of intersection to $\exists x, x \geq p(j)\wedge p(x) \leq i$. We can define ${\rm minp}(i)=\min_{x\geq i}p(x)$. Then this condition can be rewrited as ${\rm minp}(p(j))\leq i$.Next, we can define $f(i) = \min(p(i), {\rm minp}(p(i)))$. Then $S(j)$ intersects with $S(i)$($i < j)$ if and only if $f(j) \leq i$. This is an elegant conclusion. After we get it, we can work out a dp solution. Define $dp(i)$ as the maximum number of unhappy persons when we only consider the first $i$ persons. After calculating $dp(i-1)$, how to get the value of $dp(i)$? First, we set $dp(i) = dp(i-1)$, then consider the maximum number of persons upset if we make person $i$ upset. We can enumerate the color $j$ assigned to $S(i)$, and then enumerate $k(0\leq k < p(i))$, which means the sets among $S(x)( k k$ and the color of $S(x)$ is $j$ or 0. We can use Binary Indexed Tree to maintain the amounts. This editorial is so long and I have written it for a long time because I'm not so good at English. But this approach is very easy to implement. I finished the code within 20 minutes. The time complexity is $O(n^2 k\log (n))$, where $k$ is the number of total colors. In the problem, $k=200$. Since we shouldn't enumerate all colors when $S(i)$ is type 3 (have a specific color), the complexity is far from reaching the upper bound. My solution runs in $93ms$.Here is my code.solution
• » » » » » » 11 months ago, # ^ | 0 For the reason that the number of add operation is $n$, so with prefix sum, the time complexity is $O(n^2k)$.
• » » 11 months ago, # ^ | +64 M is nice. For each bit $i$, let L = set of indices $x$ with bit $i = 0$, R = set of indices $y$ with bit $i = 1$ and $y + 1$ also has bit $i = 1$. Similarly, for each bit $i$, let L = set of indices $x$ with bit $i = 1$, R = set of indices $y$ with bit $i = 1$ and $y + 1$ also has bit $i = 0$. Now, we used 26 sets.Observe that only pairs $(x, y)$, with y = 01111...1 (the string of 1s is nonempty) and x = 1 such that there is at least one bit 1 in the suffix, are uncovered.Thus, it is sufficient to make $12$ more queries that cover all y of the form 0111...1 for a fixed 1-suffix length and then put all x that can be matched on the left hand side.In total, my solution uses $38$ queries.
• » » » 11 months ago, # ^ | +5 Can you please provide your implimentation if you have it.
• » » » » 11 months ago, # ^ | +3 My implementation if you need :/#include using namespace std; // const int N = 5007; // bool mark[N][N]; inline bool checkbit(int x, int p) { return (x & (1< 0; } int main() { ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int n; cin >> n; vector< vector > rowres, colres; // memset(mark, false, sizeof mark); for(int b=0; b<2; ++b) { for(int i=0; (1< rows, cols; for(int k=1; k<=n; ++k) { if(checkbit(k, i) == b) rows.push_back(k); else if(checkbit(k+1, i) != b) cols.push_back(k); } if(rows.empty() or cols.empty()) continue; rowres.push_back(rows); colres.push_back(cols); // for(int &x : rows) { // for(int &y : cols) { // mark[x][y] = true; // } // } } } for(int i=1; (1< cols; for(int y=1; y<=n; ++y) { if((y & suff) == suff and (y & suff_e) == suff) { cols.push_back(y); } } if(cols.empty()) continue; vector rows; for(int x=1; x<=n; ++x) { bool ok = true; for(int y : cols) { if(y == x or y == x-1) { ok = false; break; } } if(ok) rows.push_back(x); } if(rows.empty()) continue; rowres.push_back(rows); colres.push_back(cols); // for(int &x : rows) { // for(int &y : cols) { // mark[x][y] = true; // } // } } int sz = (int) rowres.size(); cout << sz << "\n"; for(int i=0; i
• » » » » » 11 months ago, # ^ | 0 Thank you very much.
• » » » 8 months ago, # ^ | 0 Hi can you elaborate on the solution. I'm finding it difficult to understand the notations you have used in your explanation. Thanks
• » » 11 months ago, # ^ | +19 Simpler solution for I:Sort all values. We loop through the number of elements from N to 1. Let's say now we are considering the ways to choose k elements. Start with the smallest k elements. We build our solutions progressively using 2 types of steps: Consider a pointer starting at the largest element. Each step consists of either increasing the current element to the next possible element, or move the pointer lefft and increase the current element to the next possible element. We can push the current sum, current position of pointer, position of previous element, position of current element into a priority queue to simulate this expansion. To restore the solution, just create an ID for each state and keep track of which move is used to reach the current state, and we can backtrack at the end.Complexity: $O(M\log M)$ Code#include #include #include using namespace std; using namespace __gnu_pbds; #define fi first #define se second #define mp make_pair #define pb push_back #define fbo find_by_order #define ook order_of_key typedef long long ll; typedef pair ii; typedef vector vi; typedef long double ld; typedef tree, rb_tree_tag, tree_order_statistics_node_update> pbds; pair par[3001101]; void solve() { int n; ll k; int m; scanf("%d %I64d %d",&n,&k,&m); vector > vec; for(int i=0;i > concerts; ll cursum=0; for(int i=0;i=1;siz--) { if(concerts.size()k) { cursum-=vec[siz-1].fi; continue; } int curid=0; priority_queue,vector >,greater > > pq; pq.push({cursum,0,n-1,siz-1,siz-1}); while(!pq.empty()&&concerts.size()(tmp); int id=get<1>(tmp); int maxnw=get<2>(tmp); int cur=get<3>(tmp); int curele=get<4>(tmp); lasid=id; concerts.pb({siz,S}); //proceed with current (type=0) if(cur0&&cur>curele&&S+vec[curele].fi-vec[curele-1].fi<=k) { pq.push({S+vec[curele].fi-vec[curele-1].fi,++curid,cur-1,curele,curele-1}); par[curid].fi=id; par[curid].se=1; } } cursum-=vec[siz-1].fi; } } printf("%d\n",int(concerts.size())); for(int i=0;i0) { seq.pb(par[lasid].se); lasid=par[lasid].fi; } vi ans; for(int i=0;i=0;i--) { if(seq[i]) {curid--; ans[curid]++;} else ans[curid]++; } for(int x:ans) { printf("%d ",vec[x].se); } printf("\n"); } } int main() { int t; scanf("%d",&t); while(t--) solve(); }
» 11 months ago, # | +21 MikeMirzayanov codes are still not visible, will it be fixed or it will remain like this ?
• » » 11 months ago, # ^ | 0 In fact, codes of a problem in this contest will be visible after you solve it.
• » » » 11 months ago, # ^ | 0 For me visible are codes of problems which I solved. If you see codes of all problems maybe it is because of coach mode.
• » » » » 11 months ago, # ^ | +1 It's the same to me. I can't see the solutions of problems I haven't upsolved. If you want to find a solution of a problem in this contest, you can search tutorial in this blog. Other users may have posted their own solutions. If the tutorial of the problem is not posted, you can ask about it. I will try to help you.
• » » » » » 11 months ago, # ^ | 0 Thanks, can you tell the solution of C and N.In N I came up what is the main idea, but I don't know how to implement it easily.
» 11 months ago, # | ← Rev. 2 → 0 Does this competition have any influence on ratings?
• » » 11 months ago, # ^ | +3 of course not
• » » » 11 months ago, # ^ | 0 Thanks a lot.
» 11 months ago, # | +3 Is there any solutions to these problems ? Officially or unofficially.
» 11 months ago, # | 0 Can anyone provide a link to there submission for problem B and J. Thanks
• » » 11 months ago, # ^ | 0 in J you should binary search the amount of soldiers in one row. in B you should sort teams by size and consider you put 1st and last together, then 2nd and last-1 ans so on for each capacity you must check amount of rides and minimal capacity*number of rides among them will be the answer
• » » » 11 months ago, # ^ | 0 can u provide link to your solution.. Thanks for replying..
• » » » » 11 months ago, # ^ | +3
• » » » » » 11 months ago, # ^ | +1 Thanks for providing the solutions, it help me a lot...
• » » » » » 6 months ago, # ^ | 0 Can you explain why your solution works. You have explained your approach and I understood what you have done. But I can't really understand how we can restrict possible values for "Maximum capacity of single ride".
• » » » » » 4 months ago, # ^ | 0 Thanks, nChurgulia! Can you explain what's the magic is here: if(arr[i] + rem < c){ rem = arr[i]; }else{ rem = (arr[i] + rem)%c; } Why this does not work: ans += (arr[i] + rem)/c; rem = (arr[i] + rem)%c;
• » » » » » » 2 months ago, # ^ | 0 Because difference of heights can not be greater than 1, in this case you are accumulating rem
• » » » » » » 6 weeks ago, # ^ | ← Rev. 2 → 0 If you set rem = (arr[i] +rem)%c, there are chances that you will have soldiers of three different heights in a single row and you don't want that to happen. So, it is mandatory to check whether (rem+ arr[i]) >=c. If (rem+arr[i]) < c and you still carry on with making the row, then this row will have soldiers of height (i-1 i.e. rem) && (i) && (i+1) which are three different heights.
» 11 months ago, # | +4 Please write the blog for editorial, it will help in upsolveing the problem. Thanks
» 11 months ago, # | 0 Can you please make all sources visible?
» 11 months ago, # | +3 How to solve G?
• » » 11 months ago, # ^ | +5 Enumerate on the position of last Reset operation, then we can use binary search to determine if this is possible. Also we need to know at least how many Reset operations we need to do before this one, which can be preprocessed using dynamic programming. The transition can be done using either binary search or two pointers. Time complexity should be $O(n\log{n})$.
• » » » 11 months ago, # ^ | +3 Can you explain your approach in more details?
» 11 months ago, # | 0 How to solve L ?
• » » 11 months ago, # ^ | ← Rev. 2 → 0 Just find the answer with a binary search.We can assume that $a \geq c$ because they are symmetric, then for an answer $k$ wee need to check whether we can satisfy the condition.Suppose we have all $a$ in set $A$, $b$ in set $B$ and $c$ in set $C$, then we need to adjust it.If $a \leq k$, obviously $c \leq a \leq k$, we only need to check if we can put some person b to $A$ or $C$.If $a \geq c > k$, we failed, otherwise we put $a - k$ person a in $B$, and we can take at most $b$ person b from $B$ to $C$, just to find out whether it is possible.
» 11 months ago, # | +10 I am wondering if everything is ok with checker for task I.During contest I've submitted solution. After contest with some assertions it turned out that test 9 is maxtest ($n, m = 10^6, k = 10^{18}, a_i = 10^{12}$). So I asserted to check if my answer is correct and I got wa instead of rte. To check if I haven't made any mistake (in asserts etc) I took accepted code from comments and with diff checked on my laptop that I calculated answer correctly. Now I have no idea what's wrong with my code/output, can someone investigate this (maybe I made a mistake, but I just can't see where)?
• » » 11 months ago, # ^ | +10 You solution prints lines like 999998 999998000000000000 in the bottom of the output.
• » » » 11 months ago, # ^ | 0 Thank you. Sorry, I made a stupid bug in assertions.
» 11 months ago, # | ← Rev. 2 → 0 Can anyone provide a solution for L? It will help me a lot... Thanks!
• » » 11 months ago, # ^ | 0 Here it is.
• » » » 11 months ago, # ^ | 0 Thanks a lot!
» 10 months ago, # | 0 How to solve problem K ?
» 2 months ago, # | 0 can someone please explain L more clearly,using binary search?
|
2020-09-24 18:51:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5919684171676636, "perplexity": 1068.057011300226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219691.59/warc/CC-MAIN-20200924163714-20200924193714-00605.warc.gz"}
|
https://toph.co/p/brownian-motion
|
# Brownian Motion
ULAB Take-off Programming...
Limits 1s, 512 MB
Microscopic particles suspended on any fluid has a chaotic behavior since they do not stay at same location. Every moment they experience a number of collisions with fluid molecules from different directions. As a result of these collisions,the particle moves randomly as if it is confused all the time. In physics, this phenomenon is known as Brownian Motion named after the Scottish botanist Robert Brown who first described the phenomenon in 1827. Then, almost 80 years later, in 1905, Albert Einstein published a paper explaining Brownian Motion to establish the idea of existence of atoms and molecules. Now, more than 100 years later, with advancements of computer science, modeling the physical phenomena has become an important task for physicists. Since you are a great programmer and love to solve challenging problems, a group of physicists from your country came to you to help them solve this problem regarding Brownian Motion.
For a particle placed on the water surface, the physicists want to know the probability of the particle reaching another point on the surface after a certain amount of time. To model the scenario they imagined the surface of the fluid as a two-dimensional grid and each cell of the grid represents a point on the fluid surface. At each millisecond, the particle moves from one cell to any of its eight neighbor cells. The surface is a closed system, therefore, particles can not escape by crossing the boundary of the grid.
Initially, the surface is divided into an $n\times n$ grid, where, $1 \lt n \le 100$. Cells are identified with the their row and column number as $(r,c)$ where $0\le r, c \lt n$. If a particle is located at $(r,c)$, after a millisecond it must move to any of the following cells $(r, c+1)$, $(r,c-1)$, $(r+1, c)$, $(r+1, c+1)$, $(r+1,c-1)$, $(r-1, c)$, $(r-1, c+1)$, $(r-1, c-1)$. You will be given the initial position $(r_i, c_i)$and final position $(r_f, c_f)$. You have to find the probability of reaching from $(r_i, c_i)$ to $(r_f, c_f)$ after $t$ milliseconds, where, $0\le t \le 10^3$.
The probability can be represented as $\frac{P}{Q}$ where $P, Q$ are co-prime integers. Let, $Q^{-1}$ be an integer for which $Q.Q^{-1} \equiv 1 \pmod{10^9+7}$. You have to print $P.Q^{-1} \pmod{10^9+7}$.
## Input
The input consist of two lines.
First line of input contains two integers $n$$(1\lt n \le 100)$ and $t$$(0\le t \le 10^3)$ separated by space.
Following line contains 4 integers $r_i, c_i, r_f, c_f$ $(0 \le r_i, c_i, r_f, c_f \lt n)$ separated by space.
## Output
Print a single integer, the probability of reaching initial to final cell after $t$ milliseconds as $P.Q^{-1} \pmod{10^9+7}$.
## Samples
InputOutput
6 0
2 3 2 3
1
InputOutput
60 110
4 40 2 56
448675357
InputOutput
4 2
1 1 2 2
281250002
Note: The particle must move after a millisecond. It can not stay at the same cell for more than 1 millisecond.
|
2022-08-15 07:34:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 32, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6317644715309143, "perplexity": 370.82588944439686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00396.warc.gz"}
|
http://journeydots.com/sixties-baby
|
Sixties Baby
The strength of a mother
One life, one journey, one dot at a time... making sense of it all from the inside, out.
|
2020-02-29 09:00:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8001821637153625, "perplexity": 7500.614759246171}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148850.96/warc/CC-MAIN-20200229083813-20200229113813-00193.warc.gz"}
|
https://intelligencemission.com/free-electricity-images-free-electricity-weekends.html
|
Free Power’s law is overridden by Pauli’s law, where in general there must be gaps in heat transfer spectra and broken sýmmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus’s law, where anisotropic media like polarizers selectively interact with radiation.
Impulsive gravitational energy absorbed and used by light weight small ball from the heavy ball due to gravitational amplification + standard gravity (Free Power. Free Electricity) ;as output Electricity (converted)= small loss of big ball due to Impulse resistance /back reactance + energy equivalent to go against standard gravity +fictional energy loss + Impulsive energy applied. ” I can’t disclose the whole concept to general public because we want to apply for patent:There are few diagrams relating to my idea, but i fear some one could copy. Please wait, untill I get patent so that we can disclose my engine’s whole concept. Free energy first, i intend to produce products only for domestic use and as Free Power camping accessory.
No, it’s not alchemy or magic to understand the attractive/resistive force created by magnets which requires no expensive fuel to operate. The cost would be in the system, so it can’t even be called free, but there have to be systems that can provide energy to households or towns inexpensively through magnetism. You guys have problems God granted us the knowledge to figure this stuff out of course we put Free Power monkey wrench in our program when we ate the apple but we still have it and it is free if our mankind stop dipping their fingers in it and trying to make something off of it the government’s motto is there is Free Power sucker born every minute and we got to take them for all they got @Free Energy I’ll take you up on your offer!!! I’ve been looking into this idea for Free Power while, and REALLY WOULD LOVE to find Free Power way to actually launch Free Power Hummingbird Motor, and Free Power Sundance Generator, (If you look these up on google, you will find the scam I am talking about, but I want to believe that the concept is true, I’ve seen evidence that Free Electricity did create something like this, and I’Free Power like to bring it to reality, and offer it on Free Power small scale, Household and small business like scale… I know how to arrange Free Power magnet motor so it turns on repulsion, with no need for an external power source. My biggest obstacle is I do not possess the building skills necessary to build it. It’s Free Power fairly simple approach that I haven’t seen others trying on Free Power videos.
Free Energy to leave possible sources of motive force out of it. 0. 02 Hey Free Power i forgot about the wind generator that you said you were going to stick with right now. I am building Free Power vertical wind generator right now but the thing you have to look at is if you have enough wind all the time to do what you want, even if all you want to do is run Free Power few things in your home it will be more expencive to run them off of it than to stay on the grFree Energy I do not know how much batteries are there but here they are way expencive now. Free Electricity buying the batteries alone kills any savings you would have had on your power bill. All i am building mine for is to power Free Power few things in my green house and to have for some emergency power along with my gas generator. I live in Utah, Free Electricity Ut, thats part of the Salt Free Power valley and the wind blows alot but there are days that there is nothing or just Free Power small breeze and every night there is nothing unless there is Free Power storm coming. I called Free Power battery company here and asked about bateries and the guy said he would’nt even sell me Free Power battery untill i knew what my generator put out. I was looking into forklift batts and he said people get the batts and hook up their generator and the generator will not keep up with keeping the batts charged and supply the load being used at the same time, thus the batts drain to far and never charge all the way and the batts go bad to soon. So there are things to look at as you build, especially the cost. Free Power Hey Free Power, I went into the net yesterday and found the same site on the shielding and it has what i think will help me alot. Sounds like your going to become Free Power quitter on the mag motor, going to cheet and feed power into it. Im just kidding, have fun. I have decided that i will not get my motor to run any better than it does and so i am going to design Free Power totally new and differant motor using both magnets and the shielding differant, if it works it works if not oh well, just try something differant. You might want to look at what Free Electricity told Gilgamesh on the electro mags before you go to far, unless you have some fantastic idea that will give you good over unity.
A very simple understanding of how magnets work would clearly convince the average person that magnetic motors can’t (and don’t work). Pray tell where does the energy come from? The classic response is magnetic energy from when they were made. Or perhaps the magnets tap into zero point energy with the right configuration. What about they harness the earth’s gravitational field. Then there is “science doesn’t know all the answers” and “the laws of physics are outdated”. The list goes on with equally implausible rubbish. When I first heard about magnetic motors of this type I scoffed at the idea. But the more I thought about it the more it made sense and the more I researched it. Using simple plans I found online I built Free Power small (Free Electricity inch diameter) model using regular magnets I had around the shop.
Or, you could say, “That’s Free Power positive Delta G. “That’s not going to be spontaneous. ” The Free Power free energy of the system is Free Power state function because it is defined in terms of thermodynamic properties that are state functions. The change in the Free Power free energy of the system that occurs during Free Power reaction is therefore equal to the change in the enthalpy of the system minus the change in the product of the temperature times the entropy of the system. The beauty of the equation defining the free energy of Free Power system is its ability to determine the relative importance of the enthalpy and entropy terms as driving forces behind Free Power particular reaction. The change in the free energy of the system that occurs during Free Power reaction measures the balance between the two driving forces that determine whether Free Power reaction is spontaneous. As we have seen, the enthalpy and entropy terms have different sign conventions. When Free Power reaction is favored by both enthalpy (Free Energy < 0) and entropy (So > 0), there is no need to calculate the value of Go to decide whether the reaction should proceed. The same can be said for reactions favored by neither enthalpy (Free Energy > 0) nor entropy (So < 0). Free energy calculations become important for reactions favored by only one of these factors. Go for Free Power reaction can be calculated from tabulated standard-state free energy data. Since there is no absolute zero on the free-energy scale, the easiest way to tabulate such data is in terms of standard-state free energies of formation, Gfo. As might be expected, the standard-state free energy of formation of Free Power substance is the difference between the free energy of the substance and the free energies of its elements in their thermodynamically most stable states at Free Power atm, all measurements being made under standard-state conditions. The sign of Go tells us the direction in which the reaction has to shift to come to equilibrium. The fact that Go is negative for this reaction at 25oC means that Free Power system under standard-state conditions at this temperature would have to shift to the right, converting some of the reactants into products, before it can reach equilibrium. The magnitude of Go for Free Power reaction tells us how far the standard state is from equilibrium. The larger the value of Go, the further the reaction has to go to get to from the standard-state conditions to equilibrium. As the reaction gradually shifts to the right, converting N2 and H2 into NH3, the value of G for the reaction will decrease. If we could find some way to harness the tendency of this reaction to come to equilibrium, we could get the reaction to do work. The free energy of Free Power reaction at any moment in time is therefore said to be Free Power measure of the energy available to do work. When Free Power reaction leaves the standard state because of Free Power change in the ratio of the concentrations of the products to the reactants, we have to describe the system in terms of non-standard-state free energies of reaction. The difference between Go and G for Free Power reaction is important. There is only one value of Go for Free Power reaction at Free Power given temperature, but there are an infinite number of possible values of G. Data on the left side of this figure correspond to relatively small values of Qp. They therefore describe systems in which there is far more reactant than product. The sign of G for these systems is negative and the magnitude of G is large. The system is therefore relatively far from equilibrium and the reaction must shift to the right to reach equilibrium. Data on the far right side of this figure describe systems in which there is more product than reactant. The sign of G is now positive and the magnitude of G is moderately large. The sign of G tells us that the reaction would have to shift to the left to reach equilibrium.
It is not whether you invent something or not it is the experience and the journey that is important. To sit on your hands and do nothing is Free Power waste of life. My electrical engineer friend is saying to mine, that it can not be done. Those with closed minds have no imagination. This and persistance is what it takes to succeed. The hell with the laws of physics. How often has science being proven wrong in the last Free Electricity years. Dont let them say you are Free Power fool. That is what keeps our breed going. Dont ever give up. I’ll ignore your attempt at sarcasm. That is an old video. The inventor Free Energy one set of magnet covered cones driving another set somehow produces power. No explanation, no test results, no published information.
Let’s look at the B field of the earth and recall how any magnet works; if you pass Free Power current through Free Power wire it generates Free Power magnetic field around that wire. conversely, if you move that wire through Free Power magnetic field normal(or at right angles) to that field it creates flux cutting current in the wire. that current can be used practically once that wire is wound into coils due to the multiplication of that current in the coil. if there is any truth to energy in the Ether and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught i don’t know(because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that are simple, straight forward experiments one can perform. I have some questions and I know that there are some “geniuses” here who can answer all of them, but to start with: If Free Power magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like Free Power jet turbine running past its point of stability? How can Free Power magnet pass Free Power coil of wire at the speed of Free Power human Free Power and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in Free Power magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations.
Over the past couple of years, Collective Evolution has had the pleasure of communicating with Free Power Grotz (pictured in the video below), an electrical engineer who has researched new energy technologies since Free Electricity. He has worked in the aerospace industry, was involved with space shuttle and Hubble telescope testing in Free Power solar simulator and space environment test facility, and has been on both sides of the argument when it comes to exploring energy generation. He has been involved in exploring oil and gas and geothermal resources, as well as coal, natural gas, and nuclear power-plants. He is very passionate about new energy generation, and recognizes that the time to make the transition is now.
|
2021-01-18 20:14:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44857057929039, "perplexity": 742.477183312264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515235.25/warc/CC-MAIN-20210118185230-20210118215230-00167.warc.gz"}
|
https://en.wikipedia.org/wiki/Inequity_aversion
|
# Inequity aversion
Inequity aversion (IA) is the preference for fairness and resistance to incidental inequalities.[1] The social sciences that study inequity aversion include sociology, economics, psychology, anthropology, and ethology.
## Human studies
Inequity aversion research on humans mostly occurs in the discipline of economics though it is also studied in sociology.
Research on inequity aversion began in 1978 when studies suggested that humans are sensitive to inequities in favor of as well as those against them, and that some people attempt overcompensation when they feel "guilty" or unhappy to have received an undeserved reward.[2]
A more recent definition of inequity aversion (resistance to inequitable outcomes) was developed in 1999 by Fehr and Schmidt.[1] They postulated that people make decisions so as to minimize inequity in outcomes. Specifically, consider a setting with individuals {1,2,...,n} who receive pecuniary outcomes xi. Then the utility to person i would be given by
${\displaystyle U_{i}(\{x_{i},x_{j}\})=x_{i}-{\frac {\alpha _{i}}{n-1}}\times \sum {\max(x_{j}-x_{i},0)}-{\frac {\beta _{i}}{n-1}}\times \sum {\max(x_{i}-x_{j},0)},}$
where α parametrizes the distaste of person i for disadvantageous inequality in the first nonstandard term, and β parametrizes the distaste of person i for advantageous inequality in the final term.
### Punishing unjust success and game theory
Fehr and Schmidt showed that disadvantageous inequity aversion manifests itself in humans as the "willingness to sacrifice potential gain to block another individual from receiving a superior reward". They argue that this apparently self-destructive response is essential in creating an environment in which bilateral bargaining can thrive. Without inequity aversion's rejection of injustice, stable cooperation would be harder to maintain (for instance, there would be more opportunities for successful free riders).[3]
James H. Fowler and his colleagues also argue that inequity aversion is essential for cooperation in multilateral settings.[4] In particular, they show that subjects in random income games (closely related to public goods games) are willing to spend their own money to reduce the income of wealthier group members and increase the income of poorer group members even when there is no cooperation at stake.[5] Thus, individuals who free ride on the contributions of fellow group members are likely to be punished because they earn more, creating a decentralized incentive for the maintenance of cooperation.
### Experimental economics
Inequity aversion is broadly consistent with observations of behavior in three standard economics experiments:
1. Dictator game – The subject chooses how a reward should be split between himself and another subject. If the dictator acted self-interestedly, the split would consist of 0 for the partner and the full amount for the dictator. While the most common choice is indeed to keep everything, many dictators choose to give, with the second most common choice being the 50:50 split.
2. Ultimatum game – The dictator game is played, but the recipient is allowed to veto the entire deal, so that both subjects receive nothing. The partner typically vetos the deal when low offers are made. People consistently prefer getting nothing to receiving a small share of the pie. Rejecting the offer is in effect paying to punish the dictator (called the proposer).
3. Trust game – The same result as found in the dictator game shows up when the dictator's initial endowment is provided by his partner, even though this requires the first player to trust that something will be returned (reciprocity). This experiment often yields a 50:50 split of the endowment, and has been used as evidence of the inequity aversion model.
In 2005, John List modified these experiments slightly to determine if something in the construction of the experiments was prompting specific behaviors. When given a choice to steal money from the other player, even a single dollar, the observed altruism all but disappeared. In another experiment, the two players were given a sum of money and the choice to give or take any amount from the other player. In this experiment, only 10% of the participants gave the other person any money at all, and fully 40% of the players opted to take all of the other player's money.
The last such experiment was identical to the former, where 40% were turned into a gang of robbers, with one catch: the two players were forced to earn the money by stuffing envelopes. In this last experiment, more than two thirds of the players neither took nor gave a cent, while just over 20% still took some of the other player's money.
In 2011, Ert Erev and Roth[6] ran a model prediction competition on two datasets, each of which included 120 two-player games. In each game player 1 decides whether to "opt out" and determine the payoffs for both players, or to "opt in" and let player 2 decide about the payoff allocation by choosing between actions "left" or "right". The payoffs were randomly selected, so the dataset included games like the Ultimatum, Dictator, and Trust, as well as other games. The results suggested that inequity aversion could be described as one of many strategies that people might use in such games.
Other research in experimental economics addresses risk aversion in decision making[7] and the comparison of inequality measures to subjective judgments on perceived inequalities.[8]
### Studies of companies
Surveys of employee opinions within firms have shown modern labor economists that inequity aversion is very important to them. Employees compare not only relative salaries but also relative performance against that of co-workers. Where these comparisons lead to guilt or envy, inequity aversion may lower employee morale. According to Bewley (1999), the main reason that managers create formal pay structures is so that the inter-employee comparison is seen to be "fair", which they considered "key" for morale and job performance.[9]
It is natural to think of inequity aversion leading to greater solidarity within the labor pool, to the benefit of the average employee. However, a 2008 paper by Pedro Rey-Biel shows that this assumption can be subverted, and that an employer can use inequity aversion to get higher performance for less pay than would be possible otherwise.[10] This is done by moving away from formal pay structures and using off-equilibrium bonus payments as incentives for extra performance. He shows that the optimal contract for inequity aversion employees is less generous at the optimal production level than contracts for "standard agents" (who don't have inequity aversion) in an otherwise identical two-employee model.
### Criticisms
In 2005 Avner Shaked distributed a "pamphlet" entitled "The Rhetoric of Inequity Aversion" that attacked the inequity aversion papers of Fehr & Schmidt.[11] In 2010, Shaked has published an extended version of the criticism together with Ken Binmore in the Journal of Economic Behavior and Organization (the same issue also contains a reply by Fehr and Schmidt and a rejoinder by Binmore and Shaked).[12][13][14] A problem of inequity aversion models is the fact that there are free parameters; standard theory is simply a special case of the inequity aversion model. Hence, by construction inequity aversion must always be at least as good as standard theory when the inequity aversion parameters can be chosen after seeing the data. Binmore and Shaked also point out that Fehr and Schmidt (1999) pick a distribution of alpha and beta without conducting a formal estimation. The perfect correlation between the alpha and beta parameters in Fehr and Schmidt (1999) is an assumption made in the appendix of their paper that is not justified by the data that they provide.
More recently, several papers have estimated Fehr-Schmidt inequity aversion parameters using estimation techniques such as maximum likelihood. The results are mixed. Some authors have found beta larger than alpha, which contradicts a central assumption made by Fehr and Schmidt (1999).[15] Other authors have found that inequity aversion with Fehr and Schmidt's (1999) distribution of alphas and betas explains data of contract-theoretic experiments not better than standard theory; they also estimate average values of alpha that are much smaller than suggested by Fehr and Schmidt (1999).[16] Moreover, Levitt and List (2007) have pointed out that laboratory experiments tend to exaggerate the importance of pro-social behaviors because the subjects in the laboratory know that they are being monitored.[17]
An alternative[8] to the concept of a general inequity aversion is the assumption, that the degree and the structure of inequality could lead either to acceptance or to aversion of inequality.
## Non-human studies
An experiment on capuchin monkeys (Brosnan, S and de Waal, F) showed that the subjects would prefer receiving nothing to receiving a reward awarded inequitably in favor of a second monkey, and appeared to target their anger at the researchers responsible for the inequitable distribution of food.[18] Anthropologists suggest that this research indicates a biological and evolutionary sense of social "fair play" in primates, though others believe that this is learned behavior or explained by other mechanisms.[citation needed] There is also evidence for inequity aversion in chimpanzees[19] (though see a recent study questioning this interpretation[20]). The latest study shows that chimpanzees play the Ultimatum Game in the same way as children, preferring equitable outcomes. The authors claim that we now are near the point of no difference between humans and apes with regard to a sense of fairness.[21] Recent studies suggest that animals in the canidae family also recognize a basic level of fairness, stemming from living in cooperative societies.[22] Animal cognition studies in other biological orders have not found similar importance on relative "equity" and "justice" as opposed to absolute utility.
## Social inequity aversion
Fehr and Schmidt's model may partially explain the widespread opposition to economic inequality in democracies, but a distinction should be drawn between inequity aversion's "guilt" and egalitarianism's "compassion", which does not necessarily imply injustice.
Inequity aversion should not be confused with the arguments against the consequences of inequality. For example, the pro-publicly funded health care slogan "Hospitals for the poor become poor hospitals" directly objects to a predicted decline in medical care, not the health-care apartheid that is supposed to cause it. The argument that average medical outcomes improve with reduction in healthcare inequality (at the same total spending) is separate from the case for public healthcare on the grounds of inequity aversion.
## References
1. ^ a b Fehr, E.; Schmidt, K.M. (1999). "A theory of fairness, competition, and cooperation". The Quarterly Journal of Economics. 114 (3): 817–68. doi:10.1162/003355399556151.
2. ^ Walster; Berscheid (1978). Equity: theory and research. ISBN 0-205-05929-5.
3. ^ http://epub.ub.uni-muenchen.de/726/1/Fehr-Schmidt_Handbook_2005-Munichecon.pdf
4. ^ Fowler JH, Johnson T, Smirnov O. "Egalitarian Motive and Altruistic Punishment," Nature 433: doi:10.1038/nature03256 (6 January 2005)
5. ^ Dawes CT, Fowler JH, Johnson T, McElreath R, Smirnov O. "Egalitarian Motives in Humans," Nature 446: 794–96, doi:10.1038/nature05651 (12 April 2007)
6. ^ Ert E, Erev I, Roth AE "A Choice Prediction Competition for Social Preferences in Simple Extensive Form Games: An Introduction." Games 2.3 (2011): 257–76.
7. ^ Berg, Joyce E., and Thomas A. Rietz's University of Iowa Discussion Paper, from 1997 Do Unto Others: A Theory and Experimental Test of Interpersonal Factors in Decision Making Under Uncertainty examines the increased risk aversion from lottery-choice games to multi-party dealing. It suggests that this could be explained by altruism and a concern for an equitable distribution among all parties (fairness). This paper also used the phrase 'inequity aversion'
8. ^ a b Yoram Amiel (author), Frank A. Cowell: Thinking about Inequality: Personal Judgment and Income Distributions, 2000
9. ^ Bewley, T. (1999) Why wages don't fall during a Recession. Harvard University Press, ISBN 0-674-95241-3
10. ^ Rey-Biel, P. (June 2008). "Inequity Aversion and Team Incentives". The Scandinavian Journal of Economics. 10 (2): 297–320. doi:10.1111/j.1467-9442.2008.00540.x.
11. ^ Shaked, Avner (2005). "The Rhetoric of Inequity Aversion". SSRN . Bonn University.
12. ^ Binmore, Ken; Shaked, Avner (2010). "Experimental economics: Where next?". Journal of Economic Behavior & Organization. On the Methodology of Experimental Economics. 73 (1): 87–100. doi:10.1016/j.jebo.2008.10.019.
13. ^ Fehr, Ernst; Schmidt, Klaus M. (2010). "On inequity aversion: A reply to Binmore and Shaked". Journal of Economic Behavior & Organization. On the Methodology of Experimental Economics. 73 (1): 101–08. doi:10.1016/j.jebo.2009.12.001.
14. ^ Binmore, Ken; Shaked, Avner (2010). "Experimental Economics: Where Next? Rejoinder". Journal of Economic Behavior & Organization. On the Methodology of Experimental Economics. 73 (1): 120–21. doi:10.1016/j.jebo.2009.11.008.
15. ^ Bellemare, Charles; Kröger, Sabine; Van Soest, Arthur (2008). "Measuring Inequity Aversion in a Heterogeneous Population Using Experimental Decisions and Subjective Probabilities". Econometrica. 76 (4): 815–39. doi:10.1111/j.1468-0262.2008.00860.x. ISSN 1468-0262.
16. ^ Hoppe, Eva I.; Schmitz, Patrick W. (2013). "Contracting under Incomplete Information and Social Preferences: An Experimental Study". The Review of Economic Studies. 80 (4): 1516–44. doi:10.1093/restud/rdt010. ISSN 0034-6527.
17. ^ Levitt, Steven D; List, John A (2007). "What Do Laboratory Experiments Measuring Social Preferences Reveal About the Real World?". Journal of Economic Perspectives. 21 (2): 153–74. doi:10.1257/jep.21.2.153. ISSN 0895-3309.
18. ^ Brosnan, S.F.; de Waal, F.B.M. (2003). "Monkeys reject unequal pay". Nature. 425 (6955): 297–99. doi:10.1038/nature01963. PMID 13679918.
19. ^ Brosnan, S. F.; Schiff, H. C.; de Waal, F. B. M. (2005). "Tolerance for inequity may increase with social closeness in chimpanzees" (PDF). Proceedings of the Royal Society B. 272 (1560): 253–58. doi:10.1098/rspb.2004.2947. PMC . PMID 15705549. Archived from the original (PDF) on 2005-04-09.
20. ^ Bräuer, J.; Call, J.; Tomasello, M. (2009). "Are Apes Inequity Averse? New Data on the Token-Exchange Paradigm" (PDF). American Journal of Primatology. 71 (2): 175–81. doi:10.1002/ajp.20639. PMID 19021260.
21. ^ Proctor, D.; et al. (2013). "Chimpanzees play the ultimatum game". Proceedings of the National Academy of Sciences USA. 110: 2070–75. doi:10.1073/pnas.1220806110.
22. ^ Greenfieldboyce, Nell (9 December 2009). "Dogs Understand Fairness, Get Jealous, Study Finds". NPR.
|
2018-08-16 10:05:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5661870837211609, "perplexity": 5444.972703405097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210615.10/warc/CC-MAIN-20180816093631-20180816113631-00276.warc.gz"}
|
https://www.isa.uni-stuttgart.de/institut/team/Steinwart-00002/
|
Univ.-Prof. Dr. rer. nat.
# Ingo Steinwart
Professor, Institutsleitung
Institut für Stochastik und Anwendungen
Lehrstuhl für Stochastik
## Kontakt
+49 711 685-65388
Website
Pfaffenwaldring 57
70569 Stuttgart
Deutschland
Raum: 8.544
## Sprechstunde
Nach Vereinbarung via E-Mail
## Fachgebiet
• Statistische Lerntheorie
• Kernbasierte Lernverfahren
• Cluster Analysis
• Effiziente Lernverfahren für große Datenmengen
• Verlustfunktionen
• Lernen mit nicht .i.i.d. Daten
• Anwendungen von Lernverfahren
• Reproduzierende Kern-Hilberträume
Publikationen aus Puma:
1. ### 2017
1. P. Thomann, I. Steinwart, I. Blaschzyk, and M. Meister, “Spatial Decompositions for Large Scale SVMs,” in Proceedings of Machine Learning Research Volume 54: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics 2017, 2017, pp. 1329--1337.
2. I. Steinwart, “A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths,” JOURNAL OF APPROXIMATION THEORY, vol. 215, pp. 13–27, 2017.
3. I. Steinwart, “Representation of quasi-monotone functionals by families of separating hyperplanes,” MATHEMATISCHE NACHRICHTEN, vol. 290, no. 11–12, pp. 1859–1883, 2017.
4. I. Steinwart and P. Thomann, “liquidSVM: A Fast and Versatile SVM Package,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
5. I. Steinwart, B. K. Sriperumbudur, and P. Thomann, “Adaptive Clustering Using Kernel Density Estimators,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
6. I. Steinwart, “A Short Note on the Comparison of Interpolation Widths, Wntropy Numbers, and Kolmogorov Widths,” J. Approx. Theory, vol. 215, pp. 13--27, 2017.
7. I. Steinwart, “Convergence Types and Rates in Generic Karhunen-Loève Expansions with Applications to Sample Paths Properties,” ArXiv e-prints, 2017.
8. H. Hang and I. Steinwart, “A Bernstein-type Inequality for Some Mixing Processes and Dynamical Systems with an Application to Learning,” Ann. Statist., vol. 45, pp. 708--743, 2017.
9. S. Fischer and I. Steinwart, “Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
10. M. Farooq and I. Steinwart, “Learning Rates for Kernel-Based Expectile Regression,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
11. M. Farooq and I. Steinwart, “An SVM-like Approach for Expectile Regression,” Comput. Statist. Data Anal., vol. 109, pp. 159--181, 2017.
2. ### January 2016
1. I. Steinwart, “Simons’ SVM: A fast SVM toolbox,” no. \dots, Jan-2016.
3. ### 2016
1. P. Thomann, I. Steinwart, I. Blaschzyk, and M. Meister, “Spatial Decompositions for Large Scale SVMs.,” CoRR, vol. abs/1612.00374, 2016.
2. I. Steinwart, P. Thomann, and N. Schmid, “Learning with Hierarchical Gaussian Kernels,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2016.
3. M. Meister and I. Steinwart, “Optimal Learning Rates for Localized SVMs,” J. Mach. Learn. Res., vol. 17, pp. 1–44, 2016.
4. H. Hang, I. Steinwart, Y. Feng, and J. A. K. Suykens, “Kernel Density Estimation for Dynamical Systems,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2016.
5. H. Hang, Y. Feng, I. Steinwart, and J. A. K. Suykens, “Learning theory estimates with observations from general stationary stochastic processes,” Neural Computation, vol. 28, pp. 2853--2889, 2016.
6. I. Blaschzyk and I. Steinwart, “Improved Classification Rates under Refined Margin Conditions,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2016.
4. ### 2015
1. P. Thomann, I. Steinwart, and N. Schmid, “Towards an Axiomatic Approach to Hierarchical Clustering of Measures,” J. Mach. Learn. Res., vol. 16, pp. 1949--2002, 2015.
2. I. Steinwart, “Fully Adaptive Density-Based Clustering,” Ann. Statist., vol. 43, pp. 2132--2167, 2015.
3. I. Steinwart, “Measuring the capacity of sets of functions in the analysis of ERM,” in Festschrift in Honor of Alexey Chervonenkis, A. Gammerman and V. Vovk, Eds. Berlin: Springer, 2015, pp. 223--239.
5. ### 2014
1. I. Steinwart, “Convergence Types and Rates in Generic Karhunen-Loève Expansions with Applications to Sample Path Properties,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2014–007, 2014.
2. I. Steinwart, C. Pasin, R. Williamson, and S. Zhang, “Elicitation and Identification of Properties,” in JMLR Workshop and Conference Proceedings Volume 35: Proceedings of the 27th Conference on Learning Theory 2014, 2014, pp. 482--526.
3. H. Hang and I. Steinwart, “Fast learning from α-mixing observations.,” J. Multivariate Analysis, vol. 127, pp. 184–199, 2014.
4. H. Hang and I. Steinwart, “Fast Learning from $\alpha$-mixing Observations,” J. Multivariate Anal., vol. 127, pp. 184--199, 2014.
6. ### 2013
1. I. Steinwart, “Supplement A to Fully Adaptive Density-Based Clustering’’,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2013–016, 2013.
2. I. Steinwart, “Supplement B to Fully Adaptive Density-Based Clustering’’,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2013.
3. I. Steinwart, “Some Remarks on the Statistical Analysis of SVMs and Related Methods,” in Empirical Inference -- Festschrift in Honor of Vladimir N. Vapnik, B. Schölkopf, Z. Luo, and V. Vovk, Eds. Berlin: Springer, 2013, pp. 25–36.
4. M. Eberts and I. Steinwart, “Optimal regression rates for SVMs using Gaussian kernels,” Electron. J. Stat., vol. 7, pp. 1--42, 2013.
7. ### 2012
1. I. Steinwart and C. Scovel, “Mercer’s Theorem on General Domains: on the Interaction between Measures, Kernels, and RKHSs,” Constr. Approx., vol. 35, pp. 363--417, 2012.
2. B. K. Sriperumbudur and I. Steinwart, “Consistency and Rates for Clustering with DBSCAN,” in JMLR Workshop and Conference Proceedings Volume 22: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics 2012, 2012, pp. 1090–1098.
3. L. Bornn, M. Anghel, and I. Steinwart, “Forecasting with Historical Data or Process Knowledge under Misspecification: A Comparison,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2012.
8. ### 2011
1. I. Steinwart and A. Christmann, “Estimating Conditional Quantiles with the Help of the Pinball Loss,” Bernoulli, vol. 17, pp. 211--225, 2011.
2. I. Steinwart, D. Hush, and C. Scovel, “Training SVMs without offset,” J. Mach. Learn. Res., vol. 12, pp. 141--202, 2011.
3. I. Steinwart, “Adaptive Density Level Set Clustering,” in JMLR Workshop and Conference Proceedings Volume 19: Proceedings of the 24th Conference on Learning Theory 2011, 2011, pp. 703--738.
4. M. Eberts and I. Steinwart, “Optimal learning rates for least squares SVMs using Gaussian kernels,” in Advances in Neural Information Processing Systems 24, 2011, pp. 1539--1547.
9. ### 2010
1. I. Steinwart, J. Theiler, and D. Llamocca, “Using support vector machines for anomalous change detection,” in IEEE Geoscience and Remote Sensing Society and the IGARSS 2010, 2010, pp. 3732--3735.
2. C. Scovel, D. Hush, I. Steinwart, and J. Theiler, “Radial kernels and their reproducing kernel Hilbert spaces,” J. Complexity, vol. 26, pp. 641–660, 2010.
3. A. Christmann and I. Steinwart, “Universal Kernels on Non-Standard Input Spaces,” in Advances in Neural Information Processing Systems 23, 2010, pp. 406--414.
10. ### 2009
1. I. Steinwart, “Oracle inequalities for support vector machines that are based on random entropy numbers,” J. Complexity, vol. 25, no. 5, pp. 437–454, 2009.
2. I. Steinwart, D. Hush, and C. Scovel, “Learning from dependent observations,” J. Multivariate Anal., vol. 100, pp. 175--194, 2009.
3. I. Steinwart and A. Christmann, “Fast Learning from Non-i.i.d. Observations,” in Advances in Neural Information Processing Systems 22, 2009, pp. 1768--1776.
4. I. Steinwart and A. Christmann, “Sparsity of SVMs that use the $\epsilon$-insensitive loss,” in Advances in Neural Information Processing Systems 21, 2009, pp. 1569--1576.
5. I. Steinwart, “Two oracle inequalities for regularized boosting classifiers,” Stat. Interface, vol. 2, pp. 271--284, 2009.
6. I. Steinwart, “Oracle inequalities for SVMs that are Based on Random Entropy Numbers,” J. Complexity, vol. 25, pp. 437--454, 2009.
7. I. Steinwart and M. Anghel, “An SVM approach for forecasting the evolution of an unknown ergodic dynamical system from observations with unknown noise,” Ann. Statist., vol. 37, pp. 841--875, 2009.
8. I. Steinwart, D. Hush, and C. Scovel, “Optimal Rates for Regularized Least Squares Regression,” in Proceedings of the 22nd Annual Conference on Learning Theory, 2009, pp. 79--93.
9. A. Christmann, A. van Messem, and I. Steinwart, “On consistency and robustness properties of support vector machines for heavy-tailed distributions,” Stat. Interface, vol. 2, pp. 311--327, 2009.
11. ### 2008
1. I. Steinwart and A. Christmann, “Sparsity of SVMs that use the epsilon-insensitive loss.,” in NIPS, 2008, pp. 1569–1576.
2. I. Steinwart and A. Christmann, “How SVMs can estimate quantiles and the median,” in Advances in Neural Information Processing Systems 20, Cambridge, MA, 2008, pp. 305--312.
3. I. Steinwart and A. Christmann, Support Vector Machines. New York: Springer, 2008.
4. A. Christmann and I. Steinwart, “Consistency of kernel based quantile regression,” Appl. Stoch. Models Bus. Ind., vol. 24, pp. 171--183, 2008.
12. ### 2007
1. I. Steinwart, D. Hush, and C. Scovel, “An Oracle Inequality for Clipped Regularized Risk Minimizers,” in Advances in Neural Information Processing Systems 19, Cambridge, MA, 2007, pp. 1321--1328.
2. I. Steinwart and C. Scovel, “Fast rates for support vector machines using Gaussian kernels,” Ann. Statist., vol. 35, pp. 575--607, 2007.
3. I. Steinwart, “How to compare different loss functions,” Constr. Approx., vol. 26, pp. 225--287, 2007.
4. C. Scovel, D. Hush, and I. Steinwart, “Approximate duality,” J. Optim. Theory Appl., vol. 135, pp. 429--443, 2007.
5. N. List, D. Hush, C. Scovel, and I. Steinwart, “Gaps in support vector optimization,” in Proceedings of the 20th Conference on Learning Theory, New York, 2007, pp. 336--348.
6. D. Hush, C. Scovel, and I. Steinwart, “Stability of unstable learning algorithms,” Mach. Learn., vol. 67, pp. 197--206, 2007.
7. A. Christmann and I. Steinwart, “How SVMs can estimate quantiles and the median.,” in NIPS, 2007, pp. 305–312.
8. A. Christmann, I. Steinwart, and M. Hubert, “Robust learning from bites for data mining,” Comput. Statist. Data Anal., vol. 52, pp. 347--361, 2007.
9. A. Christmann and I. Steinwart, “Consistency and robustness of kernel-based regression in convex risk minimization,” Bernoulli, vol. 13, pp. 799--819, 2007.
13. ### 2006
1. I. Steinwart, D. R. Hush, and C. Scovel, “An Oracle Inequality for Clipped Regularized Risk Minimizers.,” in NIPS, 2006, pp. 1321–1328.
2. I. Steinwart, D. Hush, and C. Scovel, “A new concentration result for regularized risk minimizers,” in High Dimensional Probability IV, Beachwood, OH, 2006, pp. 260--275.
3. I. Steinwart, D. Hush, and C. Scovel, “Function classes that approximate the Bayes risk,” in Proceedings of the 19th Annual Conference on Learning Theory, New York, 2006, pp. 79--93.
4. I. Steinwart, D. Hush, and C. Scovel, “An explicit Description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels,” IEEE Trans. Inform. Theory, vol. 52, pp. 4635--4643, 2006.
5. D. Hush, P. Kelly, C. Scovel, and I. Steinwart, “QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines,” J. Mach. Learn. Res., vol. 7, pp. 733--769, 2006.
14. ### 2005
1. I. Steinwart, “Consistency of support vector machines and other regularized kernel classifiers.,” IEEE Trans. Information Theory, vol. 51, no. 1, pp. 128–142, 2005.
2. I. Steinwart, D. Hush, and C. Scovel, “Density Level Detection is Classification,” in Advances in Neural Information Processing Systems 17, Cambridge, MA, 2005, pp. 1337--1344.
3. I. Steinwart, D. Hush, and C. Scovel, “A classification framework for anomaly detection,” J. Mach. Learn. Res., vol. 6, pp. 211--232, 2005.
4. I. Steinwart and C. Scovel, “Fast Rates for Support Vector Machines,” in Proceedings of the 18th Annual Conference on Learning Theory, New York, 2005, pp. 279--294.
5. I. Steinwart, “Consistency of support vector machines and other regularized kernel machines,” IEEE Trans. Inform. Theory, vol. 51, pp. 128--142, 2005.
6. C. Scovel, D. Hush, and I. Steinwart, “Learning Rates for Density Level Detection,” Anal. Appl., vol. 3, pp. 356--371, 2005.
7. D. M. Patterson and L. Steinwart, “Classroom technology: assisting faculty in finding weapons of mass instruction.,” in SIGUCCS, 2005, pp. 310–311.
8. D. Hush, P. Kelly, C. Scovel, and I. Steinwart, “Provably fast algorithms for anomaly detection,” in International Workshop on Data Mining Methods for Anomaly Detection at KDD 2005, 2005, pp. 27--31.
15. ### 2004
1. I. Steinwart, D. R. Hush, and C. Scovel, “Density Level Detection is Classification.,” in NIPS, 2004, pp. 1337–1344.
2. I. Steinwart and C. Scovel, “Fast Rates to Bayes for Kernel Machines.,” in NIPS, 2004, pp. 1345–1352.
3. I. Steinwart, “Entropy of convex hulls---some Lorentz norm results,” J. Approx. Theory, vol. 128, pp. 42--52, 2004.
4. I. Steinwart and C. Scovel, “When do support vector machines learn fast?,” in 16th International Symposium on Mathematical Theory of Networks and Systems, 2004.
5. I. Steinwart, “Sparseness of support vector machines---some asymptotically sharp bounds,” in Advances in Neural Information Processing Systems 16, Cambridge, MA, 2004, pp. 1069--1076.
6. A. Christmann and I. Steinwart, “On robustness properties of convex risk minimization methods for pattern recognition,” J. Mach. Learn. Res., vol. 5, pp. 1007--1034, 2004.
16. ### 2003
1. I. Steinwart, “On the Optimal Parameter Choice for v-Support Vector Machines.,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1274–1284, 2003.
2. I. Steinwart, “Sparseness of Support Vector Machines---Some Asymptotically Sharp Bounds.,” in NIPS, 2003, pp. 1069–1076.
3. I. Steinwart, “On the optimal parameter choice for $\nu$-support vector machines,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1274--1284, 2003.
4. I. Steinwart, “Entropy numbers of convex hulls and an application to learning algorithms,” Arch. Math., vol. 80, pp. 310--318, 2003.
5. I. Steinwart, “Sparseness of support vector machines,” J. Mach. Learn. Res., vol. 4, pp. 1071--1105, 2003.
6. K. Mittmann and I. Steinwart, “On the existence of continuous modifications of vector-valued random fields,” Georgian Math. J., vol. 10, pp. 311--317, 2003.
17. ### 2002
1. I. Steinwart, “Support vector machines are universally consistent,” J. Complexity, vol. 18, pp. 768--791, 2002.
2. I. Steinwart, “Which data--dependent bounds are suitable for SVM’s?,” 2002.
3. J. Creutzig and I. Steinwart, “Metric entropy of convex hulls in type $p$ spaces---the critical case,” Proc. Amer. Math. Soc., vol. 130, pp. 733--743, 2002.
18. ### 2001
1. I. Steinwart, “On the influence of the kernel on the consistency of support vector machines,” J. Mach. Learn. Res., vol. 2, pp. 67--93, 2001.
19. ### 2000
1. J. M. Yohe and M. C. Steinwart, “Faculty Response to Classroom Use of E-Technology.,” in SIGUCCS, 2000, pp. 326–329.
2. I. Steinwart, “Entropy of $C(K)$-valued operators,” J. Approx. Theory, vol. 103, pp. 302--328, 2000.
3. I. Steinwart, “Entropy of $C(K)$-valued operators and some applications,” PhD dissertation, Friedrich-Schiller Universität Jena, Fakultät für Mathematik und Informatik, 2000.
20. ### 1997
1. M. C. Steinwart, “Nightmare on Elm Street to It’s a Wonderful Life: a port-per-pillow networking experience at Valparaiso University.,” in SIGUCCS, 1997, pp. 291–294.
21. ### 1996
1. J. E. Hicks and M. C. Steinwart, “The Web as a campus wide information system: tunnel of love or house of mirrors.,” in SIGUCCS, 1996, pp. 67–68.
#### Education
02/2000 Doctorate (Dr. rer. nat.) in Mathematics, Friedrich-Schiller-University, Jena 03/1997 Diploma in Mathematics, Carl-von-Ossietzky University, Oldenburg
#### Appointments
07/2017 — Faculty Member, International Max Planck Research School for Intelligent Systems, Stuttgart/Tübingen 04/2010 — Full Professor, Institute for Stochastics and Applications, Department of Mathematics, University of Stuttgart 01/2010 — 06/2011 Associate Adjunct Professor, Jack Baskin School of Engineering, Department of Computer Science, University of California, Santa Cruz 07/2008 — 04/2010 Scientist Level 4, CCS-3, Los Alamos National Laboratory 03/2003 — 04/2010 Technical Staff Member, CCS-3, Los Alamos National Laboratory 03/2002 — 09/2002 Visiting Scientist, Johannes-Gutenberg University, Mainz 03/2000 — 03/2003 Scientific Staff Member, Friedrich-Schiller-University, Jena 04/1997 — 02/2000 Stipendiary, DFG graduate college “Analytic and Stochastic Structures and Systems”, Friedrich-Schiller-University, Jena
10/2010 — Member of the Senate Committee for Organisation, University Stuttgart 04/2011 — 10/2012 Vice Dean for Mathematics, Faculty of Mathematics and Physics, University of Stuttgart
#### Editorial Services
01/2013 — Associate Editor, Journal of Complexity 12/2008 — Action Editor (Associate Editor), Journal of Machine Learning Research 01/2010 — 12/2012 Associate Editor, Annals of Statistics
#### Program Responsibilities at Conferences
Chair COLT 2013 Program Committee NIPS 2008, 2011 Program Committee COLT 2006, 2008, 2009, 2011, 2012, 2015
## 2018
M. Farooq and I. Steinwart, Learning rates for kernel-based expectile regression, Mach. Learn., 2018. [ final | preprint.pdf ]
H. Hang, I. Steinwart, Y. Feng, and J. Suykens, Kernel density estimation for dynamical systems, J. Mach. Learn. Res., vol. 19, pp. 1-49, 2018. [ final | preprint.pdf ]
I. Steinwart, Convergence types and rates in generic Karhunen-Loève expansions with applications to sample path properties, Potential Anal., vol. -, pp. -, 2018. [ final | preprint.pdf ]
I. Blaschzyk and I. Steinwart, Improved classification rates under refined margin conditions, Electron. J. Stat., vol. 12, pp. 793-823, 2018. [ final | preprint.pdf ]
# liquidCluster: A Fast and Automated Density-Based Clustering Package
LiquidCluster estimates the cluster tree with the help of some density estimators. Key features are its:
• automated hyper-parameter selection procedure
• speed
The currently available command line version is very similar to the interface of liquidSVM.
On Linux and Mac on the terminal liquidCluster can be used in the following way:
wget www.isa.uni-stuttgart.de/software/liquidCluster.tar.gz
tar xzf liquidCluster.tar.gz
cd liquidCluster
make cluster
cd scripts
./cluster.sh bananas-5-2d
Results are written to the ./results folder.
# liquidSVM: A Fast and Versatile SVM Package
## News
### February 23rd 2017: version 1.2
• The package has been renamed to liquidSVM (formerly simons-svm).
• There is now a paper with many benchmarks on the arXiv.
• The R-package has reached version 1.0.0.
### June 8th 2016: version 1.1.
Changes include:
• Large datasets (tested up to 10 millions of samples) can be trained and tested.
• A new recursive algorithm expedites the partitioning of data.
• Many bugs were fixed.
## General Information
Support vector machines (SVMs) and related kernel-based learning algorithms are a well-known class of machine learning algorithms, for non-parametric classification and regression. liquidSVM is an implementation of SVMs whose key features are:
• fully integrated hyper-parameter selection,
• extreme speed on both small and large data sets,
• Bindings for RPythonMATLAB / OctaveJava, and Spark,
• inclusion of a variety of different learning scenarios:
• least-squares, quantile, and expectile regression
• multi-class classification, ROC, and Neyman-Pearson learning
• full flexibility for experts.
## Command Line interface
Installation instructions for the command line versions.
Terminal version for Linux/OS X liquidSVM.tar.gz Terminal version for Windows (64bit) avx2: liquidSVM.zip avx: liquidSVM.zip sse2: liquidSVM.zip Previous versions v1.1 (June 2016), v1.0 (January 2016)
On Linux and Mac on the terminal liquidSVM can be used in the following way:
wget www.isa.uni-stuttgart.de/software/liquidSVM.tar.gz
tar xzf liquidSVM.tar.gz
cd liquidSVM
make all
scripts/mc-svm.sh banana-mc 1 2
## R
Read the demo vignette for a tutorial on installing liquidSVM-package and how to use it and the documentation vignette for more advanced installation options and usage.
An easy usage is:
install.packages("liquidSVM")
library(liquidSVM)
banana <- liquidData('banana-mc')
model <- mcSVM( Y~. , banana$train, display=1, threads=2) result <- test(model, banana$test)
errors(result)
## Extra Datasets for the Demo
covertype data set with 35.090 training and 34.910 test samples
covertype data set with 522.909 training and 58.103 test samples
Both datasets were compiled from LIBSVM's version of the covertype dataset, which in turn was taken from the UCI repository and preprocessed as in [RC02a]. Copyright for this dataset is by Jock A. Blackard and Colorado State University.
## Citation
If you use liquidSVM, please cite it as:
I. Steinwart and P. Thomann. liquidSVM: A fast and versatile SVM package. ArXiv e-prints 1702.06899, February 2017.
|
2019-01-21 08:37:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.470053493976593, "perplexity": 12634.902544731793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763839.28/warc/CC-MAIN-20190121070334-20190121092334-00381.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-volume-of-the-solid-obtained-by-rotating-the-region-bounded--36
|
# How do you find the volume of the solid obtained by rotating the region bounded by: y=sqrt(x-1), y=0, x=5 rotated about y=7?
Dec 16, 2017
#### Explanation:
I would use shells. Here is a picture of the region with a slice taken parallel to the axis of rotation.
The slice is taken at a value of $y$, so we need to rewrite the curve $y = \sqrt{x - 1}$ as $x = {y}^{2} + 1$
The thickness of the slice and the shell is $\mathrm{dy}$
The radius is $r = 7 - y$
The height is $h = 5 - \left({y}^{2} + 1\right) = 4 - {y}^{2}$
The shell has volume $2 \pi r h \times \text{thickness} = 2 \pi \left(7 - y\right) \left(4 - {y}^{2}\right) \mathrm{dy}$
$y$ varies from $0$ to $2$ (That is the $y$ value on the curve at $x = 5$.)
The resulting solid has volume:
$V = 2 \pi {\int}_{0}^{2} \left(7 - y\right) \left(4 - {y}^{2}\right) \mathrm{dy}$
$= 2 \pi {\int}_{0}^{2} \left(28 - 4 y - 7 {y}^{2} + {y}^{3}\right) \mathrm{dy}$
$= 2 \pi {\left[28 y - 2 {y}^{2} - \frac{7}{3} {y}^{3} + {y}^{4} / 4\right]}_{0}^{2}$
$= \frac{200 \pi}{3}$
If you prefer washers
Then the thickness is $\mathrm{dx}$,
the greater radius is $7$, and
the lesser radius is $\sqrt{x - 1}$.
$x$ varies from $1$ to $5$, so the volume of the solid is
$V = \pi {\int}_{1}^{5} \left({7}^{2} - {\left(7 - \sqrt{x - 1}\right)}^{2}\right) \mathrm{dx}$
$= \pi {\int}_{1}^{5} \left(14 \sqrt{x - 1} - \left(x - 1\right)\right) \mathrm{dx}$
To integrate I would substitute $u = x - 1$ to get
$\pi {\int}_{0}^{4} \left(14 {u}^{\frac{1}{2}} - u\right) \mathrm{du}$
Dec 16, 2017
See below.
#### Explanation:
Looking at the graph, we can see that the area A rotated around $y = 7$ is the volume we seek.
We can find this volume by finding the volume of the area (A+B). This is easy and doesn’t require integration. The radius of the cylinder that will be formed has a radius of 7, this is just the height from the x axis. We square this and multiply by $\pi$ x the length of the interval $\left[1 , 5\right]$
Length of interval is $5 - 1 = 4$
$\therefore$
Volume (A + B ):
$V = {7}^{2} \pi \cdot 196 \pi$
From this we need to subtract the volume of B. From the graph we can see that the radius is $7 - \sqrt{x - 1}$. This will have to be found by integration in the normal way.
Volume of B:
${\left(7 - \sqrt{x - 1}\right)}^{2} = 48 - 14 \sqrt{x - 1} + x$
$V = \pi \cdot {\int}_{1}^{5} \left(48 - 14 \sqrt{x - 1} + x\right) \mathrm{dx} = 48 x + \frac{1}{2} {x}^{2} - \frac{28}{3} {\left(x - 1\right)}^{\frac{3}{2}}$
Area first:
${\left[48 x + \frac{1}{2} {x}^{2} - \frac{28}{3} {\left(x - 1\right)}^{\frac{3}{2}}\right]}^{5} - {\left[48 x + \frac{1}{2} {x}^{2} - \frac{28}{3} {\left(x - 1\right)}^{\frac{3}{2}}\right]}_{1}$
..................................................................................................................................
${\left[48 \left(5\right) + \frac{1}{2} {\left(5\right)}^{2} - \frac{28}{3} {\left(\left(5\right) - 1\right)}^{\frac{3}{2}}\right]}^{5} - {\left[48 \left(1\right) + \frac{1}{2} {\left(1\right)}^{2} - \frac{28}{3} {\left(\left(1\right) - 1\right)}^{\frac{3}{2}}\right]}_{1}$
$\pi \cdot \left[\frac{505}{2} - \frac{112}{3} \sqrt{4}\right] - \left[\frac{97}{2}\right] = \frac{388}{3} \pi$
Volume= $\frac{388}{3} \pi$
Volume of A:
$196 \pi - \frac{388}{3} \pi = \frac{200}{3} \pi = 66.667 \pi$
Volume of revolution:
|
2022-08-13 19:24:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 40, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9399759769439697, "perplexity": 293.67797183883533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00698.warc.gz"}
|
https://cstheory.stackexchange.com/questions/21376/covering-by-disjoint-sets
|
# Covering by disjoint sets
We are given a universe $\mathcal{U}=\{e_1,..,e_n\}$ and a set of subsets $\mathcal{S}=\{s_1,s_2,...,s_m\}\subseteq 2^\mathcal{U}$.
I'm interested in the approximability of two problems, or in general, what is known about them.
1. Given a number $k'\in[m]$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k'$ such that all of the sets in $\mathcal{S'}$ are disjoint.
2. Given a number $k''\in[n]$, is there a set $\mathcal{S''} \subseteq \mathcal{S}$ such that $|\cup_{s\in\mathcal{S''}}s|\geq k''$ (i.e. it covers at least $k$ elements) and the sets in $\mathcal{S''}$ are disjoint.
• The problems are NP-complete: The first has a straight forward reduction from 3-dimensional matching, and the second answers exact cover directly.
• The first problem seems $APX-hard$ if we use the hardness results from k-dimensional matching.
• Both problems can be viewed as a special case of Independent Set over the graph whose vertices are $\mathcal{S}$ and there's an edge $(s_i,s_j)$ iff $s_i\cap s_j$ isn't empty ( (2) has vertices weights $w(s_i)=|s_i|$), but I don't see any easy reduction from $IS$ to either that doesn't require exponential size blowup.
What can we say about the approximability of the two problems? maximal k-dimensional matching is known to be approximable within a factor of $\frac{k}{2}$, does it have an analogue for the first problem?
Both of these problems seems natural, so I'm tagging this question as a reference request, assuming they have been looked at under different name, rings a bell to anyone?
• There is an easy reduction from IS to Set Packing. For each vertex $v$ create a set $S_v$ which contains all the edges incident to $v$ as its elements. – Chandra Chekuri Mar 5 '14 at 21:44
1. Your first problem is more or less in-approximable. It contains "Independent Set" as a special case. For a graph $G=(V,E)$, define your ground set as $U:=V$ and for every vertex $v\in V$ construct a corresponding subset $S_v$ in your set system that contains the (closed) neighborhood of $v$. Then finding a cardinality-$k$ independent set in $G$ is equivalent to finding a cardinality-$k$ sub-system of disjoint sets in your set system. "Independent Set" cannot be approximated in polynomial time within a factor of $|V|^{1-\varepsilon}$ for any $\varepsilon>0$, unless P=NP.
Problem 1 is known as SET PACKING. Like other packing problems, it's annoyingly hard. The best known bound is a $O(\sqrt{|S|})$ approximation and it is indeed APX-hard.
|
2021-04-21 07:44:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8830781579017639, "perplexity": 223.6030967646708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00534.warc.gz"}
|
http://www.scientificlib.com/en/Mathematics/LX/DotProduct.html
|
### - Art Gallery -
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number obtained by multiplying corresponding entries and then summing those products. The name "dot product" is derived from the centered dot " $$\cdot$$ " that is often used to designate this operation; the alternative name "scalar product" emphasizes the scalar (rather than vector) nature of the result.
When two Euclidean vectors are expressed in terms of coordinate vectors on an orthonormal basis, the inner product of the former is equal to the dot product of the latter. For more general vector spaces, while both the inner and the dot product can be defined in different contexts (for instance with complex numbers as scalars) their definitions in these contexts may not coincide.
In three dimensional space, the dot product contrasts with the cross product, which produces a vector as result. The dot product is directly related to the cosine of the angle between two vectors in Euclidean space of any number of dimensions.
Definition
The dot product of two vectors a = [a1, a2, ..., an] and b = [b1, b2, ..., bn] is defined as:
$$\mathbf{a}\cdot \mathbf{b} = \sum_{i=1}^n a_ib_i = a_1b_1 + a_2b_2 + \cdots + a_nb_n$$
where Σ denotes summation notation and n is the dimension of the vector space.
In dimension 2, the dot product of vectors [a,b] and [c,d] is ac + bd. Similarly, in a dimension 3, the dot product of vectors [a,b,c] and [d,e,f] is ad + be + cf. For example, the dot product of two three-dimensional vectors [1, 3, −5] and [4, −2, −1] is
$$[1, 3, -5] \cdot [4, -2, -1] = (1)(4) + (3)(-2) + (-5)(-1) = 4 - 6 + 5 = 3.$$
Given two column vectors, their dot product can also be obtained by multiplying the transpose of one vector with the other vector and extracting the unique coefficient of the resulting 1 × 1 matrix. The operation of extracting the coefficient of such a matrix can be written as taking its determinant or its trace (which is the same thing for 1 × 1 matrices); since in general tr(AB) = tr(BA) whenever AB or equivalently BA is a square matrix, one may write
$$\mathbf{a} \cdot \mathbf{b} = \det( \mathbf{a}^{\mathrm{T}}\mathbf{b} ) = \det( \mathbf{b}^{\mathrm{T}}\mathbf{a} ) = \mathrm{tr} ( \mathbf{a}^{\mathrm{T}}\mathbf{b} ) = \mathrm{tr} ( \mathbf{b}^{\mathrm{T}}\mathbf{a} ) = \mathrm{tr} ( \mathbf{a}\mathbf{b}^{\mathrm{T}} ) = \mathrm{tr} ( \mathbf{b}\mathbf{a}^{\mathrm{T}} )$$
More generally the coefficient (i,j) of a product of matrices is the dot product of the transpose of row i of the first matrix and column j of the second matrix.
Geometric interpretation
$$\mathbf{A}_B = \left\|\mathbf{A}\right\| \cos\theta$$ is the scalar projection of $$\mathbf{A}$$ onto $$\mathbf{B}.$$
Since $$\mathbf{A} \cdot \mathbf{B} = \left\|\mathbf{A}\right\| \left\|\mathbf{B}\right\| \cos\theta, then \mathbf{A}_B = \frac{\mathbf{A} \cdot \mathbf{B}}{\left\|\mathbf{B}\right\|}.$$
In Euclidean geometry, the dot product of vectors expressed in an orthonormal basis is related to their length and angle. For such a vector $$\mathbf{a}$$, the dot product $$\mathbf{a}\cdot\mathbf{a}$$ is the square of the length (magnitude) of $$\mathbf{a}$$ , denoted by $$\left\|\mathbf{a}\right\|$$:
$${\mathbf{a} \cdot \mathbf{a}}=\left\|\mathbf{a}\right\|^2$$
If \mathbf{b} is another such vector, and \theta is the angle between them:
$$\mathbf{a} \cdot \mathbf{b}=\left\|\mathbf{a}\right\| \, \left\|\mathbf{b}\right\| \cos \theta \,$$
This formula can be rearranged to determine the size of the angle between two nonzero vectors:
$$\theta=\arccos \left( \frac {\bold{a}\cdot\bold{b}} {\left\|\bold{a}\right\|\left\|\bold{b}\right\|}\right).$$
The Cauchy–Schwarz inequality guarantees that the argument of $$\arccos$$ is valid.
One can also first convert the vectors to unit vectors by dividing by their magnitude:
$$\boldsymbol{\hat{a}} = \frac{\bold{a}}{\left\|\bold{a}\right\|}$$
then the angle \theta is given by
$$\theta = \arccos ( \boldsymbol{\hat a}\cdot\boldsymbol{\hat b}).$$
The terminal points of both unit vectors lie on the unit circle. The unit circle is where the trigonometric values for the six trig functions are found. After substitution, the first vector component is cosine and the second vector component is sine, i.e. $$(\cos x, \sin x)$$ for some angle x. The dot product of the two unit vectors then takes (\cos x, \sin x) and (\cos y, \sin y) \) for angles x and y and returns $$\cos x \, \cos y + \sin x \, \sin y = \cos(x - y)$$ where $$x - y = \theta.$$
As the cosine of 90° is zero, the dot product of two orthogonal vectors is always zero. Moreover, two vectors can be considered orthogonal if and only if their dot product is zero, and they have non-null length. This property provides a simple method to test the condition of orthogonality.
Sometimes these properties are also used for "defining" the dot product, especially in 2 and 3 dimensions; this definition is equivalent to the above one. For higher dimensions the formula can be used to define the concept of angle.
The geometric properties rely on the basis being orthonormal, i.e. composed of pairwise perpendicular vectors with unit length.
Scalar projection
If both $$\mathbf{a}$$ and $$\mathbf{b}$$ have length one (i.e., they are unit vectors), their dot product simply gives the cosine of the angle between them.
If only $$\mathbf{b}$$ is a unit vector, then the dot product $$\mathbf{a}\cdot\mathbf{b}$$ gives $$\left\|\mathbf{a}\right\|\cos\theta$$ , i.e., the magnitude of the projection of $$\mathbf{a}$$ in the direction of $$\mathbf{b}$$ , with a minus sign if the direction is opposite. This is called the scalar projection of $$\mathbf{a}$$ onto $$\mathbf{b}$$ , or scalar component of $$\mathbf{a}$$ in the direction of $$\mathbf{b}$$ (see figure). This property of the dot product has several useful applications (for instance, see next section).
If neither $$\mathbf{a} nor \( \mathbf{b} is a unit vector, then the magnitude of the projection of \mathbf{a} in the direction of \( \mathbf{b} is \( \mathbf{a}\cdot\frac{\mathbf{b}}{\left\|\mathbf{b}\right\|}$$ , as the unit vector in the direction of $$\mathbf{b} is \frac{\mathbf{b}}{\left\|\mathbf{b}\right\|}.$$
Rotation
When an orthonormal basis that the vector \mathbf{a} is represented in terms of is rotated, \mathbf{a}'s matrix in the new basis is obtained through multiplying \mathbf{a} by a rotation matrix \mathbf{R}. This matrix multiplication is just a compact representation of a sequence of dot products.
For instance, let
$$\mathbf{B}_1 = \{\mathbf{x}, \mathbf{y}, \mathbf{z}\} and \mathbf{B}_2 = \{\mathbf{u}, \mathbf{v}, \mathbf{w}\}$$ be two different orthonormal bases of the same space $$\mathbb{R}^3, with \mathbf{B}_2$$ obtained by just rotating $$\mathbf{B}_1,$$
$$\mathbf{a}_1 = (a_x, a_y, a_z)$$ represent vector \mathbf{a} in terms of \mathbf{B}_1,
$$\mathbf{a}_2 = (a_u, a_v, a_w)$$ represent the same vector in terms of the rotated basis $$\mathbf{B}_2,$$
$$\mathbf{u}_1, \mathbf{v}_1, \mathbf{w}_1,$$ be the rotated basis vectors $$\mathbf{u}, \mathbf{v}, \mathbf{w} represented in terms of \mathbf{B}_1.$$
Then the rotation from $$\mathbf{B}_1 to \mathbf{B}_2$$ is performed as follows:
$$\bold a_2 = \bold{Ra}_1 = \begin{bmatrix} u_x & u_y & u_z \\ v_x & v_y & v_z \\ w_x & w_y & w_z \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = \begin{bmatrix} \bold u_1\cdot\bold a_1 \\ \bold v_1\cdot\bold a_1 \\ \bold w_1\cdot\bold a_1 \end{bmatrix} = \begin{bmatrix} a_u \\ a_v \\ a_w \end{bmatrix} .$$
Notice that the rotation matrix $$\mathbf{R}$$ is assembled by using the rotated basis vectors $$\mathbf{u}_1, \mathbf{v}_1, \mathbf{w}_1$$ as its rows, and these vectors are unit vectors. By definition, $$\mathbf{Ra}_1$$ consists of a sequence of dot products between each of the three rows of $$\mathbf{R}$$ and vector $$\mathbf{a}_1$$ . Each of these dot products determines a scalar component of $$\mathbf{a}$$ in the direction of a rotated basis vector (see previous section).
If $$\mathbf{a}_1 is a row vector, rather than a column vector, then \( \mathbf{R}$$ must contain the rotated basis vectors in its columns, and must post-multiply $$\mathbf{a}_1$$ :
$$\bold a_2 = \bold a_1 \bold R = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \begin{bmatrix} u_x & v_x & w_x \\ u_y & v_y & w_y \\ u_z & v_z & w_z \end{bmatrix} = \begin{bmatrix} \bold u_1\cdot\bold a_1 & \bold v_1\cdot\bold a_1 & \bold w_1\cdot\bold a_1 \end{bmatrix} = \begin{bmatrix} a_u & a_v & a_w \end{bmatrix} .$$
Physics
In physics, vector magnitude is a scalar in the physical sense, i.e. a physical quantity independent of the coordinate system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. Example:
Mechanical work is the dot product of force and displacement vectors.
Magnetic flux is the dot product of the magnetic field and the area vectors.
Properties
The following properties hold if a, b, and c are real vectors and r is a scalar.
The dot product is commutative:
$$\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a}.$$
The dot product is distributive over vector addition:
$$\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}.$$
The dot product is bilinear:
$$\mathbf{a} \cdot (r\mathbf{b} + \mathbf{c}) = r(\mathbf{a} \cdot \mathbf{b}) +(\mathbf{a} \cdot \mathbf{c}).$$
When multiplied by a scalar value, dot product satisfies:
$$(c_1\mathbf{a}) \cdot (c_2\mathbf{b}) = (c_1c_2) (\mathbf{a} \cdot \mathbf{b})$$
(these last two properties follow from the first two).
Two non-zero vectors a and b are orthogonal if and only if a • b = 0.
Unlike multiplication of ordinary numbers, where if ab = ac, then b always equals c unless a is zero, the dot product does not obey the cancellation law:
If a • b = a • c and a ≠ 0, then we can write: a • (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore b ≠ c.
Provided that the basis is orthonormal, the dot product is invariant under isometric changes of the basis: rotations, reflections, and combinations, keeping the origin fixed. The above mentioned geometric interpretation relies on this property. In other words, for an orthonormal space with any number of dimensions, the dot product is invariant under a coordinate transformation based on an orthogonal matrix. This corresponds to the following two conditions:
The new basis is again orthonormal (i.e., it is orthonormal expressed in the old one).
The new base vectors have the same length as the old ones (i.e., unit length in terms of the old basis).
If a and b are functions, then the derivative of a • b is a' • b + a • b'
Triple product expansion
Main article: Triple product
This is a very useful identity (also known as Lagrange's formula) involving the dot- and cross-products. It is written as
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$$
which is easier to remember as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in physics.
Proof of the geometric interpretation
Consider the element of Rn
$$\mathbf{v} = v_1 \mathbf{\hat{e}}_1 + v_2 \mathbf{\hat{e}}_2 + ... + v_n \mathbf{\hat{e}}_n. \,$$
Repeated application of the Pythagorean theorem yields for its length |v|
$$|\mathbf{v}|^2 = v_1^2 + v_2^2 + ... + v_n^2. \,$$
But this is the same as
$$\mathbf{v} \cdot \mathbf{v} = v_1^2 + v_2^2 + ... + v_n^2, \,$$
so we conclude that taking the dot product of a vector v with itself yields the squared length of the vector.
Lemma 1
$$\mathbf{v} \cdot \mathbf{v} = |\mathbf{v}|^2. \,$$
Now consider two vectors a and b extending from the origin, separated by an angle θ. A third vector c may be defined as
$$\mathbf{c} \ \stackrel{\mathrm{def}}{=}\ \mathbf{a} - \mathbf{b}. \,$$
creating a triangle with sides a, b, and c. According to the law of cosines, we have
$$|\mathbf{c}|^2 = |\mathbf{a}|^2 + |\mathbf{b}|^2 - 2 |\mathbf{a}||\mathbf{b}| \cos \theta. \,$$
Substituting dot products for the squared lengths according to Lemma 1, we get
$$\mathbf{c} \cdot \mathbf{c} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2 |\mathbf{a}||\mathbf{b}| \cos\theta. \,$$ (1)
But as c ≡ a − b, we also have
$$\mathbf{c} \cdot \mathbf{c} = (\mathbf{a} - \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}) \,,$$
which, according to the distributive law, expands to
\mathbf{c} \cdot \mathbf{c} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} -2(\mathbf{a} \cdot \mathbf{b}). \, \) (2)
Merging the two c • c equations, (1) and (2), we obtain
$$\mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} -2(\mathbf{a} \cdot \mathbf{b}) = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2 |\mathbf{a}||\mathbf{b}| \cos\theta. \,$$
Subtracting a • a + b • b from both sides and dividing by −2 leaves
$$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}||\mathbf{b}| \cos\theta. \,$$
Q.E.D.
Generalization
Real vector spaces
The inner product generalizes the dot product to abstract vector spaces over the real numbers and is usually denoted by \langle\mathbf{a}\, , \mathbf{b}\rangle. More precisely, if V is a vector space over \mathbb{R}, the inner product is a function V\times V \rightarrow \mathbb{R}. Owing to the geometric interpretation of the dot product, the norm ||a|| of a vector a in such an inner product space is defined as
$$\|\mathbf{a}\| = \sqrt{\langle\mathbf{a}\, , \mathbf{a}\rangle}$$
such that it generalizes length, and the angle θ between two vectors a and b by
$$\cos{\theta} = \frac{\langle\mathbf{a}\, , \mathbf{b}\rangle}{\|\mathbf{a}\| \, \|\mathbf{b}\|}.$$
In particular, two vectors are considered orthogonal if their inner product is zero
$$\langle\mathbf{a}\, , \mathbf{b}\rangle = 0.$$
Complex vectors
For vectors with complex entries, using the given definition of the dot product would lead to quite different geometric properties. For instance the dot product of a vector with itself can be an arbitrary complex number, and can be zero without the vector being the zero vector; this in turn would have severe consequences for notions like length and angle. Many geometric properties can be salvaged, at the cost of giving up the symmetric and bilinear properties of the scalar product, by alternatively defining
$$\mathbf{a}\cdot \mathbf{b} = \sum{a_i \overline{b_i}}$$
where bi is the complex conjugate of bi. Then the scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However this scalar product is not linear in b (but rather conjugate linear), and the scalar product is not symmetric either, since
$$\mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}}.$$
The angle between two complex vectors is then given by
$$\cos\theta = \frac{\operatorname{Re}(\mathbf{a}\cdot\mathbf{b})}{\|\mathbf{a}\|\,\|\mathbf{b}\|}.$$
This type of scalar product is nevertheless quite useful, and leads to the notions of Hermitian form and of general inner product spaces.
The Frobenius inner product generalizes the dot product to matrices. It is defined as the sum of the products of the corresponding components of two matrices having the same size.
Generalization to tensors
The dot product between a tensor of order n and a tensor of order m is a tensor of order n+m-2. The dot product is calculated by multiplying and summing across a single index in both tensors. If \mathbf{A} and $$\mathbf{B} are two tensors with element representation \( A_{ij\dots}^{k\ell\dots} and B_{mn\dots}^{p{\dots}i} the elements of the dot product \( \mathbf{A} \cdot \mathbf{B} are given by \( A_{ij\dots}^{k\ell\dots}\cdot B_{mn\dots}^{p{\dots}i} = \sum_{i=1}^n A_{ij\dots}^{k\ell\dots}B_{mn\dots}^{p{\dots}i}.$$
This definition naturally reduces to the standard vector dot product when applied to vectors, and matrix multiplication when applied to matrices .
Occasionally, a double dot product is used to represent multiplying and summing across two indices. The double dot product between two 2nd order tensors is a scalar quantity.
Cauchy–Schwarz inequality
Matrix multiplication
Weisstein, Eric W., "Dot product" from MathWorld.
Explanation of dot product including with complex vectors
"Dot Product" by Bruce Torrence, Wolfram Demonstrations Project, 2007.
|
2021-12-01 23:52:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9624467492103577, "perplexity": 246.8447628022655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00519.warc.gz"}
|