| question_id | title | question_text | question_owner | question_link | answer | source_license |
|---|---|---|---|---|---|---|
5,108,519
|
Seeking other generalisations to the integral $\int_0^{\infty} \frac{\ln \left(x+\frac{1}{x}\right)}{1+x^2}dx$
|
The
integral
$$
\int_0^{\infty} \frac{\ln \left(x+\frac{1}{x}\right)}{1+x^2}\,dx=\pi \ln 2
$$
invites me to investigate the integral
$$
I=\int_0^{\infty} \frac{\ln \left(x+\frac{1}{x}\right)}{x^4+1} d x
$$
First of all, via the inverse substitution
$x\to \frac{1}{x}$
$$
I=\int_0^{\infty} \frac{x^2\ln \left(\frac{1}{x}+x\right)}{x^4+1} d x
$$
Averaging these two versions gives
$$
\begin{aligned}
I & =\frac{1}{2} \int_0^{\infty} \frac{\left(1+x^2\right) \ln \left(x+\frac{1}{x}\right)}{1+x^4} d x \\
& =\frac{1}{2} \int_0^{\infty} \frac{\left(1+\frac{1}{x^2}\right) \ln \left(x+\frac{1}{x}\right)}{x^2+\frac{1}{x^2}} d x \\
& =\frac{1}{4} \int_0^{\infty} \frac{\ln \left[\left(x+\frac{1}{x}\right)^2\right]}{x^2+\frac{1}{x^2}} d\left(x-\frac{1}{x}\right) \\
& =\frac{1}{4} \int_0^{\infty} \frac{\ln \left[\left(x-\frac{1}{x}\right)^2+4\right]}{\left(x-\frac{1}{x}\right)^2+2} d\left(x-\frac{1}{x}\right)
\end{aligned}
$$
The
Glasser Master Theorem
$$
\int_0^{\infty} f\left(x-\frac{1}{x}\right) d x=\frac{1}{2} \int_{-\infty}^{\infty} f(x) d x
$$
rewrites the integral as:
$$
\boxed{I
=\frac{1}{2} \int_0^{\infty} \frac{\ln \left(x^2+4\right)}{x^2+2} d x
=\frac{\pi}{2 \sqrt{2}} \ln (2+\sqrt{2})}
$$
using the answer in the post.
Your comments and other generalisations are highly appreciated.
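As a quick numerical sanity check of the boxed closed form, here is a minimal sketch assuming NumPy/SciPy are available; it is not part of the derivation.

```python
# Numerically verify I = ∫_0^∞ ln(x + 1/x)/(x^4 + 1) dx = π/(2√2) ln(2 + √2).
import numpy as np
from scipy.integrate import quad

lhs, _ = quad(lambda x: np.log(x + 1.0 / x) / (x**4 + 1.0), 0.0, np.inf)
rhs = np.pi / (2.0 * np.sqrt(2.0)) * np.log(2.0 + np.sqrt(2.0))
print(lhs, rhs)  # both ≈ 1.3639, agreeing to quad's default tolerance
```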
|
Lai
|
https://math.stackexchange.com/questions/5108519/seeking-other-generalisations-to-the-integral-int-0-infty-frac-ln-leftx
|
{
"answer_id": 5108524,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "Assuming that\n\n$n$\n\n is a positve integer (\n\n$n \\geq 2$\n\n for the definite integral), for the antiderivative\n\n$$I_n=\\int\\frac{\\log\\left(x+\\frac{1}{x}\\right)}{x^n+1}\\,dx$$\n\n use\n\n$$x^n+1=\\prod_{k=1}^n (x-r_k)$$\n\n and partial fraction decomposition gives\n\n$$I_n=\\sum_{k=1}^n a_k\\,\\int\\frac{\\log \\left(x+\\frac{1}{x}\\right)}{x-r_k}\\,dx$$\n\n$$J_k=\\int\\frac{\\log \\left(x+\\frac{1}{x}\\right)}{x-r_k}\\,dx$$\n\n$$J_k=\\log \\left(r_k+\\frac{1}{r_k}\\right) \\log(x-r_k)+\\text{Li}_2\\left(1-\\frac{x}{r_k}\\right)-\\text{Li}_2\\left(\\frac{r_k-x}{r_k-i}\\right)-\n\n \\text{Li}_2\\left(\\frac{r_k-x}{r_k+i}\\right)$$\n\nRecombine the logarithms and apply the bounds.\n\nIt is tedious but it works.",
"is_accepted": false,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
|
2,631,074
|
Triple integral. Spherical coordinates. Too many calculations
|
I am having trouble with this integral:
$$
\iiint \limits_{S} g(x;y;z)dxdydz\ \label{orig} \tag{1}
$$
where $g(x;y;z) = \frac{xyz}{(a^2 + x^2 + y^2 + z^2)^3}$ and the region is given by the inequalities:
$$
(x^2 + y^2 + z^2)^{3/2} \leqslant 4xy, \\
x \geqslant 0, y \geqslant 0, z\geqslant 0
$$
I know one way to solve this. I used conversion to
spherical coordinate system
and got the following integral:
$$
\int\limits_{0}^{\pi / 2}d\varphi \int\limits_{0}^{\pi/2}d\theta \int\limits_{0}^{\sin2\varphi(1 - \cos2\theta)} \frac{r^5 \sin2\varphi \sin2\theta(1 - \cos2\theta)}{8(a^2 + r^2)^3} dr
$$
This integral is solvable, but there are a lot of calculations in the process. My question is:
Is it possible to solve the original integral $\eqref{orig}$ differently? Is there a more elegant solution? Maybe I'm making some mistakes?
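For reference, a numerical cross-check of the spherical-coordinate reduction above, a sketch assuming SciPy; it compares against the closed form conjectured in the answer below (which itself rests on a numerically conjectured $J_2$), so a disagreement would be informative either way.

```python
# Evaluate the spherical-coordinate triple integral for a = 1 and compare
# with the closed form proposed in the answer below.
import numpy as np
from scipy.integrate import tplquad

a = 1.0

def integrand(r, theta, phi):
    # r^5 sin(2φ) sin(2θ) (1 - cos(2θ)) / (8 (a² + r²)³)
    return (r**5 * np.sin(2*phi) * np.sin(2*theta) * (1 - np.cos(2*theta))
            / (8 * (a**2 + r**2)**3))

# integration order: φ outermost, then θ, then r up to sin(2φ)(1 - cos(2θ))
val, err = tplquad(integrand,
                   0, np.pi/2,    # φ
                   0, np.pi/2,    # θ
                   0, lambda phi, theta: np.sin(2*phi) * (1 - np.cos(2*theta)))

s = np.sqrt(a**2 + 4)
closed = (-9/32 + (3*a**2 + 16) / (64*s) * np.arctanh(2/s)
          + 3*a**2/64 * np.arctanh(2/s)**2)
print(val, closed)  # the two should agree to several digits if J₂ is right
```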
|
puhsu
|
https://math.stackexchange.com/questions/2631074/triple-integral-spherical-coordinates-too-much-calculations
|
{
"answer_id": 5108577,
"answer_link": null,
"answer_owner": "user170231",
"answer_text": "Here is an alternative path to converting to spherical coordinates. This is not (yet) a complete answer, nor is it a more elegant method IMO, but based on the closed form below, I think there is a good chance of there being a much cleaner solution.\n\nDenote the initial domain of integration by\n\n$A$\n\n, then reduce and transform the integral to new ones over\n\n$B$\n\n the region under\n\n$A$\n\n in the plane\n\n$z=0$\n\n (\n\nplot\n\n);\n\n$C$\n\n the rotation of\n\n$B$\n\n clockwise by\n\n$\\pi/4$\n\n rad about the origin (\n\nplot\n\n);\n\n$D$\n\n the region obtained by the change of variables,\n\n$(s,t)=\\left(u^2+v^2,u^2-v^2\\right)$\n\n (\n\nplot\n\n)\n\n(NB: The double integral over\n\n$D$\n\n as shown above needs an additional factor of\n\n$2$\n\n, though I'm not entirely sure why just yet. Symmetry is a likely culprit. This factor is included below.)\n\n$$\\begin{align*}\n\nI(a) &= \\iiint_A \\frac{xyz}{\\left(a^2+x^2+y^2+z^2\\right)^3} \\, dz \\, dy \\, dx \\\\\n\n&= \\iint_B \\frac{xy}4 \\left(\\frac1{\\left(a^2+x^2+y^2\\right)^2} - \\frac1{\\left(a^2+(4xy)^{2/3}\\right)^2}\\right) \\, dy \\, dx \\\\\n\n&= \\iint_C \\frac{u^2-v^2}8 \\left(\\frac1{\\left(a^2+u^2+v^2\\right)^2} - \\frac1{\\left(a^2+2^{2/3}\\left(u^2-v^2\\right)^{2/3}\\right)^2}\\right) \\, dv \\, du \\\\\n\n&= \\iint_D \\frac t{32\\sqrt{s^2-t^2}} \\left(\\frac1{\\left(a^2+s\\right)^2} - \\frac1{\\left(a^2+(2t)^{2/3}\\right)^2}\\right) \\, dt \\, ds \\\\\n\n&= 2 \\int_0^4 \\int_{\\tfrac{s^{3/2}}2}^s \\cdots \\, dt \\, ds \\\\\n\n&= \\frac1{16} \\int_0^4 \\int_\\tfrac{\\sqrt s}2^1 \\frac{st}{\\sqrt{1-t^2}} \\left(\\frac1{\\left(a^2+s\\right)^2} - \\frac1{a^2+(2st)^{2/3}}\\right) \\, dt \\, ds & t\\to st \\\\\n\n&= \\frac12 \\int_0^1 \\int_s^1 \\frac s{\\sqrt{1-t}} \\left(\\frac1{\\left(a^2+4s\\right)^2} - \\frac1{\\left(a^2+4(st)^{2/3}\\right)^2}\\right) \\, dt \\, ds & s\\to4s \\\\\n\n&= \\frac12 \\int_0^1 \\int_0^t \\cdots \\, ds \\, dt & \\text{Fubini} \\\\\n\n&= \\frac12 \\int_0^1 \\int_0^1 \\frac{st^2}{\\sqrt{1-t}} \\left(\\frac1{\\left(a^2+4st\\right)^2} - \\color{red}{\\frac1{\\left(a^2+4s^{2/3}t\\right)^2}}\\right) \\, ds \\, dt & s\\to st \\\\\n\n&= \\frac14 \\int_0^1 \\int_0^1 \\frac{s(2-3s)t^2}{\\sqrt{1-t}\\left(a^2+4st\\right)^2} \\, ds \\, dt & \\color{red}{s\\to s^{3/2}} \\\\\n\n&= \\frac12 \\int_0^1 \\int_0^1 \\frac{s(2-3s)\\left(1-r^2\\right)^2}{\\left(a^2+4\\left(1-r^2\\right)s^2\\right)^2} \\, ds \\, dr & r=\\sqrt{1-t} \\\\\n\n&= -\\frac1{16} \\int_0^1 \\int_0^1 \\left(\\frac32 - \\frac{3a^2+4\\left(1-r^2\\right)}{a^2+4\\left(1-r^2\\right)s} + \\frac{3a^4+8a^2\\left(1-r^2\\right)}{2\\left(a^2+4\\left(1-r^2\\right)s\\right)^2}\\right) \\, ds \\, dr \\\\\n\n&= -\\frac3{32} + \\frac1{64} \\int_0^1 \\left(4+\\frac{3a^2}{1-r^2}\\right) \\log\\frac{a^2+4-4r^2}{a^2} \\, dr \\\\\n\n&\\qquad - \\frac1{32} \\int_0^1 \\left(2+\\frac{a^2}{a^2+4-4r^2}\\right) \\, dr \\\\\n\n&= -\\frac5{32} + \\frac1{16} J_1 + \\frac{3a^2}{64} J_2 - \\frac1{32} J_3 \\\\\n\n&= -\\frac9{32} + \\frac{3a^2+16}{32} J_3 + \\frac{3a^2}{64} J_2\n\n\\end{align*}$$\n\nwhere\n\n$$\\begin{align*}\n\nJ_1 &= \\int_0^1 \\log\\frac{a^2+4-4r^2}{a^2} \\, dr \\\\\n\n&= 2 \\int_0^1 \\left(\\frac{a^2+4}{a^2+4-4r^2} - 1\\right) \\, dr & \\text{by parts} \\\\\n\n&= 2\\left(a^2+4\\right) J_3 - 2 \\\\[2ex]\n\nJ_3 &= \\int_0^1 \\frac{dr}{a^2+4-4r^2} \\, dr \\\\\n\n&= \\frac1{2\\sqrt{a^2+4}} \\int_0^\\tfrac2{\\sqrt{a^2+4}} \\frac{dr}{1-r^2} & r\\to\\frac{\\sqrt{a^2+4}}2r \\\\\n\n&= \\frac1{2\\sqrt{a^2+4}} \\operatorname{artanh} 
\\frac2{\\sqrt{a^2+4}}\n\n\\end{align*}$$\n\nThe missing piece is\n\n$$J_2 = \\int_0^1 \\log\\frac{a^2+4-4r^2}{a^2} \\cdot \\frac{dr}{1-r^2}\n\n\\stackrel{?}= \\operatorname{artanh}^2\\frac2{\\sqrt{a^2+4}}$$\n\nbased on numerical evidence. This most likely follows from reduction of a combination of\n\ndilogarithms\n\n, but I suspect there is a simpler elementary approach to evaluating\n\n$J_2$\n\n. In any case, we have\n\n$$I(a) = -\\frac9{32} + \\frac{3a^2+16}{64\\sqrt{a^2+4}} \\operatorname{artanh} \\frac2{\\sqrt{a^2+4}} + \\frac{3a^2}{64} \\operatorname{artanh}^2\\frac2{\\sqrt{a^2+4}}$$",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
2,066,444
|
Do Riemann-Stieltjes integrals "iterate"?
|
Let's say we define: $$h(x) = \int_a^x f(t)dg(t),$$ then do we have for integrable functions $a$ that: $$\int_a^b a(u) dh(u) = \int_a^b a(u)f(u)dg(u) ?$$
I would like to know whether this holds for either the Riemann-Stieltjes or the Lebesgues-Stieltjes integral or any similar integral. Also for the sake of simplicity, feel free to assume that all relevant functions are as "nice" as you want, e.g. real-analytic.
A yes/no answer would suffice, as would references which either prove or disprove such a result.
Attempt:
In "nice" cases, we hope that the behavior of the Riemann-Stieltjes sums will predict the behavior for the integrals, i.e. that the behavior will be respected/preserved by the appropriate limits. So let's write now instead: $$\int_a^b a(u) dh(u) \approx \sum_{i=0}^{n-1} a(x_i) (h(x_{i+1}) - h(x_i)) $$ Then by definition of $h$ we have that: $$h(x) \approx \sum_{j=0}^{m-1} f(t_j) (g(t_{j+1}) - g(t_j))$$ In particular for each $i$ we have that (setting $t_m = x_i, t_{m+1}=x_{i+1}$, etc.): $$h(x_{i+1}) - h(x_i) \approx \sum_{j=0}^{m} f(t_j) (g(t_{j+1}) - g(t_j)) - \sum_{j=0}^{m-1} f(t_j) (g(t_{j+1}) - g(t_j)) = f(x_i)(g(x_{i+1})-g(x_i))$$ so that substituting into the above: $$\int_a^b a(u) dh(u) \approx \sum_{i=0}^{n-1} a(x_i) (h(x_{i+1}) - h(x_i)) \approx \sum_{i=0}^{n-1}a(x_i)f(x_i)(g(x_{i+1})-g(x_i)) \approx \int_a^b a(u)f(u)dg(u). $$ Of course, the above "argument" is extremely sloppy and would require considerable effort to be made rigorous, assuming that is even possible.
But hopefully it suggests why I think the above result may be true -- I had hoped to find it or something similar on the
Wikipedia page for the Riemann-Stieltjes integral
, but it is not there.
Also one might expect the identity to be true by sloppily "applying" the fundamental theorem of calculus ($h(x)``="\int_a^x f(t)g'(t)dt$ so $h'(u)``="f(u)g'(u)$), i.e. when $$\int_a^b a(u)dh(u) ``=" \int_a^b a(u) h'(u) du ``=" \int_a^b a(u) f(u) g'(u) du ``=" \int_a^b a(u) f(u) dg(u). $$
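A crude numerical illustration of the identity for smooth choices of $\alpha$, $f$, $g$, using Riemann-Stieltjes sums on a fine grid; this is my own sketch (assuming NumPy), and it mirrors the heuristic sum argument above rather than proving anything.

```python
# Check ∫ α dh = ∫ α f dg numerically with h built from cumulative R-S sums.
import numpy as np

a, b, n = 0.0, 2.0, 200_000
x = np.linspace(a, b, n + 1)

f = np.cos(x)
g = x**2             # increasing integrator on [0, 2]
alpha = np.exp(-x)

dg = np.diff(g)
h = np.concatenate(([0.0], np.cumsum(f[:-1] * dg)))   # h(x) ≈ ∫_a^x f dg

lhs = np.sum(alpha[:-1] * np.diff(h))     # ∫ α dh
rhs = np.sum(alpha[:-1] * f[:-1] * dg)    # ∫ α f dg
print(lhs, rhs)  # equal by construction here: dh_k = f_k dg_k, which is
                 # exactly the discrete heuristic in the question
```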
|
Chill2Macht
|
https://math.stackexchange.com/questions/2066444/do-riemann-stieltjes-integrals-iterate
|
{
"answer_id": 2562823,
"answer_link": null,
"answer_owner": "RRL",
"answer_text": "You are using\n\n$a$\n\n to denote both the lower integration limit and the function. To avoid confusion let your function\n\n$a$\n\n be written as\n\n$\\alpha$\n\n.\n\nThis is true given\n\nonly\n\n that\n\n$\\alpha$\n\n and\n\n$f$\n\n are Riemann-Stieltjes integrable with respect to\n\n$g$\n\n on\n\n$[a,b]$\n\n and\n\n$g$\n\n is increasing. This can be generalized if\n\n$g$\n\n has bounded variation as well. Note that R-S integrability implies that\n\n$\\alpha$\n\n and\n\n$f$\n\n are also bounded and that the product\n\n$\\alpha f$\n\n is R-S integrable with respect to\n\n$g$\n\n.\n\nTake a partition\n\n$P = (x_0, x_1, \\ldots, x_n)$\n\n of\n\n$[a,b]$\n\n. Any corresponding Riemann-Stieltjes with tags\n\n$\\xi_k \\in [x_{k-1},x_k]$\n\n can be written as\n\n$$S(P,\\alpha,h) = \\sum_{k=1}^n \\alpha(\\xi_k)[h(x_k)-h(x_{k-1})] = \\sum_{k=1}^n \\alpha(\\xi_k)\\int_{x_{k-1}}^{x_k}f(u) \\, dg(u)\\\\ = \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}\\alpha(\\xi_k)f(u) \\, dg(u). $$\n\nWe also have\n\n$$\\int_{a}^{b}\\alpha(u)f(u) \\, dg(u) = \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}\\alpha(u)f(u) \\, dg(u).$$\n\nThus,\n\n$$\\tag{*}\\left|S(P,\\alpha,h) - \\int_a^b \\alpha(u)f(u) \\, dg(u)\\right| = \\left|\\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}[\\alpha(\\xi_k)-\\alpha(u)]f(u) \\, dg(u)\\right| \\\\ \\leqslant \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}|\\alpha(\\xi_k)-\\alpha(u)||f(u)| \\, dg(u) \\\\ \\leqslant M(f) \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}(M_k(\\alpha) - m_k(\\alpha)) \\, dg(u), $$\n\nwhere\n\n$M(f) = \\sup_{u \\in [a,b]} |f(u)|$\n\n,\n\n$M_k(\\alpha) = \\sup_{u \\in [x_{k-1},x_k]} \\alpha(u)$\n\n and\n\n$m_k(\\alpha) = \\inf_{u \\in [x_{k-1},x_k]} \\alpha(u).$\n\nNote that the RHS of (*) can be written in terms of upper and lower Riemann-Stieltjes sums as\n\n$$M(f)\\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}(M_k(\\alpha) - m_k(\\alpha)) \\, dg(u) = \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}(M_k(\\alpha) - m_k(\\alpha)) [g(x_k) - g(x_{k-1})] \\\\ = M(f)(U(P,\\alpha,g) - L(P,\\alpha,g)).$$\n\nSince\n\n$\\alpha$\n\n is R-S integrable with respect to\n\n$\\alpha$\n\n it follows that for any\n\n$\\epsilon >0$\n\n there is a partition\n\n$P_\\epsilon$\n\n such that if\n\n$P$\n\n is a refinement then\n\n$U(P,\\alpha,g) - L(P,\\alpha,g) < \\epsilon/M(f)$\n\n and\n\n$$\\left|S(P,\\alpha,h) - \\int_a^b \\alpha(u)f(u) \\, dg(u)\\right|< \\epsilon.$$\n\nTherefore,\n\n$$\\int_a^b \\alpha(u) \\, dh(u)= \\int_a^b \\alpha(u)f(u) \\, dg(u).$$",
"is_accepted": true,
"score": 6
}
|
CC BY-SA (Stack Exchange content)
|
5,108,499
|
Isi MMath PMB Problem: derivative becomes zero at infinity
|
Hello Stack Exchange, I encountered this math problem yesterday. I uploaded both the question and my approach; what I want to know is where the possible gaps in my solution are. I am asking because I posted the same thing on another math platform and one of the users says it is not rigorous, and I also think there might be some flaw in the solution. Can someone help and let me know? Thanks.
$\textbf{Here are the clarifications I need:}$
$\textbf{1. Is my approach to the problem correct?}$
$\textbf{2. If it's not correct, where are the possible gaps?}$
The part I am most concerned about is the
$\textbf{interchanging of limiting variables}$
after the step "letting
$h\rightarrow 0$
".
$\textbf{Question:}$
Let
$f:\mathbb{R} \rightarrow \mathbb{R}$
be a differentiable function such that
$f'$
is continuous, and there exist
$a,b\in\mathbb{R}$
such that
$$\lim_{x\rightarrow \infty} f(x)=a$$
$$\lim_{x\rightarrow \infty} f'(x)=b$$
Show that
$b=0$
.
$\textbf{My Solution:}$
We are given
$$\lim_{x\rightarrow \infty} f(x)=a$$
Thus, for any fixed
$h\in\mathbb{R}$
we have
$$\lim_{x\rightarrow \infty} f(x+h)=a$$
Thus,
$$\lim_{x\rightarrow \infty} (f(x+h)-f(x))=(\lim_{x\rightarrow \infty} f(x+h))-(\lim_{x\rightarrow \infty} f(x))=a-a=0$$
(since each of the limits on the left-hand side exists; that is, both
$\lim_{x\rightarrow \infty} f(x)=a$
and
$\lim_{x\rightarrow \infty} f(x+h)=a$
exist for any fixed
$h\in\mathbb{R}$).
Thus for any fixed
$0\neq h\in\mathbb{R}$,
$$\lim_{x\rightarrow \infty} \frac{f(x+h)-f(x)}{h}=0$$
Now letting
$h\rightarrow 0$
, i.e.,
$$\lim_{h\rightarrow 0}\lim_{x\rightarrow \infty} \frac{f(x+h)-f(x)}{h}=0 \quad (*)$$
Since
$$\lim_{x\rightarrow \infty}f'(x)=b$$
exists and
$f'$
is continuous from
$\mathbb{R}$ to $\mathbb{R}$,
in
$(*)$
we can interchange the limits:
$$\lim_{x\rightarrow \infty} \lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}=0 \quad (**)$$
Hence,
$$b=\lim_{x\rightarrow \infty} f'(x)=0$$
This finishes the proof.
$\blacksquare$
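A numerical illustration (my own, not from the post) of why step $(*)$ alone cannot finish the proof: for $f(x)=\sin(x^2)/x$, $\lim_{x\to\infty}f(x)=0$, so $(*)$ holds, yet $f'(x)=2\cos(x^2)-\sin(x^2)/x^2$ oscillates and has no limit, so the interchange of limits would fail for this $f$.

```python
import numpy as np

f = lambda x: np.sin(x**2) / x

h = 0.01
for x in (1e2, 1e3, 1e4):
    print(x, (f(x + h) - f(x)) / h)   # → 0 as x grows, for each fixed h

x = np.linspace(1e4, 1e4 + 1, 5)
fprime = 2*np.cos(x**2) - np.sin(x**2) / x**2
print(fprime)  # oscillates in roughly [-2, 2]: lim_{x→∞} f'(x) does not exist
```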
|
Safal_DB_Mathogenic
|
https://math.stackexchange.com/questions/5108499/isi-mmath-pmb-problem-derivative-becomes-zero-at-infinity
|
{
"answer_id": 5108555,
"answer_link": null,
"answer_owner": "Karthik Kannan",
"answer_text": "You need to be more careful to justify the interchange of limits. Consider the sequence of functions\n\n$g_{n}:[0, \\infty)\\rightarrow\\mathbb{R}$\n\n defined by\n\n$$g_{n}(x) = \\frac{f(x+1/n)-f(x)}{1/n}.$$\n\n We show that the sequence\n\n$(g_{n})$\n\n converges uniformly on\n\n$[0, \\infty)$\n\n. First, note that using the mean value theorem, we have\n\n$g_{n}(x) = f'(x+\\zeta(n, x))$\n\n for some\n\n$0 < \\zeta(n, x) < 1/n$\n\n that depends on both\n\n$n$\n\n and\n\n$x$\n\n. Fix\n\n$\\varepsilon > 0$\n\n and choose\n\n$M$\n\n large enough such that\n\n$|f'(y)-b|\\leq\\varepsilon/2$\n\n for\n\n$y\\geq M$\n\n. Then for\n\n$x\\geq M$\n\n and any\n\n$n, m$\n\n we have\n\n$|g_{n}(x)-g_{m}(x)|\\leq\\varepsilon$\n\n. Using the fact that\n\n$f'$\n\n is uniformly continuous on\n\n$[0, M+1]$\n\n, there exists\n\n$\\delta > 0$\n\n such that\n\n$|f'(y)-f'(z)|\\leq\\varepsilon$\n\n whenever\n\n$|y-z|\\leq\\delta$\n\n and\n\n$y, z\\in [0, M+1]$\n\n. Choose\n\n$N$\n\n large enough so that\n\n$2/N\\leq\\delta$\n\n. Then for\n\n$n, m\\geq N$\n\n and all\n\n$x\\in [0, M]$\n\n we have\n\n$|g_{n}(x)-g_{m}(x)|\\leq\\varepsilon$\n\n.\n\nSince\n\n$(g_{n})$\n\n converges uniformly on\n\n$[0, \\infty)$\n\n we have\n\n\\begin{align}\\lim_{x\\rightarrow\\infty}f'(x) = \\lim_{x\\rightarrow\\infty}\\lim_{n\\rightarrow\\infty}g_{n}(x) = \\lim_{n\\rightarrow\\infty}\\lim_{x\\rightarrow\\infty}g_{n}(x) = 0.\\end{align}\n\nThere is a much simpler proof that does not require the continuity of\n\n$f'$\n\n. Assume for the sake of contradiction that\n\n$\\lim_{x\\rightarrow\\infty}f'(x) = b > 0$\n\n (without loss of generality). Choose\n\n$M$\n\n large enough so that\n\n$f'(x)\\geq b/2$\n\n for\n\n$x\\geq M$\n\n. Then, using the mean value theorem for\n\n$x\\geq M$\n\n, we get\n\n$$f(x) = f(M)+f'(\\zeta(x, M))(x-M)\\geq f(M)+b(x-M)/2.$$\n\n Take the limit as\n\n$x\\rightarrow\\infty$\n\n to obtain a contradiction.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
|
5,108,452
|
Demonstrate $I(a) = \int_1^\infty \frac{\sqrt{a+x}}{a+x^2}\,dx > \frac{\pi}{2}\quad \forall (a>0)$
|
I am trying to show that
$$
I(a) = \int_1^\infty \frac{\sqrt{a+x}}{a+x^2}\,dx > \frac{\pi}{2}\quad (a>0).
$$
Using
$x = \sqrt{a}\,t$
we get
$$
I(a) = \int_{1/\sqrt{a}}^\infty \frac{\sqrt{1 + t/\sqrt{a}}}{1+t^2}\,dt.
$$
Differentiating under the integral sign yields
$$
I'(a) = \frac{1}{2a^{3/2}} \frac{\sqrt{1+1/a}}{1+1/a}
- \frac{1}{4a^{3/2}} \int_{1/\sqrt{a}}^\infty \frac{t}{(1+t^2)\sqrt{1+t/\sqrt{a}}}\,dt.
$$
The first term is positive and the integral term enters with a minus sign, so I need to show that the whole expression is negative.
I tried the bound
$$
\sqrt{1 + t/\sqrt{a}} \le 1 + t/\sqrt{a} \;\Rightarrow\; \frac{1}{\sqrt{1 + t/\sqrt{a}}} \ge \frac{1}{1 + t/\sqrt{a}},
$$
and for
$t \ge 1/\sqrt{a}$
also
$$
\frac{1}{1 + t/\sqrt{a}} \ge \frac{1}{1 + t^2},
$$
so
$$
\int_{1/\sqrt{a}}^\infty \frac{t}{(1+t^2)\sqrt{1+t/\sqrt{a}}}\,dt
\ge \int_{1/\sqrt{a}}^\infty \frac{t}{(1+t^2)^2}\,dt
= \frac{1}{2(1+1/a)}.
$$
Plugging this in gives
$$
I'(a) \le \frac{1}{2a^{3/2}(1+1/a)} \Bigl( \sqrt{1+1/a} - \tfrac{1}{4} \Bigr).
$$
But
$\sqrt{1+1/a} > 1 > 1/4$
, so the right-hand side is
positive
and the bound is useless for proving
$I'(a)<0$
.
A solution I found claims a tighter lower bound using
$$
\int_{1/\sqrt{a}}^\infty \frac{t}{(1+t^2)^2}\,dt
= \frac{\pi}{4} - \frac{1}{2}\arctan\!\left(\frac{1}{\sqrt{a}}\right)
- \frac{1}{2} \cdot \frac{1/\sqrt{a}}{1+1/a},
$$
but this antiderivative seems to belong to
$\int \frac{1}{(1+t^2)^2}\,dt$
, not to the
$t/(1+t^2)^2$
we actually have.
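For whatever it is worth, a quick numerical check (my own sketch, assuming SciPy) supports the claimed inequality: $I(a)$ stays above $\pi/2$ and approaches it as $a$ grows.

```python
import numpy as np
from scipy.integrate import quad

def I(a):
    val, _ = quad(lambda x: np.sqrt(a + x) / (a + x**2), 1.0, np.inf)
    return val

for a in (0.1, 1.0, 25.0, 1e4):
    print(a, I(a), I(a) - np.pi/2)   # the difference stays positive
```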
|
Joelle
|
https://math.stackexchange.com/questions/5108452/demonstrate-ia-int-1-infty-frac-sqrtaxax2-dx-frac-pi2-q
|
{
"answer_id": 5108490,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "$$I(a) = \\int_1^\\infty \\frac{\\sqrt{a+x}}{a+x^2}\\,dx$$\n\n$$\\sqrt{a+x}=u \\qquad \\implies \\qquad I(a)=\\int_{\\sqrt{a+1}}^\\infty \\frac{2 u^2}{u^4-2 a u^2+a\\left(a+1\\right)}\\,du$$\n\nWrite\n\n$$u^4-2 a u^2+a\\left(a+1\\right)=(u^2-\\alpha)(u^2-\\beta) \\quad \\text{with}\\quad (\\alpha,\\beta)=a\\pm i\\sqrt{a}$$\n\nUsing partial fraction decomposition\n\n$$\\frac{2 u^2}{u^4-2 a u^2+a\\left(a+1\\right)}=\\frac{2}{\\alpha -\\beta }\\Big(\\frac{1}{u^2-\\alpha }-\\frac{1}{u^2-\\beta } \\Big)$$\n\n So, two simple integrals.\n\nThe final result is not the most pleasant but, at least expanded for large values of\n\n$a$\n\n, we have\n\n$$I(a)\\sim \\frac \\pi 2+\\frac{\\log (a)-2+4 \\log (2)}{4 \\sqrt{a}}+\\frac{\\pi }{16 a}-\\frac{3 \\log (a)-13+12 \\log (2)}{96 a^{3/2}}+\\cdots$$\n\nTrying for\n\n$a=25$\n\n, the above truncated expansion gives\n\n$1.77781$\n\n while the exact value is\n\n$1.77772$\n\nNotice that\n\n$I(0)=2$\n\n; so a very small range of variation.",
"is_accepted": false,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
1,441,747
|
The definition of strong continuity via joint continuity
|
A semigroup $S(t)$ on a Banach space $E$ is a family of bounded linear operators $\{S(t)\}_{t\ge 0}$ with the property that $S(t)S(s)=S(t+s)$ for any $s,t\ge 0$ and that $S(0)=I$. A semigroup is furthermore called
strongly continuous
if the map $(x,t)\mapsto S(t)x$ is continuous.
I was told that this is equivalent to saying $t\mapsto S(t)x$ is continuous for every $x$.
How can I see the equivalence of two ways of defining strong continuity?
Could anyone expand on what "$(x,t)\mapsto S(t)x$ is continuous" really means? Can one show this via the usual strategy of two-sided continuity? How could this be the same as saying $t\to S(t)x$ is continuous for every $x$?
I appreciate any help.
|
math101
|
https://math.stackexchange.com/questions/1441747/the-definition-of-strong-continuity-via-joint-continuity
|
{
"answer_id": 4856698,
"answer_link": null,
"answer_owner": "G. Bellaard",
"answer_text": "To avoid confusion let me give the two definitions of strong continuity different names, and from here on out I will not use the term\n\nstrongly continuous\n\n.\n\nA one-parameter semigroup\n\n$S(t)$\n\n is called\n\ntime continuous\n\n if for all\n\n$x \\in E$\n\n we have that\n\n$\\mathbb{R}_{\\geq 0} \\ni t \\mapsto S(t)x \\in E$\n\n is continuous.\n\nA one-parameter semigroup\n\n$S(t)$\n\n is called\n\neverywhere continuous\n\n if\n\n$E \\times \\mathbb{R}_{\\geq 0} \\ni (x,t) \\mapsto S(t)x \\in E$\n\n is continuous.\n\nWe see that time continuity follows immediately from everywhere continuity.\n\nSo, it seems that everywhere continuity is stronger in this sense.\n\nHowever, we can proof that the continuity everywhere follows from the time continuity.\n\nPick an arbitrary\n\n$x \\in E$\n\n and\n\n$t>0$\n\n.\n\nBy the time continuity we can find a\n\n$\\delta_1>0$\n\n such that for all\n\n$|s-t|<\\delta_1$\n\n we have that\n\n$||S(s)x - S(t)x|| < \\varepsilon/2$\n\n.\n\nFor any\n\n$y \\in E$\n\n we have, again by time continuity, that\n\n$$ \\sup_{|s-t| \\leq \\delta_1} ||S(s) y|| < \\infty $$\n\nBecause we have bounded linear operators we can apply the uniform boundedness principle and get\n\n$$ M := \\sup_{|s-t| \\leq \\delta_1} ||S(s)|| < \\infty $$\n\nNow let\n\n$\\delta_2 = \\varepsilon / (2 M)$\n\n and consider\n\n$||y -x || < \\delta_2$\n\n.\n\nWe then have the chain inequalities\n\n\\begin{align}\n\n||S(s)y - S(t)x||\n\n&= ||S(s)y - S(s)x + S(s)x - S(t)x|| \\\\\\\\\n\n&\\leq ||S(s)y - S(s)x || + ||S(s)x - S(t)x|| \\\\\\\\\n\n&\\leq ||S(s)||\\ ||y - x|| + ||S(s)x - S(t)x|| \\\\\\\\\n\n&\\leq \\varepsilon / 2 + \\varepsilon/2 \\\\\\\\\n\n&= \\varepsilon\n\n\\end{align}\n\nIt seems this is one of the special cases where separate continuity\n\ndoes\n\n imply joint continuity.\n\nThe same question and a shorter answer can be found\n\nhere",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
|
5,107,138
|
General relation between Jordan measure and Riemann integral
|
Following my
previous question
, suppose that
$f: A \to \mathbb{R}^{\ge0}$
is a bounded function which is defined over a bounded subset
$A \subset \mathbb{R}^n$
. Denote the Jordan inner measure and outer measure of the region between
$f$
and
$0$
by
$m_{∗,(J)}(B)$
and
$m^{∗,(J)} (B)$
respectively. Also denote the Darboux lower integral and upper integral of
$f$
over
$A$
by
$\underline{\int_A} f$
and
$\overline{\int_A} f$
respectively. I think in general it holds that
$m_{∗,(J)}(B) = \underline{\int_A} f$
and
$m^{∗,(J)} (B) = \overline{\int_A} f$
, so Riemann integral and Jordan measure are essentially equivalent. Is this statement correct?
|
S.H.W
|
https://math.stackexchange.com/questions/5107138/general-relation-between-jordan-measure-and-riemann-integral
|
{
"answer_id": 5107688,
"answer_link": null,
"answer_owner": "Guilherme Gondin",
"answer_text": "Yes, you're nearly correct: it holds that the Jordan measures of the ordinate set of a non negative bounded function will equal its respective Darboux integrals in general, but if the domain of the function is just bounded, not a rectangle, you will also need the condition that the set of discontinuities of the function have a 0 Lebesgue measure, or else the two may not be equal.\n\nAlso it will be required that A itself is Jordan measurable, but I'm assuming that's already implied in your definition.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
|
5,108,438
|
Finding $\int e^{-x}\bigg(\frac{1+\tanh(1+x)}{1+x}\bigg)^2 dx$
|
I need help with this integral:
$$\int e^{-x}\bigg(\frac{1+\tanh(1+x)}{1+x}\bigg)^2 dx$$
Mathematica could not find a closed form. I tried Feynman's tricks but could not get rid of the denominator (which was my idea since the numerator alone gives the hypergeometric function).
Actually, other sigmoid functions would also work for me, such as
$\arctan$
or the logistic function, but the square is important. However, the
$\exp$
and denominator are non-negotiable.
|
Camilo
|
https://math.stackexchange.com/questions/5108438/finding-int-e-x-bigg-frac1-tanh1x1x-bigg2-dx
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
1,791,361
|
Is Leibnizian calculus embeddable in first order logic?
|
We just published an article making what we feel is a plausible case in favor of an affirmative answer in
Foundations of Science
, see preprint
here
The basic argument is that while such a requirement may seem very limitative, such an embedding seems possible with a small number of additional ingredients like a black box for returning the sum of a series, and I was curious how well-supported this appears and if there are aspects that may have been overlooked.
Crosslisted at
HSM
without generating much response.
Note.
Leibnizian calculus
is calculus as it was practiced by Leibniz, in the same sense as
Euclidean geometry
could be interpreted as geometry as practiced by Euclid (though the term often has a different meaning). An example that we give in the paper of the type of mathematics that would not be Leibnizian calculus is a proof of the extreme value theorem, a 19th century argument.
|
Mikhail Katz
|
https://math.stackexchange.com/questions/1791361/is-leibnizian-calculus-embeddable-in-first-order-logic
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
5,108,399
|
A few clarifications about multiplication in subgradient calculus
|
In the
subgradient calculus linearity properties
, the appropriate side of the addition rule utilizes Minkowski addition of sets. Ordinarily in linearity, a scaling rule agrees with, and
is basically redundant given
, an addition rule (I can't imagine scaling failing for only irrational numbers except by adversarial construction).
Is
$\alpha\partial f$
in the subgradient calculus scaling rule a multiplication built on top of Minkowski addition, i.e., does
$2\partial f$
mean
$\partial f + \partial f$
? The alternative is vector-like element-wise scaling of the set, which obviously outputs only a subset of the output under the interpretation of multiplication as repeated Minkowski addition with some continuation for non-integer values of
$\alpha$
, but it's not obvious to me what that continuation would be.
Then for the affine transformation of variables rule, is matrix multiplication by a set as in
$A^T\partial f(Ax + b)$
built on top of this "Minkowski multiplication" via some basic linear algebra? It is not clear how to ask this more precisely because I'm not sure if
$f$
is required to be scalar-valued, vector-valued, matrix-valued, or if this rule is valid for all three so long as the dimensions are valid for any of the three basic notions of matrix multiplication once the semantics of matrix-set multiplication have been unpacked.
As there also seems to be some notation overloading of
$\partial$
in the affine transformation of variables rule (perhaps similar to the familiar partial derivative
$\partial$
notation overloading from multivariable calculus), I think it would be illustrative for an answer to actually apply this rule to a toy example.
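Here is a toy numerical illustration (my own sketch, not an authoritative answer) of the scaling question for $f(x)=|x|$, whose subdifferential at $0$ is $[-1,1]$: since subdifferentials are convex sets, and for a convex set $C$ one has $C+C=2C$, element-wise scaling and repeated Minkowski addition coincide here.

```python
import numpy as np

C = np.linspace(-1.0, 1.0, 201)           # samples of ∂f(0) = [-1, 1]

scaled = 2.0 * C                           # element-wise scaling: [-2, 2]
minkowski = (C[:, None] + C[None, :]).ravel()   # all pairwise sums C + C

print(scaled.min(), scaled.max())          # -2.0 2.0
print(minkowski.min(), minkowski.max())    # -2.0 2.0  (same interval)
```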
|
user10478
|
https://math.stackexchange.com/questions/5108399/a-few-clarifications-about-multiplication-in-subgradient-calculus
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
5,108,362
|
Showing the strict monotonicity of the function $(a^\frac{1}{x}+b^\frac{1}{x})^x$
|
Let
$a,b\in \mathbb{R}$
with
$a,b>0$
. Define the function
$f\colon [1,\infty)\rightarrow \mathbb{R}$
by
$$
f(x)=(a^\frac{1}{x}+b^\frac{1}{x})^x
$$
This function seems to be strictly increasing, but I've had a hard time showing it. How can this be shown?
Edit: This post was closed without a reason given. I suspect that is because it reminded some of math.stackexchange.com/q/4094/42969, but that post only shows the weak monotonicity of the function, and is thus not useful in showing this particular result.
Secondly, having perused the guidelines, I realize that I didn't add my own attempts at solving this. I have tried differentiating the function, but it just turns into a mess, and likewise taking the logarithm doesn't provide much value. Not knowing any more reliable ways to determine the monotonicity of a function, I came here for help.
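A quick numerical sanity check of the strict increase, my own sketch assuming NumPy; it is supporting evidence only, not a proof.

```python
import numpy as np

a, b = 2.0, 3.0
x = np.linspace(1.0, 50.0, 2000)
f = (a**(1/x) + b**(1/x))**x

print(np.all(np.diff(f) > 0))   # True on this grid: f is increasing
print(f[0])                     # f(1) = a + b = 5.0
```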
|
redib
|
https://math.stackexchange.com/questions/5108362/showing-the-strict-mononticy-of-the-function-a-frac1xb-frac1xx
|
{
"answer_id": 5108372,
"answer_link": null,
"answer_owner": "almost_okay",
"answer_text": "Since\n\n$f(x) > 0$\n\n for\n\n$a, b > 0$\n\n and\n\n$x \\ge 1$\n\n, we can consider\n\n$$\n\ng(x) = \\ln f(x) = x \\ln\\big(a^{1/x} + b^{1/x}\\big).\n\n$$\n\nDifferentiating, we obtain\n\n$$\n\ng'(x) = \\ln\\big(a^{1/x} + b^{1/x}\\big)\n\n- \\frac{1}{x}\\,\\frac{\\ln(a)\\,a^{1/x} + \\ln(b)\\,b^{1/x}}{a^{1/x} + b^{1/x}}.\n\n$$\n\nLet\n\n$u = a^{1/x} > 0$\n\n and\n\n$v = b^{1/x} > 0$\n\n. Then\n\n$$\n\ng'(x) = \\ln(u + v) - \\frac{u \\ln u + v \\ln v}{u + v}.\n\n$$\n\nMultiplying both sides by\n\n$u + v$\n\n gives\n\n$$\n\n(u + v) g'(x) = u \\ln\\!\\left(\\frac{u + v}{u}\\right) + v \\ln\\!\\left(\\frac{u + v}{v}\\right).\n\n$$\n\nSince each logarithm is positive for\n\n$u, v > 0$\n\n, we have\n\n$g'(x) > 0$\n\n.\n\nBecause\n\n$f'(x) = f(x) g'(x)$\n\n and\n\n$f(x) > 0$\n\n, it follows that\n\n$f'(x) > 0$\n\n.\n\nHence\n\n$f(x)$\n\n is strictly increasing for\n\n$a, b > 0$\n\n and\n\n$x \\ge 1$\n\n.",
"is_accepted": true,
"score": 4
}
|
CC BY-SA (Stack Exchange content)
|
5,108,171
|
Closed form of $\int_0^{\infty} \frac{\sin (\tan x) \cos ^{2n-1} x}{x} d x?$
|
Being attracted by the answer in the
post
$$\int_0^{\infty} \frac{\sin (\tan x) }{x} d x = \frac{\pi}{2}\left(1- \frac 1e \right) , $$
I started to investigate and surprisingly found that
$$
\int_0^{\infty} \frac{\sin (\tan x) \cos x}{x} d x=\frac{\pi}{2}\left(1-\frac{1}{e}\right),
$$
having the same answer as the first one.
Then I tried a bit further with
$n\ge 1$
,
$$
I_n=\int_0^{\infty} \frac{\sin (\tan x) \cos ^{2n-1} x}{x} d x
$$
Using the
Lobachevsky Integral Formula
, we have
$$
\begin{aligned}
I_n & =\int_0^{\infty} \frac{\sin (\tan x) \cos ^{2n-1} x}{x} d x \\&= \int_0^{\infty} \frac{\sin x}{x} \cdot \frac{\sin (\tan x) \cos ^{2n-1} x}{\sin x} d x\\
& =\int_0^{\frac \pi 2} \frac{\sin (\tan x) \cos ^{2n-1} x}{\sin x} d x\\
\end{aligned}
$$
Now consider the parametrised integral
$$
I(a)=\int_0^{\frac \pi 2} \frac{\sin (a\tan x) \cos ^{2 n-1} x}{\sin x} d x
$$
whose derivative w.r.t.
$a$
is
$$
I^{\prime}(a)=\int_0^{\frac \pi 2} \cos ^{2 n-2} x \cos (a \tan x) d x .
$$
Putting
$t=\tan x$
and using contour integration anticlockwise along the path
$$\gamma=\gamma_{1} \cup \gamma_{2} \textrm{ where } \gamma_{1}(t)=t+i 0(-R \leq t \leq R) \textrm{ and } \gamma_{2}(t)=R e^{i t} (0<t<\pi), $$
we have
$$
\begin{aligned}
I^{\prime}(a)&=\int_0^{\infty} \frac{\cos (a t)}{\left(1+t^2\right)^n} d t \\
& =\frac{1}{2} \Re \oint_\gamma \frac{e^{i a z}}{\left(1+z^2\right)^n} d z \\
& = \frac\pi{(n-1)!}\cdot\Re\left[i \lim _{z \rightarrow i} \frac{d^{n-1}}{d z^{n-1}}\left(\frac{e^{i a z}}{(z+i)^n}\right)\right] \textrm{ as } R\to +\infty
\end{aligned}
$$
Hence when
$n=1,$
$$
I^{\prime}(a)=\int_0^{\infty} \frac{\cos (a t)}{1+t^2} d t=\frac{\pi}{2} e^{-a}
$$
$$
\begin{aligned}
\int_0^{\infty} \frac{\sin (\tan x) \cos x}{x} d x& =\int_0^1 \frac{\pi}{2} e^{-a} d a \\
& =-\frac{\pi}{2}\left(e^{-1}-1\right) \\
& =\frac{\pi}{2}\left(1-e^{-1}\right)
\end{aligned}
$$
When
$n=2$
,
$$\begin{aligned}
I^{\prime}(a) & = \pi \Re \left[i\cdot\left(-\frac{i}{4}\right) e^{-a}(1+a)\right] \\
& =\frac{\pi}{4} e^{-a}(1+a)
\end{aligned}
$$
Integrating back yields
$$
\begin{aligned}
\int_0^{\infty} \frac{\sin (\tan x) \cos^3 x}{x} d x& =\int_0^1 I^{\prime}(a) d a \\
& =\int_0^1 \frac{\pi}{4} e^{-a}(1+a) d a \\
& =\frac{\pi}{4}\left(2-\frac{3}{e}\right)
\end{aligned}
$$
When
$n=3$
,
$$\begin{aligned}
I^{\prime}(a) & =\frac\pi 2\Re \left[i\cdot\left(-\frac{i}{8}\right)\left(a^2+3 a+3\right) e^{-a}\right] \\
& = \frac{\pi}{16} \left(a^2+3 a+3\right) e^{-a}
\end{aligned}
$$
Integrating back yields
$$
\begin{aligned}
\int_0^{\infty} \frac{\sin (\tan x) \cos^5x}{x} d x& =\int_0^1 I^{\prime}(a) d a \\
& =\int_0^1 \frac{\pi}{16} \left(a^2+3 a+3\right) e^{-a}da \\
& =\frac{\pi}{8}\left(4-\frac{7}{e}\right)
\end{aligned}
$$
When
$n=4$
,
$$
\begin{aligned}
I_4 & =\frac{\pi}{96} \int_0^1\left(a^3+6 a^2+15 a+15\right) e^{-a} d a \\
& =\frac{\pi}{96}\left(48-\frac{91}{e}\right)
\end{aligned}
$$
In this way, we can find the integral with odd powers of
$\cos x$
one by one. My question is:
How can we find the closed form of
$$\int_0^{\infty} \frac{\sin (\tan x) \cos ^{2n-1} x}{x} d x?$$
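A numerical spot-check of the values computed so far, my own sketch assuming SciPy: integrating $I'(a)$ over $a\in[0,1]$ and swapping the order of integration gives $I_n=\int_0^{\infty}\frac{\sin t}{t\,(1+t^2)^n}\,dt$, which converges fast enough to compare against the closed forms above.

```python
import numpy as np
from scipy.integrate import quad

def I(n):
    # np.sinc(t/π) = sin(t)/t, avoiding the 0/0 at t = 0; truncating at
    # t = 400 leaves a tail below ~3e-6 for n ≥ 1
    val, _ = quad(lambda t: np.sinc(t/np.pi) / (1 + t*t)**n,
                  0, 400, limit=400)
    return val

e, pi = np.e, np.pi
closed = {1: pi/2*(1 - 1/e), 2: pi/4*(2 - 3/e),
          3: pi/8*(4 - 7/e), 4: pi/96*(48 - 91/e)}
for n in (1, 2, 3, 4):
    print(n, I(n), closed[n])   # each pair should agree closely
```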
|
Lai
|
https://math.stackexchange.com/questions/5108171/closed-form-of-int-0-infty-frac-sin-tan-x-cos-2n-1-xx-d-x
|
{
"answer_id": 5108184,
"answer_link": null,
"answer_owner": "xpaul",
"answer_text": "Note, for\n\n$t\\in(0,1)$\n\n,\n\n\\begin{eqnarray}\n\nF(t)&=&\\sum_{n=0}^\\infty I_nt^n =\\sum_{n=0}^\\infty\\int_0^{\\infty} \\frac{\\sin (\\tan x) t^n\\cos ^{2n-1} x}{x} d x\\\\\n\n&=&\\int_0^{\\infty} \\frac{\\sin (\\tan x)}{x(1-t\\cos^2x)\\cos x} d x=\\int_0^{\\infty}\\frac{\\sin x}{x} \\frac{\\sin (\\tan x)}{\\sin x(1-t\\cos^2x)\\cos x} d x\\\\\n\n&=&\\int_0^{\\frac\\pi2}\\frac{\\sin (\\tan x)}{\\sin x(1-t\\cos^2x)\\cos x} d x=\\int_0^{\\frac\\pi2}\\frac{\\sin (\\tan x)}{\\tan x(1-t\\cos^2x)} \\sec^2xd x\\\\\n\n&\\overset{\\tan x\\to x}=&\\int_0^{\\infty}\\frac{\\sin x}{x(1-\\frac{t}{1+x^2})}dx=\\int_0^{\\infty}\\frac{(x^2+1)\\sin x}{x(x^2+1-t)}dx\\\\\n\n&=&\\frac1{1-t}\\int_0^{\\infty}\\bigg[\\frac{\\sin x}{x}-t\\frac{x\\sin x}{x^2+1-t}\\bigg]dx\\\\\n\n&=&\\frac\\pi{2(1-t)}\\bigg[1-t e^{-\\sqrt{1-t}}\\bigg].\n\n\\end{eqnarray}\n\nHere\n\n$$ \\int_0^\\infty\\frac{x\\sin x}{x^2+a^2}dx=\\frac\\pi2 e^{-a} $$\n\nis used. Since\n\n$$ F(t)=\\sum_{n=0}^\\infty \\frac{F^{(n)}(0)}{n!}t^n $$\n\none has\n\n$$ I_n=\\frac{F^{(n)}(0)}{n!} $$\n\nwhich is not easy to get for large\n\n$n$\n\n.\n\nUpdate: I calculated some\n\n$I_n$\n\n:\n\n$$ I_1=\\frac{(e-1)\\pi}{2e}, I_2=\\frac{(2e-3)\\pi}{4e}, I_3=\\frac{(4e-7)\\pi}{8e}, I_4=\\frac{(48e-91)\\pi}{96e}. $$",
"is_accepted": false,
"score": 5
}
|
CC BY-SA (Stack Exchange content)
|
2,541,733
|
Evaluate $\int_{0}^{\infty}e^{-x^2}\ln(x)dx$
|
Can a step-by-step solution be shown of how to prove: $$\int_{0}^{\infty}e^{-x^2}\ln(x)dx = -\frac{{\pi^\frac{1}{2}}}{4}(\gamma+\ln(4))$$
I have a feeling differentiating under the integral sign could be done, but I'm not sure how.
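A quick numerical confirmation of the stated value, my own sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

lhs, _ = quad(lambda x: np.exp(-x*x) * np.log(x), 0, np.inf)
rhs = -np.sqrt(np.pi) / 4 * (np.euler_gamma + np.log(4))
print(lhs, rhs)   # both ≈ -0.87006
```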
|
Tom Himler
|
https://math.stackexchange.com/questions/2541733/evaluate-int-0-inftye-x2-lnxdx
|
{
"answer_id": 2541758,
"answer_link": null,
"answer_owner": "Leucippus",
"answer_text": "Use\n\n$$\\int_{0}^{\\infty} e^{- x^{2}} \\, x^{u -1} \\, dx = \\frac{1}{2} \\, \\Gamma\\left(\\frac{u}{2}\\right)$$\n\nand differentiate with respect to $u$ to obtain\n\n$$\\int_{0}^{\\infty} e^{- x^{2}} \\, x^{u -1} \\, \\ln(x) \\, dx = \\frac{1}{4} \\, \\Gamma\\left(\\frac{u}{2}\\right) \\, \\psi\\left(\\frac{u}{2}\\right).$$\n\nSet $u =1$ and use the appropriate value of the digamma function to obtain the desired result.",
"is_accepted": true,
"score": 5
}
|
CC BY-SA (Stack Exchange content)
|
5,108,215
|
Trigonometric Heap
|
I came across a beautiful math question given to me by my fellow mates:
$\eta = \cos{x}\cos{3x}\cos{9x}\cdots$
I tried it by simplifying the expression further into manageable parts, then by using
$ \cos{x}+\cos{y}=2\cos\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)$
and similar expressions for sum and product of the trigonometric ratios involving cosines.
At another point, I thought that the series expansion of $\cos x$ might help, but I am now stuck articulating that too.
Please help me take it further, and find the value of $\eta$ (if possible).
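An exploratory numerical sketch (my own addition, not an answer): look at the partial products $P_N=\prod_{n=0}^{N}\cos(3^n x)$ for a sample $x$, just to see how they behave.

```python
import numpy as np

x = 0.7
terms = np.cos(3.0**np.arange(0, 12) * x)
partial = np.cumprod(terms)
for N, p in enumerate(partial):
    print(N, p)
# for generic x every factor has modulus < 1, so the partial products
# shrink rapidly in magnitude while the sign keeps flipping
```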
|
Dhairya Kumar
|
https://math.stackexchange.com/questions/5108215/trigonometric-heap
|
{
"answer_id": 5108226,
"answer_link": null,
"answer_owner": "bjcolby15",
"answer_text": "Hint: For the first three terms, then use the product-to-sum formula\n\n$$\\cos u \\cos v = \\frac {1}{2}[\\cos (u-v) + \\cos (u+v)]$$\n\n first for\n\n$\\cos 3x \\cos 9x$\n\n, and then distribute\n\n$\\cos x$\n\n and use the product-to-sum formula again.\n\nETA: The product is\n\n$$\\prod_{n=0}^{\\infty} \\cos (3^{n}x)$$\n\n so for the infinite stack, I estimate it looks something like\n\n$$\\frac {1}{2^{n-1}}\\bigg[\\sum_{n=1}^{\\infty} (\\cos (3(n-1)x +\\sum_{n=1}^{\\infty}\\cos (3(n+1)x)\\bigg]$$",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
791,372
|
Double Integral $\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt {xy}}\cos(x+y)\,dx\,dy=(\gamma+2\log 2)\pi^2$
|
Hi I am trying to solve this double integral
$$
I:=\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt {xy}}\cos(x+y)\,dx\,dy=(\gamma+2\log 2)\pi^2.
$$
Thank you.
The constant in the result is given by $\gamma\approx .577$, and is known as the Euler-Mascheroni constant. I was thinking to write
$$
I=\Re \bigg[\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt{xy}}\, e^{i(x+y)}\, dx\, dy\bigg]
$$
and using Leibniz's rule for differentiation under the integral sign to write
$$
I(\eta, \xi)=\Re\bigg[ \int_0^\infty \int_0^\infty \frac{\log (\eta x)\log(\xi y)}{\sqrt{xy}}\, e^{i(x+y)}\,dx\,dy \bigg].\\
$$
After taking the derivatives it became obvious that I need to try another method, since the $\eta,\xi$ constants cancel out. How can we solve this integral $I$? Thanks.
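A numerical cross-check of the reduction used in the accepted answer below, my own sketch assuming mpmath (the oscillatory quadrature may need extra precision): compute the two one-dimensional Fourier-type integrals $C$ and $S$ and compare $C^2-S^2$ with $(\gamma+2\log 2)\pi^2$.

```python
import mpmath as mp
mp.mp.dps = 20

f_cos = lambda x: mp.log(x) / mp.sqrt(x) * mp.cos(x)
f_sin = lambda x: mp.log(x) / mp.sqrt(x) * mp.sin(x)

C = mp.quadosc(f_cos, [0, mp.inf], period=2*mp.pi)
S = mp.quadosc(f_sin, [0, mp.inf], period=2*mp.pi)

print(C*C - S*S)
print((mp.euler + 2*mp.log(2)) * mp.pi**2)   # both ≈ 19.379
```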
|
Jeff Faraci
|
https://math.stackexchange.com/questions/791372/double-integral-int-0-infty-int-0-infty-frac-log-x-log-y-sqrt-xy-co
|
{
"answer_id": 791419,
"answer_link": null,
"answer_owner": "Zaid Alyafeai",
"answer_text": "Using the identity\n\n$$\\cos(x+y)=\\cos(x)\\cos(y)-\\sin(x)\\sin(y)$$\n\nThe integral can be written\n\n$$\n\nI=\\int_0^\\infty \\int_0^\\infty \\frac{\\log x \\log y}{\\sqrt {xy}}\\left(\\cos(x)\\cos(y)-\\sin(x)\\sin(y)\\right)\\,dx\\,dy $$\n\nNow by splitting the integrals\n\n$$\\int_0^\\infty \\int_0^\\infty \\frac{\\log x \\log y}{\\sqrt {xy}}\\cos(x)\\cos(y)\\,dx\\,dy-\\int_0^\\infty \\int_0^\\infty \\frac{\\log x \\log y}{\\sqrt {xy}}\\sin(x)\\sin(y)\\,dx\\,dy\n\n$$\n\nNotice by symmetry of the integrals we have\n\n$$\\left(\\int^\\infty_0 \\frac{\\log x }{\\sqrt {x}}\\cos(x)\\,dx \\right)^2-\\left(\\int^\\infty_0 \\frac{\\log x }{\\sqrt {x}}\\sin(x)\\,dx \\right)^2\n\n$$\n\nBoth inegrals are solvable by using the mellin transforms\n\n$$\\int^\\infty_0 x^{s-1}\\sin(x)\\,dx = \\Gamma (s) \\sin\\left( \\frac{\\pi s}{2} \\right)$$\n\n$$\\int^\\infty_0 x^{s-1}\\cos(x)\\,dx = \\Gamma (s) \\cos\\left( \\frac{\\pi s}{2} \\right)$$\n\nBy differentiation under the integral sign and using $s=\\frac{1}{2}$.\n\n$$\\int^\\infty_0 \\frac{\\log x }{\\sqrt {x}}\\cos(x)\\,dx =-\\frac{1}{2} \\sqrt{\\frac{π}{2}} \\left(2 \\gamma +π+\\log(16) \\right) $$\n\n$$\\int^\\infty_0 \\frac{\\log x }{\\sqrt {x}}\\sin(x)\\,dx=\\frac{1}{2} \\sqrt{\\frac{π}{2}} (-2 \\gamma +π- \\log(16))\n\n$$\n\nCollecting the results together we have\n\n$$I=(\\gamma+2\\log 2)\\pi^2$$",
"is_accepted": true,
"score": 23
}
|
CC BY-SA (Stack Exchange content)
|
4,624,159
|
Why zero derivative doesn't mean car is static?
|
I watched 3blue1brown's video about the derivative paradox.
He asked: does the car move when $t=0$?
The car moves with
$S = t^3$
and the derivative is
$3t^2$ .
And his answer is:
"The issue is that the question makes no sense. It references the idea of change in a moment, but that doesn't actually exist.
That's just not what the derivative measures.
What it means for the derivative of the function
$S$
to be zero is that the best constant approximation of the car's velocity around that point is zero. For example, if you look at an actual change in time, what it means for the derivative of this motion to be zero is that for smaller and smaller nudges in time this average velocity approaches zero. (picture 1)
But that's not to say that the car is static. Approximating its movement with a velocity of zero is, after all, just an approximation.
So whenever you hear people refer to the derivative as an 'instantaneous rate of change', a phrase which is intrinsically oxymoronic, I want you to think of it as a conceptual shorthand for the best constant approximation of the rate of change."
And my questions came up:
Why did he say that when the derivative is zero the car is not static?
And what does this part mean: "Approximating its movement with a velocity of zero is after all just an approximation"?
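A tiny numerical illustration (my own addition) of the quoted point: for $s(t)=t^3$ at $t=0$, the average velocities over shrinking windows approach $0$, yet the car does move, since $s(\Delta t)>0$ for every $\Delta t>0$.

```python
s = lambda t: t**3

for dt in (1.0, 0.1, 0.01, 0.001):
    avg_v = (s(0 + dt) - s(0)) / dt    # = dt², the best constant estimate
    print(dt, avg_v, s(dt))            # avg velocity → 0, displacement > 0
```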
|
Heroz
|
https://math.stackexchange.com/questions/4624159/why-zero-derivative-doesnt-mean-car-is-static
|
{
"answer_id": 5108274,
"answer_link": null,
"answer_owner": "TurlocTheRed",
"answer_text": "Suppose an object is moving at constant speed\n\n$v_0$\n\n. We say it's distance traveled over a time interval\n\n$\\Delta t$\n\n is\n\n$d=v_0\\Delta t$\n\n. Even if\n\n$\\Delta t$\n\n is zero, we can't call the object static.\n\nNow suppose the velocity isn't constant, there is constant acceleration. Then letting\n\n$v_0$\n\n be velocity at the beginning of the interval,\n\n$d=v_0t+(1/2)at^2.$\n\n Further, since the acceleration is constant, the final velocity is\n\n$v_f=v_0+a\\Delta t$\n\n. The average velocity is initial plus final divided by 2,\n\n$v_{avg}=[v_0+(v_0+a\\Delta)]/2=v_0+a\\Delta t/2$\n\n. Notice\n\n$v_{avg}\\Delta t=$\n\n the previous expression for change in position (ignoring initial position,\n\n$x_0$\n\n).\n\nFor any non-zero value of\n\n$\\Delta t$\n\n,\n\n$v_f$\n\n is different from\n\n$v_0$\n\n.\n\n$v_f$\n\n is the current velocity of the object. So even if\n\n$v_0$\n\n is zero, for the smallest duration, there's still non-zero velocity.\n\nFor any sufficiently smooth trajectory,\n\n$x=x_0+v_0t+(a/2)t^2+O(t^3)$\n\n by Taylor's Theorem.\n\nThis means\n\n$\\frac{dx}{dt}= v_0+at+O(t^2)$\n\n.\n\nSomething being static means no change between instances rather than 0 as an instantaneous rate of change. You can't have motion in 0 seconds whatever your velocity. Even if you have 0 velocity at an instant, the meaning given to the first derivative, the object isn't static.\n\nBy Taylor's Theorem, nothing is static if any of its derivatives of motion are non-zero.\n\nSomething is static only if both the instantaneous rate of change and the average rate of change are zero for non-zero duration. The derivative is just the instantaneous speed and not necessarily the average, and it's defined using a duration approaching zero. So the derivative at a specific time can't give you sufficient information to tell whether or not a quantity is static.",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
5,107,889
|
Check whether saddle point or not
|
The question says
Let
$f,g: \mathbb{R^2}\to \mathbb{R}$
defined as
$f(x,y)=x^2 -\frac{3}{2}xy^2$
and
$g(x,y)=4x^4-5x^2y+y^2$
for all
$(x,y)\in \mathbb{R^2}$
,
consider the following statements and check which one is true
P:
$f$
has a saddle point
Q:
$g$
has a saddle point
I know the second-derivative condition for a saddle point, that
$f_{xx}f_{yy}-(f_{xy})^2 <0$,
but in this case at
$(0,0)$,
$f_{xx}f_{yy}-(f_{xy})^2$
comes out equal to
$0$
for both
$f$
and
$g$,
so I am unable to conclude. I was thinking of using the derivative in some other way, but it is not working out, and I have already spent too much time on this question.
Thanks in advance!
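A symbolic sanity check, my own sketch assuming SymPy, using the test curves from the answer below: the Hessian criterion indeed degenerates at $(0,0)$, yet both functions change sign in every neighbourhood of the origin.

```python
import sympy as sp

x, y = sp.symbols('x y')
d = sp.symbols('delta', positive=True)
f = x**2 - sp.Rational(3, 2)*x*y**2
g = 4*x**4 - 5*x**2*y + y**2

H = lambda F: sp.diff(F, x, 2)*sp.diff(F, y, 2) - sp.diff(F, x, y)**2
print(H(f).subs({x: 0, y: 0}), H(g).subs({x: 0, y: 0}))  # 0 0: test fails

print(sp.simplify(f.subs({x: d, y: 0})))           # delta**2 > 0
print(sp.simplify(f.subs({x: d, y: sp.sqrt(d)})))  # -delta**2/2 < 0
print(sp.simplify(g.subs({x: d, y: 0})))           # 4*delta**4 > 0
print(sp.simplify(g.subs({x: d, y: 2*d**2})))      # -2*delta**4 < 0
```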
|
math student
|
https://math.stackexchange.com/questions/5107889/check-whether-saddle-point-or-not
|
{
"answer_id": 5107892,
"answer_link": null,
"answer_owner": "JC Q",
"answer_text": "We can check them by definition.\n\nFor\n\n$f$\n\n, in every neighborhood of\n\n$(0,0)$\n\n,\n\n$f(\\delta,0)=\\delta^2>0$\n\n,\n\n$f\\left(\\delta,\\sqrt{\\delta}\\right)=-\\dfrac{1}{2}\\delta^2<0$\n\n, so\n\n$f(0,0)=0$\n\n is not an extremum.\n\nFor\n\n$g$\n\n, in every neighborhood of\n\n$(0,0)$\n\n,\n\n$g(\\delta,0)=4\\delta^4>0$\n\n,\n\n$g(\\delta,2\\delta^2)=-2\\delta^4<0$\n\n, so\n\n$g(0,0)=0$\n\n is not an extremum.\n\n(\n\n$\\delta>0$\n\n)",
"is_accepted": true,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
|
5,108,134
|
Evaluate $\lim_{x\to0}\frac{x^2\sin\frac{1}{x}+3x\sin\frac{1}{x}}{x\sin\frac{1}{x}}$
|
Evaluate
$$\lim_{x\to0}\frac{x^2\sin\frac{1}{x}+3x\sin\frac{1}{x}}{x\sin\frac{1}{x}}$$
My Attempt
Let
$\frac{1}{x}= t$
and
$x\to 0 \implies t\to \infty$
. So we would need to evaluate this
$$\lim_{t\to \infty}\frac{\frac{1}{t^2}\sin t+3\frac{1}{t}\sin t}{\frac{1}{t}\sin t}=\lim_{t\to \infty}\frac{\frac{1}{t}\sin t(\frac{1}{t}+3)}{\frac{1}{t}\sin t}=3$$
But someone suggested that this limit does not exist.
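A small numerical illustration (my own addition): away from the points $x=\frac{1}{n\pi}$ the expression simplifies to $x+3$, but at $x=\frac{1}{n\pi}$ the denominator $x\sin\frac{1}{x}$ vanishes, and such points accumulate at $0$.

```python
import numpy as np

expr = lambda x: (x**2*np.sin(1/x) + 3*x*np.sin(1/x)) / (x*np.sin(1/x))

for x in (0.1, 0.01, 0.001):
    print(x, expr(x), x + 3)   # matches x + 3 → 3

n = 1000
x_bad = 1 / (n*np.pi)          # sin(1/x) = 0 in exact arithmetic
print(expr(x_bad))             # float round-off masks the 0/0; in exact
                               # arithmetic the expression is undefined here
```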
|
user62498
|
https://math.stackexchange.com/questions/5108134/evaluate-lim-x-to0-fracx2-sin-frac1x3x-sin-frac1xx-sin-frac1
|
{
"answer_id": 5108139,
"answer_link": null,
"answer_owner": "Adam Rubinson",
"answer_text": "For\n\n$x\\neq 0,\\ x\\neq \\frac{1}{n\\pi},\\ n\\in\\mathbb{Z}\\setminus\\{0\\},$\n\n$$\\frac{x^2\\sin\\frac{1}{x}+3x\\sin\\frac{1}{x}}{x\\sin\\frac{1}{x}}$$\n\n$$=\\frac{x^2 + 3x}{x}\\left(\\frac{\\sin\\frac{1}{x}}{\\sin\\frac{1}{x}}\\right)$$\n\n$$=\\left(\\frac{x + 3}{1}\\right) 1.$$\n\nNow you take\n\n$\\lim_{x\\to0}.$\n\nBut if you include\n\n$x= \\frac{1}{n\\pi},\\ n\\in\\mathbb{Z}\\setminus\\{0\\}$\n\n in the domain, then the limit does not exist (do you see why?).",
"is_accepted": false,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
|
5,108,198
|
Integration by substitution with $x$ in the denominator
|
I'm going to calculate the following indefinite integral
$$\int 3x\sqrt{5x^2+7}dx$$
using the change of variable
$u=5x^2+7$
,
$du=10xdx$
. From this last expression, can I solve for
$dx$
, that is,
$\dfrac{du}{10x}=dx$
?
If we can solve for
$x$
, we have:
$$\int 3x\sqrt{5x^2+7}dx=\int 3x\sqrt{u}\frac{du}{10x};$$
can I eliminate
$x$
?
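A quick symbolic check, my own sketch assuming SymPy, that the substitution route (worked out in the answer below) gives the antiderivative $\frac{1}{5}(5x^2+7)^{3/2}$:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 5) * (5*x**2 + 7)**sp.Rational(3, 2)
print(sp.simplify(sp.diff(F, x) - 3*x*sp.sqrt(5*x**2 + 7)))  # 0
```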
|
Octavius
|
https://math.stackexchange.com/questions/5108198/integration-by-substitution-with-x-in-the-denominator
|
{
"answer_id": 5108200,
"answer_link": null,
"answer_owner": "Átila Correia",
"answer_text": "In order to apply the substitution method, you can proceed as follows:\n\n\\begin{align*}\n\n\\int 3x\\sqrt{5x^{2} + 7}\\mathrm{d}x & = \\frac{3}{10}\\int10x\\sqrt{5x^{2} + 7}\\mathrm{d}x\\\\\n\n& = \\frac{3}{10}\\int\\sqrt{5x^{2} + 7}\\mathrm{d}(5x^{2} + 7)\\\\\n\n& = \\frac{3}{10}\\int u^{1/2}\\mathrm{d}u\\\\\n\n& = \\frac{1}{5}u^{3/2} + C\\\\\n\n& = \\frac{1}{5}(5x^{2} + 7)^{3/2} + C\n\n\\end{align*}",
"is_accepted": false,
"score": 4
}
|
CC BY-SA (Stack Exchange content)
|
5,108,032
|
Evaluate $\int \frac{e^x [\operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x)]}{x \ln x} \, \mathrm {dx}$
|
Evaluate:
$$\int \frac{e^x [\operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x)]}{x \ln x} \, \mathrm {dx}$$
My approach:
$$\int \frac{e^x [\operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x)]}{x \ln x} \, \mathrm {dx}$$
$$\to\int\frac{e^x\operatorname{Ei}(x)\sin(\ln x)}{x \ln x}-\int\frac{e^x\operatorname{li}(x)\cos(\ln x)}{x \ln x}$$
Observe this term:
$$\int\frac{e^x\operatorname{Ei}(x)\sin(\ln x)}{x \ln x}$$
Notice
$$\frac{d}{dx}(\operatorname{Ei}(x))=\frac{e^x}{x}$$
So I tried some u-sub
like
$\frac{\operatorname{Ei}(x)}{\ln x}$
,
$\frac{\operatorname{li}(x)}{\ln x}$
but I think it's some other u-substitution. (I tried to show effort, but everything stops here.) (I created this before, but I forgot the trick.)
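A numerical check, my own sketch assuming mpmath, that $F(x)=\frac{e^x}{\ln x}\left[\operatorname{Ei}(x)\sin(\ln x)-\operatorname{li}(x)\cos(\ln x)\right]$ differentiates back to the integrand, as the self-answer below verifies symbolically:

```python
import mpmath as mp

F = lambda x: mp.exp(x) / mp.log(x) * (mp.ei(x) * mp.sin(mp.log(x))
                                       - mp.li(x) * mp.cos(mp.log(x)))
integrand = lambda x: (mp.exp(x) * (mp.ei(x) * mp.sin(mp.log(x))
                                    - mp.li(x) * mp.cos(mp.log(x)))
                       / (x * mp.log(x)))

for x0 in (2.5, 4.0, 7.0):
    print(mp.diff(F, x0), integrand(x0))   # the pairs should match
```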
|
Andre Lin
|
https://math.stackexchange.com/questions/5108032/evaluate-int-fracex-operatornameeix-sin-ln-x-operatornamelix
|
{
"answer_id": 5108216,
"answer_link": null,
"answer_owner": "Andre Lin",
"answer_text": "Apologies for answering my own question:\n\nGiven:\n\n$$\\int \\frac{e^x [\\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x)]}{x \\ln x} \\, \\mathrm {dx}$$\n\n$$\n\nf(x) = \\frac{e^x}{\\ln x} \\left[ \\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x) \\right]\n\n$$\n\nUsing the product rule:\n\n$$\n\nu = \\frac{e^x}{\\ln x}, \\quad v = \\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x)\n\n$$\n\n$$f'(x) = u' \\cdot v + u \\cdot v'$$\n\nDifferentiate them separately:\n\n$$\n\nu' = \\frac{e^x (\\ln x - \\frac{1}{x})}{(\\ln x)^2}\n\n$$\n\n$$\n\nv'= \\frac{e^x}{x} \\sin(\\ln x) + \\frac{\\operatorname{Ei}(x)}{x} \\cos(\\ln x) - \\frac{1}{\\ln x} \\cos(\\ln x) + \\frac{\\operatorname{li}(x)}{x} \\sin(\\ln x)\n\n$$\n\nCombine:\n\n$$\n\nf'(x) = \\frac{e^x (\\ln x - \\frac{1}{x})}{(\\ln x)^2} \\cdot \\left[ \\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x) \\right] + \\frac{e^x}{\\ln x} \\cdot \\left[ \\frac{e^x}{x} \\sin(\\ln x) + \\frac{\\operatorname{Ei}(x)}{x} \\cos(\\ln x) - \\frac{1}{\\ln x} \\cos(\\ln x) + \\frac{\\operatorname{li}(x)}{x} \\sin(\\ln x) \\right]\n\n$$\n\nSimplify:\n\n$$\n\nf'(x) = \\frac{e^x}{x \\ln x} \\left[ \\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x) \\right]\n\n$$\n\n(Notice that this is the same with our original integral)\n\nSo:\n\n$$\n\n\\int \\frac{e^x \\left[ \\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x) \\right]}{x \\ln x} dx = \\boxed{{\\frac{e^x}{\\ln x} \\left[ \\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x) \\right] + C}}\n\n$$",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
5,108,182
|
Find the $n^{th}$ derivative of $f(x)=\frac{x}{x^{2}+a^{2}}$
|
I need clarity in finding the
$n^{th}$
derivative of
$$f(x)=\frac{x}{x^{2}+a^{2}}$$
My Thought
Let's Assume
$x=a\tan\theta$
$$\implies f(x)=\frac{a\tan\theta}{a^{2}\sec^{2}\theta}$$
$$\implies f(x)=\frac{1}{a}(\sin\theta)(\cos\theta)$$
$$\implies f(x)=\frac{1}{2a}(\sin2\theta)$$
Now,
$$f_1(x)=\frac{1}{2a}(2\cos2\theta)$$
$$\implies f_2(x)=\frac{1}{2a}(-4\sin2\theta)$$
Therefore,
$$f_n(x)=\frac{1}{2a}\left(2^n\sin\left(\frac{n\pi}{2}+2\theta\right)\right)$$
$$f_n(x)=\frac{1}{2a}\left(2^n\sin\left(\frac{n\pi}{2}+2\tan^{-1}\left(\frac{x}{a}\right)\right)\right)$$
I need clarification on whether this approach is correct or wrong.
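A symbolic spot-check, my own sketch assuming SymPy, of the partial-fraction formula given in the answer below against direct differentiation, for small $n$:

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
expr = x / (x**2 + a**2)

def formula(n):
    s = sum(sp.binomial(n+1, 2*k) * (-1)**k * a**(2*k) * x**(n+1-2*k)
            for k in range(0, (n+1)//2 + 1))
    return (-1)**n * sp.factorial(n) / (x**2 + a**2)**(n+1) * s

for n in range(1, 5):
    direct = sp.diff(expr, x, n)
    print(n, sp.simplify(direct - formula(n)))   # 0 for each n
```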
|
Bachelor
|
https://math.stackexchange.com/questions/5108182/find-the-nth-derivative-of-fx-fracxx2a2
|
{
"answer_id": 5108204,
"answer_link": null,
"answer_owner": "Tom-Tom",
"answer_text": "Using simple fractions, you can take advantage of the general rule\n\n$$ \\dfrac{\\mathrm d^n}{\\mathrm dx^n} \\frac{1}{x+b} = (-1)^n n!\n\n \\frac{1}{(x+b)^{n+1}}. $$\n\nSince we have\n\n$$\\frac{x}{x^2+a^2} = \\frac{\\frac12}{x+\\mathrm i a}\n\n +\\frac{\\frac12}{x-\\mathrm i a}$$\n\nwe immediately get\n\n$$ \\begin{split}\\dfrac{\\mathrm d^n}{\\mathrm dx^n} \\frac{x}{x^2+a^2}\n\n &= (-1)^n\\frac{n!}{2} \\left(\\frac{1}{(x+\\mathrm i a)^{n+1}} + \\frac{1}{(x-\\mathrm i a)^{n+1}}\\right)\\\\\n\n & = (-1)^n \\frac{n!}{(x^2+a^2)^{n+1}}\n\n \\left(\\sum_{k=0}^{\\lfloor (n+1)/2\\rfloor}\\binom {n+1}{2k} (-1)^k a^{2k}x^{n+1-2k}\\right).\n\n\\end{split}$$\n\nThe last equality results from the cancellation of odd powers of\n\n$a$\n\nin the sum\n\n$(x+\\mathrm i a)^{n+1}+(x-\\mathrm i a)^{n+1}$\n\n.",
"is_accepted": false,
"score": 4
}
|
CC BY-SA (Stack Exchange content)
|
4,311,494
|
Can someone explain the actual use of idea of limits in layman terms for me as an absolute beginner in calculus?
|
As a beginner in calculus I have always struggled in the area of limits, not when I solve higher-order thinking questions, but in just getting the basic idea and the notion of finding limits for a function. It would be a great relief if someone could help me with this query.
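A tiny numerical companion (my own addition) to the $\varepsilon$-$\delta$ example worked through in the answer below: for $f(x)=2x$ and $a=2$, the choice $\delta=\varepsilon/2$ guarantees $|f(x)-4|<\varepsilon$ whenever $0<|x-2|<\delta$.

```python
import numpy as np

rng = np.random.default_rng(0)
for eps in (0.5, 0.05, 0.005):
    delta = eps / 2
    xs = 2 + delta * (2*rng.random(10_000) - 1)   # points with |x-2| < δ
    print(eps, np.max(np.abs(2*xs - 4)) < eps)    # True each time
```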
|
ram kumar
|
https://math.stackexchange.com/questions/4311494/can-someone-explain-the-actual-use-of-idea-of-limits-in-layman-terms-for-me-as-a
|
{
"answer_id": 4311549,
"answer_link": null,
"answer_owner": "user2661923",
"answer_text": "I disagree with the comments.\n\nFurther, since this is an interpretation question, I feel justified in providing an answer even though the OP (i.e. original poster) has shown no work.\n\nI will explain the notion of limits in the simplified world of single variable functions, where both the domain and range of the function is some subset of the Real Numbers. This should give you a reasonable intuitive grasp of the idea behind the limit.\n\nThen, you will have to broaden your intuition to consider functions that have other domains or other ranges.\n\nThe first concept to consider is the notion of a\n\nneighborhood\n\n. The simplest example is to consider a fixed value\n\n$a \\in \\Bbb{R}$\n\n. Then, for a small positive value\n\n$\\delta$\n\n, the neighborhood of radius\n\n$\\delta$\n\n around the value\n\n$a$\n\n is regarded as the set of all\n\n$x \\in \\Bbb{R}$\n\n such that\n\n$-\\delta < (x-a) < \\delta.$\n\nTypically, the shorthand expression for this is\n\n$|x-a| < \\delta.$\n\n Typically, in the definition of a limit, one is concerned with those values of\n\n$x$\n\n that are in the neighborhood of radius\n\n$\\delta$\n\n around\n\n$a$\n\n, but where\n\n$x \\neq a.$\n\nTypically, this is expressed as\n\n$0 < |x-a| < \\delta.$\n\nThen, you have to understand the idea of (for a specific\n\n$\\epsilon > 0$\n\n) the neighborhood of radius\n\n$\\epsilon$\n\n around some fixed finite value\n\n$L$\n\n.\n\nBasically, this neighborhood is expressed as the set of all\n\n$y$\n\n, such that\n\n$|y - L| < \\epsilon.$\n\nNow, you are ready for the intuitive definition of a limit.\n\nSuppose that you see the assertion that\n\n$\\displaystyle \\lim_{x \\to a} f(x) = L$\n\n.\n\nAssigning the variable\n\n$y$\n\n to represent\n\n$f(x)$\n\n, what this assertion signfies, is that for any\n\n$\\epsilon > 0$\n\n there exists a\n\n$\\delta > 0$\n\n such that\n\nIf\n\n$x$\n\n is in a neighborhood of radius\n\n$\\delta$\n\n around\n\n$a$\n\n, and\n\n$x \\neq a$\n\n,\n\nThen\n\n$y = f(x)$\n\n is in a neighborhood of radius\n\n$\\epsilon$\n\n around\n\n$L$\n\n.\n\nMore formally, the assertion is written:\n\n$\\displaystyle \\lim_{x \\to a} f(x) = L$\n\n signifies that\n\nFor all\n\n$\\epsilon > 0$\n\n, there exists a\n\n$\\delta > 0$\n\n (where the choice of\n\n$\\delta$\n\n often depends on the choice of\n\n$\\epsilon)$\n\nsuch that\n\n$0 < |x - a| < \\delta \\implies |f(x) - L| < \\epsilon.$\n\nAs a very simple concrete example, suppose that\n\n$f(x) = 2x$\n\n, and you are asked to prove that\n\n$\\displaystyle \\lim_{x\\to 2} f(x) = 4.$\n\nIt turns out that for this particular problem, you can specify\n\n$\\displaystyle \\delta = \\frac{\\epsilon}{2}.$\n\nThen, if\n\n$\\displaystyle 0 < |x - 2| < \\delta = \\frac{\\epsilon}{2}$\n\n then you can conclude that\n\n$|f(x) - 4| = |2x - 4| = 2|x - 2| < 2\\delta = \\epsilon.$\n\nThis constitutes a proof that\n\n$\\displaystyle \\lim_{x \\to 2} f(x) = 4 ~: ~f(x) = 2x.$\n\nThe foundation of the proof was that you were able to identify a relationship between\n\n$\\delta$\n\n and\n\n$\\epsilon ~\\left(\\text{i.e. that} ~\\displaystyle \\delta = \\frac{\\epsilon}{2}\\right)$\n\n that allowed the required constraint to be satisfied.",
"is_accepted": false,
"score": 4
}
|
CC BY-SA (Stack Exchange content)
|
2,701,131
|
Find the value of $\theta$ on $\pi/2 \le \theta \le \pi$ at which the curve $r=\theta - \sin (3\theta)$ is closest to the pole.
|
Find the value of $\theta$ on $\pi/2 \le \theta \le \pi$ at which the curve $r=\theta - \sin (3\theta)$ is closest to the pole.
How can I approach this problem? I thought of finding the values of $\theta$ where $r=0$, but apparently that's not right. Calculators are allowed.
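A short numerical check, my own sketch assuming SciPy: minimise $r(\theta)=\theta-\sin(3\theta)$ on $[\pi/2,\pi]$ directly, for comparison with the critical points found analytically in the answer below.

```python
import numpy as np
from scipy.optimize import minimize_scalar

r = lambda t: t - np.sin(3*t)

res = minimize_scalar(r, bounds=(np.pi/2, np.pi), method='bounded')
print(res.x, res.fun)   # θ ≈ 2.50471, r ≈ 1.56189

# endpoints for comparison: r(π/2) = π/2 + 1, r(π) = π
print(np.pi/2 + 1, np.pi)
```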
|
space
|
https://math.stackexchange.com/questions/2701131/find-the-value-of-theta-on-pi-2-le-theta-le-pi-at-which-the-curve-r
|
{
"answer_id": 2701556,
"answer_link": null,
"answer_owner": "Rory Daulton",
"answer_text": "You are right that the first step is to find where $r=0$. As you have found, the only solution is $\\theta=0$ which is outside your allowed values of $\\theta$, so this step fails. The given curve does not go through the origin.\n\nYour next step is to find where $r$ is a minimum. The function for $r$ is continuous so we can use the usual calculus methods. We find the derivative and find where it equals zero.\n\n$$\\begin{align}\n\n0 & = \\frac d{d\\theta}\\left( r \\right) \\\\[2ex]\n\n & = \\frac d{d\\theta}\\left( \\theta-\\sin(3\\theta) \\right) \\\\[2ex]\n\n & = 1 - 3\\cos(3\\theta) \\\\[2ex]\n\n\\cos(3\\theta) & = \\frac 13 \\\\[2ex]\n\n3\\theta & = 2k\\pi\\pm\\cos^{-1}\\left(\\frac 13\\right) \\\\[2ex]\n\n\\theta & = \\frac{2k\\pi}3\\pm\\frac 13\\cos^{-1}\\left(\\frac 13\\right) \\\\[2ex]\n\n\\end{align}$$\n\nThe only values of $\\theta$ that fit in your required interval have $k=1$:\n\n$$\\begin{align}\n\n\\theta & = \\frac{2\\pi}3\\pm\\frac 13\\cos^{-1}\\left(\\frac 13\\right) \\\\[2ex]\n\n & \\approx 1.6840752966129, \\quad 2.5047149081735\n\n\\end{align}$$\n\nI'll let you finish form here. Note that there is no value of $\\theta$ that makes $r$ undefined, so we have found all the critical points. Find which of those two values of $\\theta$ has the minimum value of $r$ (the other has a maximum value). Compare that value of $r$ with those at the endpoints of the given interval and find the absolute minimum of $r$ with its corresponding value of $\\theta$. Ask if you need more help.\n\nHere is a polar graph of your problem, done on the TI-Nspire CX Graphing Calculator emulator. This confirms that the correct answer is the larger of the two values of $\\theta$ above, $2.5047149081735$. This graph also shows the corresponding value of $r$ and the cartesian coordinates.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
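A quick numerical cross-check of this answer (a sketch, not from the original thread; NumPy and the grid density are arbitrary choices):

```python
import numpy as np

# Brute-force r = theta - sin(3*theta) on pi/2 <= theta <= pi.
theta = np.linspace(np.pi / 2, np.pi, 2_000_001)
r = theta - np.sin(3 * theta)
print(theta[np.argmin(r)], r.min())   # ~2.5047149, r ~ 1.5619

# The two interior critical points from 1 - 3*cos(3*theta) = 0 with k = 1:
for sign in (-1, 1):
    t = (2 * np.pi + sign * np.arccos(1 / 3)) / 3
    print(t, t - np.sin(3 * t))       # ~1.6841 (local max), ~2.5047 (local min)
```

The brute-force minimum lands on the larger critical point, matching the calculator result described in the answer.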
|
5,108,174
|
Different (but equivalent) expression of a pullback
|
Consider the map
$\varphi: M \to N$
,
$x^i$
a coordinate system on
$M$
and
$x'^i$
a coordinate system on
$N$
.
$\alpha$
is a form.
I was given the "fact" that
$$(\varphi^*\alpha)_i(p) =\frac{\partial x'^j}{\partial x^i}\big(p\big)\;\alpha_j(x')$$
and
$$(\varphi^*\alpha)_i(p) =\frac{\partial x^k}{\partial x'^i}\big(\varphi(p)\big)\;\alpha_k\big(\varphi(p)\big)$$
are indeed
the same
relation, even if "viewed from different perspectives".
Unfortunately I could not reconcile them. Can you give a suggestion?
|
Lo Scrondo
|
https://math.stackexchange.com/questions/5108174/different-but-equivalent-expression-of-a-pullback
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
5,108,048
|
Do you determine the number system of a definition (using = or :=) after evaluating, or is it declared beforehand?
|
When you have a definition (usually using the "
$:=$
" or the normal equality symbol "
$=$
") in math, do you determine the number system of the output/variable (usually on the LHS of the "
$:=$
" or "
$=$
" symbol) after evaluating the formula given for it (usually on the RHS of the definition/equality symbol), or do you already have to declare the number system for the output (LHS of equality) beforehand (like when you just state the definition. So then after evaluating the formula on the RHS, we must find solutions that match our pre-declared number system for the output on the LHS)?
I'm not sure, but I think that since it's a definition, it's defined as whatever the other thing/formula is equal to (and whatever number system it exists in)(on the RHS), so if the formula evaluates to a real or complex or infinite number, then the thing being defined (on the LHS) is also in the real or complex or extended real (for infinite) number systems (i.e., we found out the number systems after evaluating, and we didn't declare it beforehand). But I'm also confused because this contradicts what happens for functions. For example, if we are defining a function (like
$y=\sqrt{x})$
(or using the := symbol,
$y:=\sqrt{x}$
), then we must define the number system of the codomain (i.e., the output
$f(x)$
or
$y$
of the function that's being defined) beforehand (like
$y \in \Bbb{R}$
or
$y \in \Bbb{C}$
). So, for defining functions, the formula/rule for the function doesn't tell us its number system, and we have to declare it beforehand.
Also (similar question as above), let's say we have something like the limit definition of a derivative or an infinite sum (limit of partial sums). Then do we find the number system of the output after evaluating the limit (i.e., we find out after evaluating the limits that a derivative and infinite sum must be real numbers (or extended reals if the limit goes to infinity, right?)? Or do we have to declare the number system of the output beforehand, when we are just stating the definition (i.e., we must declare that a derivative and infinite sum must be in the real numbers from the beginning, and then we find solutions that exist in the reals by evaluating the limit, which would then verify our original assumption/declaration since we found solutions in the real numbers)? But then for this specific method (where we declare the number system beforehand), then if we get a limit of infinity, we define it to be DNE/undefined (since we usually like to work in a real number field), but our original declaration was that a derivative and infinite sum must be real numbers only. But from our formula (on the RHS) and from the definition of a limit, we can get either a real number or infinity (extended reals), so then how would this work (like would infinity be a valid value/solution or not, and would it be an undefined or defined answer)? So basically, whenever we have these types of definitions in math (like formulas), does that mean we find the number system of the output (what we're defining) after evaluating the formula, or do we declare the number system it has to be (then we find solutions in that number system using the formula) beforehand?
Also (another example related to the same question above), if we have a formula like
$A=\pi r^2$
(or
$A:=\pi r^2$
for a definition) (area of a circle), or any other formula (for example, arithmetic mean formula, density formula, velocity/speed formula, integration by parts formula, etc.), then do we determine the number system of the "object being defined" (on the LHS) after evaluating the formula (on the RHS), or is it declared beforehand (like for the whole equation or just the LHS object)? For example, for
$A=\pi r^2$
(or
$A:=\pi r^2$
), do we determine that area (
$A$
) must be a real number after finding that formula is also a real number (since if
$r$
is a real number, then
$\pi r^2$
is also a real number based on real number operations) (similar to my explanation in paragraph 2 of how I think definitions work)? Or do we have to declare beforehand that area (
$A$
) must be a real number, and then we must find solutions from the formula (
$\pi r^2$
) that are also real numbers (which is always true for this example since
$\pi r^2$
is always real) for the equation/definition to be valid (similar to how functions and codomains work)?
Sorry for the long question, and if it's confusing. Please let me know if any clarification is needed. Any help regarding the assumptions of existence and number systems in equations/definitions/formulas would be greatly appreciated. Thank you!
EDIT: I am adding these 3 options to my question to make it clearer:
Option #1: Explicitly declaring the number system for the output:
Like we declare beforehand that for the definition
$A:=B$
(or
$A=B$
) where
$A$
is the output and
$B$
is a formula,
$A \in \Bbb{R}$
, or we use functional-definition (like
$f:\Bbb{R} \to \Bbb{R}$
, where we define the number system of the output (which would be
$A$
for this example) beforehand as well. We also have to declare the number system for the operations and numbers being used for the formula for
$B$
(i.e., we declare the general/ambient number system for the operations).
Option #2: Implicitly declaring the number system for everything:
Like for
$A:=B$
(or
$A=B$
), we declare that the general/ambient number system for the whole equation/definition to be
$\Bbb{R}$
, so then this would include the operations in the formula for
$B$
, the output of
$B$
, and the value of
$A$
(everything in the equation).
Option #3: Determining the number system for
$A$
after evaluating
$B$
(the RHS):
Like if we have
$A:=B$
(or
$A=B$
, but for this example, this only applies to
$A=B$
(using an equality symbol), we declare that the general/ambient number system for
$B$
is
$\Bbb{R}$
, so the operations and output for
$B$
must be in
$\Bbb{R}$
, and since
$A$
is
defined to be equal to
$B$
(not just equal to
$B$
), then
$A$
must also be in
$\Bbb{R}$
. Also, I think this option only applies where it is an explicit definition (
$A:=B$
), and usually does not apply for a general equality (
$A=B$
). However, it can sometimes apply to a general equality (
$A=B$
) only if it's similar to a formula or definition, not a relationship (like
$V=IR$
(Ohm's Law) or integration by parts (IBP is a relationship, not a formula/definition, since it's proven from the product rule, so all integrals have to exist beforehand, I think), since these are relationships between variables/quantities, so you need to know the number system for every variable beforehand (i.e., for
$V=IR$
, we need to know
$V, I, R \in \Bbb{R}$
, right?)).
So, which is correct from options 1, 2, and 3, or are all of them correct? Thank you!
|
Aaditya Visavadiya
|
https://math.stackexchange.com/questions/5108048/do-you-determine-the-number-system-of-a-definition-using-or-after-evaluat
|
{
"answer_id": 5108053,
"answer_link": null,
"answer_owner": "Ethan Bolker",
"answer_text": "There is no rule that answers your question. The point of written mathematics is to communicate between writer and reader. The writer must provide whatever is necessary. How much is necessary depends on how much context the reader and writer share.\n\nThat principle applies much more generally than when you are dealing with what you call \"different kinds of numbers\". Definitions and formulas appear in more advanced mathematics where the objects need not be numbers of any kind.\n\nThe definition of the derivative looks the same whether you are studying elementary calculus of complex analysis.\n\nIf you are solving a quadratic equation the context matters and the writer should tell you in advance if it's not clear from the surrounding material.\n\nIf you are writing formulas for the areas of geometric objects it's implicit that the variables represent real numbers.\n\nThe domain and the codomain are officially part of the definition of a function. If there's any doubt about what they are in any particular case the author should clarify in advance.",
"is_accepted": false,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
|
1,851,459
|
Understanding this proof that $\lim\limits_{h\to 0}\frac{\cos(h)-1}{h}=0$
|
I need help understanding how this limit is proved? :
Show that $$\lim_{h\to 0} \frac{\cos (h)-1}{h}=0$$
Proof
:
Using the half angle formula, $\cos h = 1-2 \sin^2(h/2)$
$$\lim_{h\to 0} \frac{\cos (h)-1}{h}\\=\lim_{h\to 0}( -\frac{2 \sin^2(h/2)}{h})\\=-\lim_{\theta \to 0}\frac{\sin \theta}{\theta} \sin \theta\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{(Let $\theta=h/2$)} \\ = -(1)(0)\\=0$$
I have no idea how this proof is done, so I apologize for the lack of my own thoughts in this question. I understand limits and know sin, cos, tan, but I am just very lost as what they did in each step. Can someone please explain all the steps of the proof as well as the half-angle formula. Thanks!
|
BlueMagic1923
|
https://math.stackexchange.com/questions/1851459/understanding-this-proof-that-lim-limits-h-to-0-frac-cosh-1h-0
|
{
"answer_id": 1851473,
"answer_link": null,
"answer_owner": "Bernard",
"answer_text": "The simplest proof is this:\n\n$$\\frac{\\cos h-1}h=\\frac{(\\cos h-1)(\\cos h+1)}{(\\cos h+1)h}=\\frac{\\cos^2h-1}{(\\cos h+1)h}=-\\frac{\\sin^2h}{(\\cos h+1)h}=-\\frac{\\sin h}h\\cdot\\frac{\\sin h}{\\cos h+1}.$$\n\nThe first fraction tends to $1$, the second tends to $\\dfrac 02=0$, hence the limit is $\\color{red}0$.\n\nFor the proof you mention, at the third line, you should have\n\n$$=\\lim_{h\\to 0}\\Bigl( -\\frac{2 \\sin^2(h/2)}{h}\\Bigr)=\\lim_{h\\to 0}\\Bigl( -\\frac{\\sin^2(h/2)}{h/2}\\Bigr)=\\dots$$",
"is_accepted": false,
"score": 10
}
|
CC BY-SA (Stack Exchange content)
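A tiny numerical illustration of this limit (a sketch, not from the original thread; it assumes double precision is adequate at these step sizes):

```python
import numpy as np

# (cos(h) - 1)/h for shrinking h; the values track -h/2 and tend to 0.
for h in 10.0 ** -np.arange(1, 8):
    print(f"{h:.0e}  {(np.cos(h) - 1) / h:+.3e}")
```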
|
4,965,083
|
Understanding "Both differential and integral calculus make use of the notion of convergence of infinite series to a well-defined limit".
|
I was reading the book "Algorithms for Optimization" and in the Introduction part of the book it is written that:
Modern calculus stems from the developments of Gottfried Wilhelm Leibniz (1646–1716) and Sir Isaac Newton (1642–1727). Both differential and integral calculus make use of the notion of convergence of infinite series to a well-defined limit.
I'm wondering what the last sentence means. I'm familiar with the Riemann integral for calculating definite integrals, but how are indefinite integrals and differentiation related to the "notion of convergence of infinite series"?
|
user1380196
|
https://math.stackexchange.com/questions/4965083/understanding-both-differential-and-integral-calculus-make-use-of-the-notion-of
|
{
"answer_id": 5108147,
"answer_link": null,
"answer_owner": "Alessandro",
"answer_text": "Maybe this fact will go beyond the scope of the question but I think it might be helpful.\n\nLet\n\n$(B, \\|\\cdot\\|)$\n\n be a normed vector space. These kind of spaces naturally arise in analysis. In particular, we are more interested in complete normed vector spaces, called Banach spaces, in order to ensure that every Cauchy sequence has a limit. This property (completeness) turns out to be crucial when dealing with this kind of spaces,for example when working with\n\n$L^p$\n\n spaces (but there are much more examples of Banach spaces).\n\nNow, if we want to establish completeness for the normed space\n\n$B$\n\n we would have to prove that every Cauchy sequence converges to an element of the space. It turns out that this condition is equivalent to the requirement that every absolutely convergent series in norm has actually a well defined limit.\n\nPlainly, given\n\n$(v_n) \\in B$\n\n this condition can be rephrased as:\n\n$$\n\n\\sum_{n=0}^\\infty \\|v_n\\| < \\infty \\rightarrow \\exists v\\in B : \\lim_{N \\to \\infty} \\sum_{n=0}^N v_n = v\n\n$$\n\nThis simple fact, somehow connects the notion of convergent sequence and series. Indeed, it is only a straightforward application of the fact that for every Cauchy sequence\n\n$(a_n)_{n\\in \\mathbb{N}}$\n\n, possibly thinning the sequence, you may assume\n\n$\\sum_{n=0}^\\infty (a_{n+1}-a_n)$\n\n is absolutely convergent in norm.",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
1,100,368
|
Closed form for ${\large\int}_0^\infty\frac{x-\sin x}{\left(e^x-1\right)x^2}\,dx$
|
I'm interested in a closed form for this simple looking integral:
$$I=\int_0^\infty\frac{x-\sin x}{\left(e^x-1\right)x^2}\,dx$$
Numerically,
$$I\approx0.235708612100161734103782517656481953570915076546754616988...$$
Note that if we try to split the integral into two parts, each with only one term in the numerator, then both parts will be divergent.
|
Laila Podlesny
|
https://math.stackexchange.com/questions/1100368/closed-form-for-large-int-0-infty-fracx-sin-x-leftex-1-rightx2-dx
|
{
"answer_id": 1100403,
"answer_link": null,
"answer_owner": "Jack D'Aurizio",
"answer_text": "$$\\frac{x-\\sin x}{x^2}=\\sum_{n\\geq 1}\\frac{(-1)^{n+1}}{(2n+1)!}x^{2n-1},$$\n\nand since:\n\n$$ \\int_{0}^{+\\infty}\\frac{x^{2n-1}}{e^x-1}\\,dx = (2n-1)!\\cdot \\zeta(2n),$$\n\nwe have:\n\n$$\\begin{eqnarray*} &&\\int_{0}^{+\\infty}\\frac{x-\\sin x}{x^2(e^x-1)}\\,dx = \\sum_{n\\geq 1}\\frac{(-1)^{n+1}}{2n(2n+1)}\\zeta(2n)\\\\&=&\\color{red}{\\sum_{n\\geq 1}\\left(-1+n\\arctan\\frac{1}{n}+\\frac{1}{2}\\,\\log\\left(1+\\frac{1}{n^2}\\right)\\right)}\\\\&=&\\color{blue}{\\log\\sqrt{\\frac{\\sinh \\pi}{\\pi}}+\\sum_{n\\geq 1}\\left(-1+n\\arctan\\frac{1}{n}\\right)}.\\tag{1}\\end{eqnarray*} $$\n\nCombining this identity with the\n\nrobjonh's answer to another question\n\n, we finally get:\n\n$$\\color{purple}{\\int_{0}^{+\\infty}\\frac{x-\\sin x}{x^2(e^x-1)}\\,dx=\\frac{1}{2}+\\frac{5\\pi}{24}-\\log\\sqrt{2\\pi}+\\frac{1}{4\\pi}\\operatorname{Li}_2(e^{-2\\pi})}.\\tag{2}$$\n\nOn the other hand, the identity claimed by user111187,\n\n$$ \\int_{0}^{+\\infty}\\frac{x-\\sin x}{x(e^x-1)} = \\gamma+\\Im\\log\\Gamma(1+i)\\tag{3} $$\n\nfollows from the\n\nintegral representation for the $\\log\\Gamma$ function\n\n and for the\n\nEuler-Mascheroni constant\n\n. By considering the Weierstrass product for the $\\Gamma$ function,\n\n$$\\Gamma(z+1) = e^{-\\gamma z}\\prod_{n\\geq 1}\\left(1+\\frac{z}{n}\\right)^{-1}e^{\\frac{z}{n}}$$\n\nwe have:\n\n$$ \\log\\Gamma(z+1) = -\\gamma z + \\sum_{n\\geq 1}\\left(\\frac{z}{n}-\\log\\left(1+\\frac{z}{n}\\right)\\right)$$\n\nso:\n\n$$ \\int_{0}^{+\\infty}\\frac{x-\\sin x}{x(e^x-1)}\\,dx = \\sum_{n\\geq 1}\\left(\\frac{1}{n}-\\arctan\\frac{1}{n}\\right).\\tag{4}$$",
"is_accepted": true,
"score": 15
}
|
CC BY-SA (Stack Exchange content)
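The closed form in $(2)$ can be checked numerically (a sketch, not from the original thread, assuming the mpmath library; `polylog` is its standard polylogarithm):

```python
from mpmath import mp, quad, polylog, pi, log, sqrt, exp, sin, inf

mp.dps = 30
lhs = quad(lambda x: (x - sin(x)) / ((exp(x) - 1) * x**2), [0, inf])
rhs = mp.mpf(1)/2 + 5*pi/24 - log(sqrt(2*pi)) + polylog(2, exp(-2*pi)) / (4*pi)
print(lhs)   # 0.2357086121001617341037825176...
print(rhs)   # agrees to working precision
```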
|
5,108,131
|
What is the fixed point of $\sin(\cos(\tan(x))) = x$?
|
The fixed point of
$\sin(\cos(\tan(x))) = x$
?
|
Shan yu Liew
|
https://math.stackexchange.com/questions/5108131/what-is-the-fixed-point-of-sin-cos-tanx-x
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
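No answer is recorded for this entry, but the fixed point can at least be located numerically (a sketch; the bracketing interval $[0,1]$ is an assumption justified by a sign change, and $\tan$ is continuous there because $1<\pi/2$):

```python
from math import sin, cos, tan
from scipy.optimize import brentq

h = lambda x: sin(cos(tan(x))) - x
root = brentq(h, 0.0, 1.0)        # h(0) > 0 and h(1) < 0, so a root exists
print(root, sin(cos(tan(root))))  # a fixed point near 0.65
```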
|
5,108,129
|
How to prove it
|
Theorem (Riemann).
If
$f(x)$
is Riemann integrable in the interval
$a \leq x \leq b$
, then:
$$\lim_{k \to +\infty} \int_a^b f(x) \sin kx \; dx = 0 \;.$$
|
Wayne yeung
|
https://math.stackexchange.com/questions/5108129/how-to-prove-it
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
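No answer is recorded for this entry; the statement can still be illustrated numerically (a sketch; $f(x)=\sqrt{x}$ on $[0,2]$ is an arbitrary integrable choice, and `weight='sin'` is SciPy's oscillatory-quadrature option):

```python
import numpy as np
from scipy.integrate import quad

f = np.sqrt                        # any Riemann integrable f will do
for k in (1, 10, 100, 1000):
    val, _ = quad(f, 0, 2, weight='sin', wvar=k)
    print(k, val)                  # decays toward 0 as k grows
```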
|
5,108,112
|
Is $F(z)=1/(1+z^2)$ meromorphic or not?
|
Is $F(z)=1/(1+z^2)$ meromorphic or not?
|
PRAKASH K
|
https://math.stackexchange.com/questions/5108112/fz-1-1z2-is-meromorphic-or-not
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
4,429,681
|
If $\int_n^m{f(x,y)}dy=g(x)$, is there a way to find or approximate $f(x,y)$ given $g(x)$?
|
If I'm given
$f(x,y)$
, when
$$\int_n^m{f(x,y)}dy=g(x) ,$$
then I know how that I can at least approximate
$g(x)$
using a Riemann sum. However, if I am instead given
$g(x)$
I don't know how to even approximate
$f(x,y)$
, other than by trying out every possible function I think that
$f(x,y)$
might be. Is there a way to find or approximate
$f(x,y)$
that's better than guess and check given
$g(x)$
?
|
Anders Gustafson
|
https://math.stackexchange.com/questions/4429681/if-int-nmfx-ydy-gx-is-there-a-way-to-find-or-approximate-fx-y-giv
|
{
"answer_id": 4429684,
"answer_link": null,
"answer_owner": "SolubleFish",
"answer_text": "This equation is far from uniquely determining\n\n$f$\n\n, and there are many solutions.\n\nFor example, if\n\n$\\chi:[n,m]\\to\\mathbb R$\n\n is a continuous function whose integral is\n\n$1$\n\n (and there are many of those), then\n\n$f(x,y) = g(x) \\chi(y)$\n\n is a solution.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
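A quick check of the answer's family of solutions (a sketch, not from the original thread; the particular $g$, the constant $\chi$, and the interval endpoints are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

n_, m_ = 0.0, 2.0
chi = lambda y: 1.0 / (m_ - n_)      # continuous, integrates to 1 on [n, m]
g = lambda x: np.sin(x) + 2.0
f = lambda x, y: g(x) * chi(y)       # one of infinitely many solutions
for x in (0.0, 0.7, 1.5):
    val, _ = quad(lambda y: f(x, y), n_, m_)
    print(val, g(x))                 # equal up to quadrature error
```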
|
5,108,078
|
Reverse-engineering for an integral by setting the function equal to the quotient rule?
|
I'm currently learning integrals (antiderivative is what we're calling it in class). It's my first day (It's very fun!). I'm trying to solve the integral of
$$\frac{(x+1)^2}{3x}$$
I understand you can just simplify the terms of
$\frac{(x+1)^2}{3x}$
into
$\frac{x^2}{3x} + \frac{2x}{3x} + \frac{1}{3x}$
and then easily find the integral, but I'm trying to do it a different way using the quotient rule. I want to set g'h - gh' =
$(x+1)^2$
and
$h^2$
= 3x and then 'reverse engineer' from there. Any way to do it? My professor tried it and was unable to do it; she said it wasn't possible because you can get h and h', but not g and g'. But I want to understand why. I really feel like this more 'rigorous way' should be possible. Or maybe I'm just very confused/lost...
Thanks!
|
Shaheer Zaighum
|
https://math.stackexchange.com/questions/5108078/reverse-engineering-for-an-integral-by-setting-the-function-equal-to-the-quotien
|
{
"answer_id": 5108083,
"answer_link": null,
"answer_owner": "Anne Bauval",
"answer_text": "This idea is of no help, because knowing\n\n$h$\n\n (\n\n$=\\pm\\sqrt{3x}$\n\n) and\n\n$k$\n\n (\n\n$=(x+1)^2$\n\n), the standard method to solve\n\n$g'h-gh'=k$\n\n is to let\n\n$g=fh$\n\n and look for\n\n$f$\n\n such that\n\n$(fh)'h-(fh)h'=k$\n\n, i.e.\n\n$f'=k/h^2$\n\n, which leads you back to your initial problem.",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
5,107,693
|
How to extend this function into positive real numbers from natural numbers
|
So consider the following sum
$$
H(n) = \sum_{k=1}^{n} \frac{1}{k}
$$
If you consider this sum as a function of n, the domain of this function is natural numbers.
However, the domain of this function can be extended into positive real numbers using Euler's formula
$$H(n) = \int_{0}^{1} \frac{1 - x^{n}}{1 - x} \, dx$$
This function above will be identical to the first one for positive integers, but will also be defined for positive real numbers.
The reason this works is that the expression under the integral is just the geometric-sum formula, and integrating from 0 to 1 gives the $n$-th harmonic number.
$$\int_{0}^{1} \left( 1 + x + x^{2} + x^{3} + \ldots + x^{n-2} + x^{n-1} \right) \, dx$$
But writing this in the form like this
$\int_{0}^{1} \frac{1 - x^{n}}{1 - x} \, dx$
, then allows n to be any positive real number.
I was wondering if there is any similar (or not so similar) way to extend a more complicated sum function into positive real numbers from natural numbers. Here is the function:
$$P(n) \;=\; \sum_{k=1}^{n+1} \frac{i^{k}}{(z+i)^{k}\,\Gamma(n+2-k)}$$
$i$
is the imaginary unit and
$z$
is just some number. I understand that this expression is much more complicated, than the previous one, but I want to see if I can coherently define for example
$P(2.5)$
,
$P(3.2)$
, etc in terms of z.
I tried using an approach similar to the Euler's approach in harmonic sum case, but could not find an integral which would fit. But maybe I am missing something. If there is any other, more advanced approach I can take, I would gladly look into it as well.
|
Egor Zaytsev
|
https://math.stackexchange.com/questions/5107693/how-to-extend-this-function-into-positive-real-numbers-from-natural-numbers
|
{
"answer_id": 5107735,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "As said in comments, the incomplete gamma function or, better, the exponential integral function are the solutions.\n\n$$P_n(z) \\;=\\; \\sum_{k=1}^{n+1} \\frac{i^{k}}{(z+i)^{k}\\,\\Gamma(n+2-k)}=\\frac{e^{1-i z} (1-i z)\\,\\, E_{-(n+1)}(1-iz)-1}{\\Gamma (n+2)}\\tag 1$$\n\nDefining\n\n$t=(1-iz)$\n\n the expression is just\n\n$$\\frac {e^t}{t^{n+1} }\\,\\frac{\\Gamma (n+2,t)}{\\Gamma (n+2)}$$\n\nI do not think that interpolation is required since the\n\n$n$\n\n in the summation does not play the same role as the\n\n$n$\n\n in the function.\n\nFor a random test, using\n\n$z=\\pi$\n\n, interpolation of the function using for knots\n\n$n=1,2,\\cdots,10$\n\n (this is a very wide range).\n\nThe function gives\n\n$$P_{2.5}(\\pi)=-0.0529129 + 0.103660 \\,i$$\n\nSpline interpolation gives\n\n$-0.0521662 + 0.106454\\,i$\n\nHermite interpolation gives\n\n$ -0.0521662 +0.108183 \\,i$\n\nSimilarly, the function gives\n\n$$P_{3.2}(\\pi)=-0.0357736 + 0.0351827\\, i$$\n\nSpline interpolation gives\n\n$ -0.0360989+0.0345025 \\,i$\n\nHermite interpolation gives\n\n$-0.0365992+ 0.0355283 \\,i$",
"is_accepted": true,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
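Euler's integral representation quoted in the question is easy to test numerically (a sketch, not from the original thread; mpmath's `harmonic` implements the standard digamma-based extension of $H$):

```python
from mpmath import mp, quad, harmonic

mp.dps = 20
H = lambda n: quad(lambda x: (1 - x**n) / (1 - x), [0, 1])
print(H(5), harmonic(5))       # both 137/60 = 2.28333...
print(H(2.5), harmonic(2.5))   # the integral agrees off the integers too
```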
|
5,069,339
|
Alternatives or generalisations of $\int_0^1 \frac{\ln \left(x^2-x+1\right)}{x \ln x} d x$
|
The beautiful result of the integral
$$
\int_0^1 \frac{\ln \left(x^2-x+1\right)}{x \ln x} d x=\ln 2\ln3
$$
attracts me to tackle it. Noting that
$x^3+1=(x+1)(x^2-x+1)$
, we can split the integral into two parts as:
$$
\begin{aligned}
\int_0^1 \frac{\ln \left(x^2-x+1\right)}{x \ln x} d x
& =\int_0^1 \frac{\ln \left(x^3+1\right)-\ln(x+1)}{x \ln x} d x=J(3)-J(1),
\end{aligned}
$$
where
$J(a)=\int_0^1 \frac{\ln \left(x^a+1\right)-\ln \left(x+1\right)}{x \ln x} d x$
whose derivative w.r.t.
$a$
is
$$
\begin{aligned}
J^{\prime}(a) & =\int_0^1 \frac{x^a \ln x}{x \ln x\left(x^a+1\right)} d x \\
& =\int_0^1 \frac{x^{a-1}}{x^a+1} d x \\
& =\frac{1}{a}\left[\ln \left(x^a+1\right)\right]_0^1 \\
& =\frac{1}{a}\ln 2
\end{aligned}
$$
Integrating back yields
$$
\int_0^1 \frac{\ln \left(x^2-x+1\right)}{x \ln x} d x =J(3)-J(1)=\int_1^3 J^{\prime}(a) da =\ln 2 \int_1^3 \frac{1}{a} d a =\ln 2 \ln 3
$$
Generalisation 1
For any natural number
$n$
,
$$
\begin{aligned}
I_n & =\int_0^1 \frac{\ln \left(x^{2n}-x^{2 n-1}+x^{2 n-2}-\cdots+1\right)}{x \ln x} d x \\
& =\int_0^1 \frac{\ln \left(x^{2 n+1}+1\right)-\ln (x+1)}{x \ln x} d x\\&=
\int_1^{2 n+1} \frac{1}{a} \ln 2 d a\\&=\ln 2 \ln (2 n+1)
\end{aligned}
$$
For example,
$$
\int_0^1 \frac{\ln \left(x^8-x^7+x^6-x^5+x^4-x^3+x^2-x+1\right)}{x \ln x}dx=\ln 2\ln 9
$$
Generalisation 2
For any natural number
$n$
,
$$
\int_0^1 \frac{\ln (x^{2n}-x^n+1)}{x \ln x} d x = \int_0^1 \frac{\ln \left(x^{3 n}+1\right)-\ln \left(x^n+1\right)}{x \ln x} d x =\int_n^{3 n} \frac{1}{a} \ln 2 d a =\ln 2 \ln 3
$$
which is
independent
of the choice of
$n$
.
My question
Are there any alternatives or generalisations of the integral?
Your comments and alternatives/generalisations are highly appreciated.
|
Lai
|
https://math.stackexchange.com/questions/5069339/alternatives-or-generalisations-of-int-01-frac-ln-leftx2-x1-rightx
|
{
"answer_id": 5069666,
"answer_link": null,
"answer_owner": "user953715",
"answer_text": "Here is a result I found (which covers all OP's generalisations):\n\n$$\\forall n\\ge2, V_n:=\\int_{0}^{1}\\left(\\frac{\\ln\\Phi_n(x)}{x\\ln x}+\\frac{\\Lambda_1(n)}{1-x}\\right)dx=\\frac{1}{2}\\Lambda_2(n)$$\n\nWhere\n\n$\\Phi_n(x)$\n\n is the\n\n$n$\n\n-th cyclotomic polynomial,\n\n$\\Lambda_{k}(n)$\n\n is the generalized Von Mangoldt\n\nfunction\n\n.\n\nProof:\n\nFirst, consider the integral\n\n$$I(s)=\\int_{0}^{1}\\left(\\frac{\\ln(1-x^s)}{x\\ln x}+\\frac{\\ln s}{1-x}-\\frac{\\ln(1-x)}{\\ln x}\\right)dx$$\n\nDifferentiate w.r.t.\n\n$s$\n\n$$I'(s)=\\int_{0}^{1}\\left(\\frac{-x^{s-1}}{1-x^{s}}+\\frac{1}{s(1-x)}\\right)dx=\\int_{0}^{1}\\sum_{k=0}^{\\infty}\\left(\\frac{1}{s}x^k-x^{sk+s-1}\\right)dx$$\n\n$$=\\lim_{x\\to 1^{-}}\\sum_{k=0}^{\\infty}\\frac{x^{k+1}-x^{s(k+1)}}{s(k+1)}=\\frac{1}{s}\\lim_{x\\to 1^{-}}\\ln\\left(\\frac{1-x^s}{1-x}\\right)=\\frac{\\ln s}{s}$$\n\nNow, integrate to get\n\n$I(s)$\n\n$$I(s)=I(1)+\\int_{1}^{s}I'(t)dt=I(1)+\\frac{1}{2}\\ln^2s$$\n\nSince\n\n$n\\ge2,\\sum_{d|n}\\mu(d)=0$\n\n and\n\n$\\Phi_n(x)=\\prod_{d|n}(1-x^d)^{\\mu(\\frac{n}{d})}$\n\n , we have:\n\n$$V_n=\\int_{0}^{1}\\left(\\frac{\\ln\\Phi_n(x)}{x\\ln x}+\\frac{\\Lambda_1(n)}{1-x}\\right)dx=\\sum_{d|n}\\mu\\left(\\frac{n}{d}\\right)I(d)$$\n\n$$=\\sum_{d|n}\\mu\\left(\\frac{n}{d}\\right)\\left(I(1)+\\frac{1}{2}\\ln^2 d\\right)=\\frac{1}{2}\\Lambda_2(n)$$\n\nas desired. We can also define\n\n$V_1:=I(1)=\\int_{0}^{1}\\frac{\\psi^{(0)}(x+1)+\\gamma}{x}dx=1.2577468869...$\n\nExample:\n\n$$V_6=\\int_{0}^{1}\\left(\\frac{\\ln\\Phi_6(x)}{x\\ln x}+\\frac{\\Lambda_1(6)}{1-x}\\right)dx=\\int_{0}^{1}\\frac{\\ln(x^2-x+1)}{x\\ln x}dx$$\n\n$$=\\frac{1}{2}\\Lambda_2(6)=\\frac{1}{2}\\sum_{d|6}\\mu\\left(\\frac{6}{d}\\right)\\ln^2 d=\\ln(2)\\ln(3)$$\n\n For a bonus (see this\n\nanswer\n\n), we can calculate\n\n$\\Lambda_{2}(n)$\n\n as follow:\n\n$$\\Lambda_{2}(n)=\\sum_{d|n}\\Lambda_{1}(d)\\Lambda_{1}\\left(\\frac{n}{d}\\right)$$\n\nFinally, there is a nice example that I think it's worth mentioning (proof left as exercise)\n\n$$\\int_{0}^{1}\\frac{\\ln(x^8+x^7-x^5-x^4-x^3+x+1)}{x\\ln x}dx=0$$",
"is_accepted": false,
"score": 8
}
|
CC BY-SA (Stack Exchange content)
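A numerical confirmation of the motivating identity (a sketch, not from the original thread; mpmath's default quadrature never samples the endpoints, where the integrand's singularities are removable):

```python
from mpmath import mp, quad, log

mp.dps = 25
val = quad(lambda x: log(x**2 - x + 1) / (x * log(x)), [0, 1])
print(val)                 # ~0.76150
print(log(2) * log(3))     # ln 2 * ln 3, the claimed closed form
```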
|
5,108,054
|
When showing differentiability at the split of a piecewise function, can one simply use the derivatives of each piece?
|
Background
Let's say you have a piecewise function like
$$\displaystyle
f(x) = \begin{cases}
x^2+2 & x<0 \\
2e^x & x \geq 0
\end{cases}
$$
You may assume continuity is already proven.
Goal
Prove or disprove differentiability at
$x=0$
.
Method 1 - The shortcut
One easy way is to differentiate the two pieces, and show that they do not equal each other at
$x=0$
$$\displaystyle
f'(x) = \begin{cases}
2x & x<0 \\
2e^x & x \geq 0
\end{cases}
$$
Evaluated at
$x=0$
the two cases give different values, thus no differentiability at the split.
Method 2 - Rigor
Evaluate the limit definition of the derivative from LHS and RHS.
I.e., one would show that
$$\displaystyle
\lim\limits_{h \to 0^-} \frac{f(0+h)-f(0)}{h} \quad \neq \quad \lim\limits_{h \to 0^+} \frac{f(0+h)-f(0)}{h}
$$
and conclude, as in Method 1, that the two one-sided limits differ.
Question
Is method 1 always sufficient?
|
Alec
|
https://math.stackexchange.com/questions/5108054/when-showing-differentiability-at-the-split-of-a-piecewise-function-can-one-sim
|
{
"answer_id": 5108056,
"answer_link": null,
"answer_owner": "Wang YeFei",
"answer_text": "I think that your\n\n$f'(x) = \\begin{cases} 2x, x < 0 \\\\ 2e^x, x \\ge 0 \\end{cases}$\n\n is incorrect because if it were true then the formula says\n\n$f'(0) = 2e^{0} = 2$\n\n. But\n\n$f'(x)$\n\n does not exist at\n\n$x = 0$\n\n, and your method\n\n$2$\n\n just proved it that it doesn't exist. So method\n\n$1$\n\n you talked about is not sufficient.",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
1,162,663
|
Stuck on proving $\int_{-\infty}^\infty \cos(\frac{\pi}{a}x)\cos(\frac{3\pi}{a} x) \, \mathrm{d}x$ = 0
|
Can someone please help me to show how
$$\int_{-\infty}^\infty \cos(\frac{\pi}{a}x)\cos(\frac{3\pi}{a} x) \, \mathrm{d}x = 0$$
Attempt:
Trig Identity yields
$$= \frac{1}{2} \int_{-\infty}^\infty \cos(\frac{4\pi}{a}x) + \cos(\frac{2\pi}{a} x) \, \mathrm{d}x$$
$$= \frac{a}{2} (\frac{\sin(\frac{4\pi}{a}x)}{4\pi} + \frac{\sin(\frac{2\pi}{a}x)}{2\pi}) $$ evaluated from $-\infty$ to $\infty$
What is a nontrivial way to show that the last expression is zero?
My course notes say something about stretching of the sine function, which is not good enough for me.
|
Olórin
|
https://math.stackexchange.com/questions/1162663/stuck-on-proving-int-infty-infty-cos-frac-piax-cos-frac3-pia
|
{
"answer_id": 1162688,
"answer_link": null,
"answer_owner": "Mark Viola",
"answer_text": "The limit does not exist.\n\nHowever, in the theory of generalized functions (i.e., distribution theory), the limit\n\n$\\lim_{x\\to \\infty} \\sin(ax) = 0$\n\n.\n\nNOTE:\n\nThis is NOT a classical limit, but has a rigorous interpretation in the context of distribution theory. Formal (non-rigorous) application of distribution theory is used pervasively by physicists and engineers to obtain results (usually correct) very quickly without the need to enforce rigor. Examples include formal applications of the \"Dirac Delta\" and its derivatives and extending the Fourier transform to the space of objects (tempered distributions) that are not\n\n$L^1$\n\n or\n\n$L^2$\n\n functions (e.g., the Fourier transforms of\n\n$1$\n\n,\n\n$H$\n\n,\n\n$|x|^\\alpha$\n\n).",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
5,103,316
|
$\text{erfc}$ Integral
|
Is there any closed form for
$$I(m,n) = \int_{0}^{\infty} \text{erfc}^n(x^m) \mathrm dx \tag1$$
I was able to obtain a closed form when
$n = 1$
.
\begin{align}
I(m, 1) &= \int_{0}^{\infty} \text{erfc}(x^m) \mathrm dx \tag2\\ &= \left[ x \cdot \text{erfc}(x^m) \right]_{0}^{\infty} - \int_{0}^{\infty} x \left( -\frac{2m}{\sqrt{\pi}} x^{m-1} e^{-x^{2m}} \right) \mathrm dx \tag3 \\
&= \int_{0}^{\infty} \frac{2m}{\sqrt{\pi}} x^m e^{-x^{2m}} \mathrm dx \tag4 \\
&= \frac{2m}{\sqrt{\pi}} \int_{0}^{\infty} (u^{1/2}) (e^{-u}) \left( \frac{\mathrm du}{2m u^{1 - 1/(2m)}} \right) \tag5 \\
&= \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} u^{(1/(2m) - 1/2)} e^{-u} \mathrm du \tag6 \\
&= \frac{1}{\sqrt{\pi}} \Gamma\left(\frac{1}{2} + \frac{1}{2m}\right) \tag7
\end{align}
Where in
$(2)$
, I used integration by parts; in
$(4)$
the substitution
$u = x^{2m}$
was performed; and in
$(6)$
, I used the definition of the
$\Gamma$
function. Thus the case
$m=1$
admits a rather simple closed form.
Here is the solution for
$I(0.5, 4)$
I saw on
Instagram
.
\begin{align}
I &= \int_{0}^{\infty} \text{erfc}^4(\sqrt{x}) \mathrm dx \\ &= \frac{16}{\pi^2} \int_{0}^{\infty} \int_{0}^{1} \int_{0}^{1} \frac{\exp \left ( -z \left \{ 2 + x^{-2} + y^{-2} \right \}\right )}{(1+x^2)(1+y^2)} \mathrm dx \, \mathrm dy \, \mathrm dz \\
&= \frac{16}{\pi^2}\int_{0}^{1} \int_{0}^{1} \frac{1}{(1+x^2)(1+y^2)(2 + x^{-2} + y^{-2})} \mathrm dx \, \mathrm dy \\
&= \frac{16}{\pi^2} \int_{0}^{1} \frac{y^2}{(1+y^2)^2} \left (\frac{\pi}{4} - \frac{1}{\sqrt{2 + y^{-2}}} \tan^{-1} \sqrt{2 + y^{-2}} \right ) \mathrm dy \\
&= \frac{4}{\pi}I_1 - \frac{16}{\pi^2} I_2
\end{align}
where
$I_1$
is the nicer
$y$
-looking integral and
$I_2$
is the scarier
$\tan^{-1}$
integral.
$I_1 = \pi/8 - 1/4$
is easily solved.
$I_2$
is harder... By making the substitution
$v = \sqrt{2 + y^{-2}}$
, some algebra, then the substitution
$v = \sqrt{2} \cosh t$
, some integration by parts, ... , one is able to deduce the beautiful result that
$$I = \frac12 - \frac{1}{\pi} \left (6 - \frac{8}{\sqrt{3}} \right )$$
Here
is a related post.
This led me to believe there could be some general form to the integral
$(1)$
. If not, I would still appreciate the evaluation of a specific subcase of
$I(m,n)$
. Thanks
|
Maxime Jaccon
|
https://math.stackexchange.com/questions/5103316/texterfc-integral
|
{
"answer_id": 5103633,
"answer_link": null,
"answer_owner": "Brightsun",
"answer_text": "$\\newcommand{\\Li}{\\mathrm{Li}}\n\n\\newcommand{\\logr}[1]{\\log\\left(#1\\right)}\n\n\\newcommand{\\HypF}[4]{{}_{2}F_{1}\\left(\\begin{array}{cc} {#1} ,{#2} \\\\{#3} \\end{array} ;{#4} \\right)}\n\n\\newcommand{\\HypthreeFtwo}[6]{{}_{3}F_{2}\\left(\\begin{array}{cc} {#1} ,{#2} ,{#3} \\\\ {#4} , {#5}\\end{array};{#6}\\right)}\n\n\\renewcommand{\\a}{\\alpha}\n\n\\renewcommand{\\b}{\\beta}\n\n\\newcommand{\\Res}{\\mathbf{Res}}\n\n\\newcommand{\\Z}{\\mathbb{Z}}\n\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\N}{\\mathbb{N}}\n\n\\newcommand{\\R}{\\mathbb{R}}\n\n\\newcommand{\\C}{\\mathbb{C}}\n\n\\newcommand{\\am}{\\mathrm{am}}\n\n\\newcommand{\\sn}{\\mathrm{sn}}\n\n\\newcommand{\\cn}{\\mathrm{cn}}\n\n\\newcommand{\\dn}{\\mathrm{dn}}\n\n\\newcommand{\\ns}{\\mathrm{ns}}\n\n\\newcommand{\\nc}{\\mathrm{nc}}\n\n\\newcommand{\\nd}{\\mathrm{nd}}\n\n\\newcommand{\\scn}{\\mathrm{sc}}\n\n\\newcommand{\\cs}{\\mathrm{cs}}\n\n\\newcommand{\\sd}{\\mathrm{sd}}\n\n\\newcommand{\\ds}{\\mathrm{ds}}\n\n\\newcommand{\\cd}{\\mathrm{cd}}\n\n\\newcommand{\\dc}{\\mathrm{dc}}\n\n\\newcommand{\\dilogarithm}[1]{\\mathrm{Li}_2\\left({#1} \\right) }\n\n\\newcommand{\\trilogarithm}[1]{\\mathrm{Li}_3\\left({#1} \\right) }\n\n\\newcommand{\\polylogarithm}[2]{\\mathrm{Li}_{#1}\\left(#2\\right)}\n\n\\newcommand{\\risingfactorial}[2]{{#1}^{\\overline{#2}}\n\n}\n\n\\newcommand{\\fallingfactorial}[2]{{#1}^{\\underline{#2}}\n\n}\n\n\\renewcommand{\\sl}[1]{\\mathrm{sl}{(#1)}}\n\n\\newcommand{\\lem}{\\varpi}\n\n\\newcommand{\\erf}{\\mathrm{erf}}\n\n\\newcommand{\\erfc}{\\mathrm{erfc}}\n\n\\newcommand{\\cadd}[1][0pt]{\\mathbin{\\genfrac{}{}{#1}{0}{}{+}}}\n\n\\newcommand{\\Cdots}[1][0pt]{\\genfrac{}{}{#1}{0}{\\mbox{}}{\\cdots}}$\n\nAuthor of the solution of\n\n$I(0.5,4)$\n\n here. You probably saw the solution on my\n\nInstagram post\n\n. Personally, I highly doubt there will be a general formula even for fixed\n\n$n$\n\n or fixed\n\n$m$\n\n. I will give two particular cases of\n\n$n=3$\n\n$$\\int_0^{\\infty}\\erfc^3(x)\\, dx=\\frac{3}{\\sqrt{\\pi}}-\\frac{6\\sqrt 2}{\\pi^{3/2}}\\tan^{-1}\\sqrt{2}\\tag{1}$$\n\n$$\\int_0^\\infty\\erfc^3(\\sqrt{x})\\, dx=\\frac{1}{2}-\\frac{3-\\sqrt 3}{\\pi}\\tag{2}$$\n\nI will post a detailed solution when I have time but I can tell that I solved both\n\n$(1)$\n\n and\n\n$(2)$\n\n starting by integration by parts.\n\n$(5)$\n\n is used to solve\n\n$(1)$\n\n$$\\int \\erfc (x)\\, dx=x\\erfc(x)-\\frac{e^{-x^2}}{\\sqrt{\\pi}}\\tag{3}$$\n\n$$\\int x\\erfc(x) dx=\\frac{x^2}{2}\\erfc(x)-\\frac{x}{2\\sqrt{\\pi}}e^{-x^2}+\\frac{1}{4}\\erf(x)\\tag{4}$$\n\n$$\\int_0^{\\infty} e^{-yt^2}\\erfc(tx)\\, dt=\\frac{1}{\\sqrt{\\pi y}}\\tan^{-1}\\frac{\\sqrt{y}}{x}\\tag{5}$$",
"is_accepted": true,
"score": 5
}
|
CC BY-SA (Stack Exchange content)
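Both closed forms stated in this answer check out numerically (a sketch, not from the original thread; `erfc` is SciPy's complementary error function):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

lhs1, _ = quad(lambda x: erfc(x)**3, 0, np.inf)
rhs1 = 3/np.sqrt(np.pi) - 6*np.sqrt(2)*np.arctan(np.sqrt(2))/np.pi**1.5
lhs2, _ = quad(lambda x: erfc(np.sqrt(x))**3, 0, np.inf)
rhs2 = 0.5 - (3 - np.sqrt(3))/np.pi
print(lhs1, rhs1)   # ~0.23681 each
print(lhs2, rhs2)   # ~0.09640 each
```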
|
2,498,628
|
Proof only by transformation that : $ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx $
|
This was a question in our exam and I did not know which change of variables or trick to apply
How to show by inspection ( change of variables or whatever trick ) that
$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx \tag{I} $$
Computing the values of these integrals are known as routine. Further from their values, the equality holds. But can we show equality beforehand?
Note
: I am not asking for computation since it can be found
here
and we have as well that,
$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx =\sqrt{\frac{\pi}{8}}$$
and the result can be recover here,
Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?
.
Is there any trick to prove the equality in (I) without computing the exact values of these integrals beforehand?
|
Guy Fsone
|
https://math.stackexchange.com/questions/2498628/proof-only-by-transformation-that-int-0-infty-cosx2-dx-int-0-infty
|
{
"answer_id": 2507570,
"answer_link": null,
"answer_owner": "Guy Fsone",
"answer_text": "<Here is what I found\n\nEmploying the change of variables\n\n$2u =x^2$\n\n We get\n\n$$I=\\int_0^\\infty \\cos(x^2) dx =\\frac{1}{\\sqrt{2}}\\int^\\infty_0\\frac{\\cos(2x)}{\\sqrt{x}}\\,dx$$\n\n$$ J=\\int_0^\\infty \\sin(x^2) dx=\\frac{1}{\\sqrt{2}}\\int^\\infty_0\\frac{\\sin(2x)}{\\sqrt{x}}\\,dx $$\n\nSummary:\n\n We will prove that\n\n$J\\ge 0$\n\n and\n\n$I\\ge 0$\n\n so that, proving that\n\n$I=J$\n\n is equivalent to\n\n$$ \\color{blue}{0= (I+J)(I-J)=I^2 -J^2 =\\lim_{t \\to 0}I_t^2-J^2_t}$$\n\nWhere,\n\n$$I_t = \\int_0^\\infty e^{-tx^2}\\cos(x^2) dx~~~~\\text{and}~~~ J_t = \\int_0^\\infty e^{-tx^2}\\sin(x^2) dx$$\n\n$t\\mapsto I_t$\n\n and\n\n$t\\mapsto J_t$\n\n are clearly continuous due to the present of the integrand factor\n\n$e^{-tx^2}$\n\n.\n\nHowever, By Fubini we have,\n\n\\begin{split}\n\nI_t^2-J^2_t&=& \\left(\\int_0^\\infty e^{-tx^2}\\cos(x^2) dx\\right) \\left(\\int_0^\\infty e^{-ty^2}\\cos(y^2) dy\\right) \\\\&-& \\left(\\int_0^\\infty e^{-tx^2}\\sin(x^2) dx\\right) \\left(\\int_0^\\infty e^{-ty^2}\\sin(y^2) dy\\right) \\\\\n\n&=& \\int_0^\\infty \\int_0^\\infty e^{-t(x^2+y^2)}\\cos(x^2+y^2)dxdy\\\\\n\n&=&\\int_0^{\\frac\\pi2}\\int_0^\\infty re^{-tr^2}\\cos r^2 drd\\theta\\\\&=&\\frac\\pi4 Re\\left( \\int_0^\\infty \\left[\\frac{1}{i-t}e^{(i-t)r^2}\\right]' dr\\right)\\\\\n\n&=&\\color{blue}{\\frac\\pi4\\frac{t}{1+t^2}\\to 0~~as ~~~t\\to 0}\n\n\\end{split}\n\nTo end the proof:\n\n Let us show that\n\n$I> 0$\n\n and\n\n$J> 0$\n\n. Performing an integration by part we obtain\n\n$$J = \\frac{1}{\\sqrt{2}} \\int^\\infty_0\\frac{\\sin(2x)}{x^{1/2}}\\,dx=\\frac{1}{\\sqrt{2}}\\underbrace{\\left[\\frac{\\sin^2 x}{x^{1/2}}\\right]_0^\\infty}_{=0} +\\frac{1}{2\\sqrt{2}} \\int^\\infty_0\\frac{\\sin^2 x}{x^{3/2}}\\,dx\\color{red}{>0}$$\n\nGiven that\n\n$\\color{red}{\\sin 2x= 2\\sin x\\cos x =(\\sin^2x)'}$\n\n. Similarly we have,\n\n$$I = \\frac{1}{\\sqrt{2}}\\int^\\infty_0\\frac{\\cos(2x)}{\\sqrt{x}}\\,dx=\\frac{1}{2\\sqrt{2}}\\underbrace{\\left[\\frac{\\sin 2 x}{x^{1/2}}\\right]_0^\\infty}_{=0} +\\frac{1}{4\\sqrt{2}} \\int^\\infty_0\\frac{\\sin 2 x}{x^{3/2}}\\,dx\\\\=\n\n \\frac{1}{4\\sqrt{2}}\\underbrace{\\left[\\frac{\\sin^2 x}{x^{1/2}}\\right]_0^\\infty}_{=0} +\\frac{3}{8\\sqrt{2}} \\int^\\infty_0\\frac{\\sin^2 x}{x^{5/2}}\\,dx\\color{red}{>0}$$\n\nConclusion:\n\n$~~~I^2-J^2 =0$\n\n,\n\n$I>0$\n\n and\n\n$J>0$\n\n impliy\n\n$I=J$\n\n. Note that we did not attempt to compute neither the value of\n\n$~~I$\n\n nor\n\n$J$\n\n.\n\nExtra-to-the answer\n\n However using similar technique in above prove one can easily arrives at the following\n\n$$\\color{blue}{I_tJ_t = \\frac\\pi8\\frac{1}{t^2+1}}$$\n\n from which one get the following explicit value of\n\n$$\\color{red}{I^2=J^2= IJ = \\lim_{t\\to 0}I_tJ_t =\\frac{\\pi}{8}}$$\n\nSee also\n\nhere\n\n for more on (The Fresnel Integrals Revisited)",
"is_accepted": true,
"score": 32
}
|
CC BY-SA (Stack Exchange content)
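Although the accepted answer deliberately avoids computing the common value, it can be confirmed with oscillatory quadrature (a sketch, not from the original thread; the zero locations $\sqrt{n\pi}$ and $\sqrt{(n-\tfrac12)\pi}$ supplied to mpmath's `quadosc` are those of $\sin(x^2)$ and $\cos(x^2)$):

```python
from mpmath import mp, quadosc, sin, cos, sqrt, pi, inf

mp.dps = 20
I = quadosc(lambda x: cos(x**2), [0, inf], zeros=lambda n: sqrt((n - 0.5) * pi))
J = quadosc(lambda x: sin(x**2), [0, inf], zeros=lambda n: sqrt(n * pi))
print(I, J, sqrt(pi / 8))   # all ~0.626657068...
```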
|
9,286
|
Evaluation of Gaussian integral $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx$
|
How to prove
$$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$
|
Jichao
|
https://math.stackexchange.com/questions/9286/evaluation-of-gaussian-integral-int-0-infty-mathrme-x2-dx
|
{
"answer_id": 9292,
"answer_link": null,
"answer_owner": "Ross Millikan",
"answer_text": "This is an old favorite of mine.\n\nDefine $$I=\\int_{-\\infty}^{+\\infty} e^{-x^2} dx$$\n\nThen $$I^2=\\bigg(\\int_{-\\infty}^{+\\infty} e^{-x^2} dx\\bigg)\\bigg(\\int_{-\\infty}^{+\\infty} e^{-y^2} dy\\bigg)$$\n\n$$I^2=\\int_{-\\infty}^{+\\infty}\\int_{-\\infty}^{+\\infty}e^{-(x^2+y^2)} dxdy$$\n\nNow change to polar coordinates\n\n$$I^2=\\int_{0}^{+2 \\pi}\\int_{0}^{+\\infty}e^{-r^2} rdrd\\theta$$\n\nThe $\\theta$ integral just gives $2\\pi$, while the $r$ integral succumbs to the substitution $u=r^2$\n\n$$I^2=2\\pi\\int_{0}^{+\\infty}e^{-u}du/2=\\pi$$\n\nSo $$I=\\sqrt{\\pi}$$ and your integral is half this by symmetry\n\nI have always wondered if somebody found it this way, or did it first using complex variables and noticed this would work.",
"is_accepted": true,
"score": 222
}
|
CC BY-SA (Stack Exchange content)
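A one-line numerical confirmation (a sketch, not from the original thread, using SciPy's adaptive quadrature):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.exp(-x**2), 0, np.inf)
print(val, np.sqrt(np.pi) / 2)   # 0.8862269254... both
```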
|
5,108,014
|
Reference Request: Change of Variables for Infinite Series
|
In this post I want to describe a technique for proving the convergence/divergence of infinite series which I have been thinking about. I am curious whether there are texts which describe this technique so I can explore the idea further.
The idea essentially introduces a way to “change variables” in a given series. I think this is best understood through some examples.
Consider first
$$\sum_{n=1}^\infty \frac{1}{2^{\sqrt{n}}}.$$
The idea is to sort the terms into chunks based on which consecutive squares they fall between. We rewrite the expression with this idea in mind:
$$\sum_{k=1}^\infty \sum_{k^2\leq n<(k+1)^2}\frac{1}{2^{\sqrt{n}}}.$$
Now on each chunk, we estimate the series from above by the largest term in the chunk times the number of terms in each chunk. This becomes:
$$\sum_{k=1}^\infty \sum_{k^2\leq n<(k+1)^2}\frac{1}{2^{\sqrt{n}}}\leq \sum_{k=1}^\infty \frac{(k+1)^2-k^2}{2^{\sqrt{k^2}}}=\sum_{k=1}^\infty \frac{2k+1}{2^k}.$$
The ratio test then implies the bigger series converges, and so our original series converges as well.
Here's another example:
$$\sum_{n=2}^\infty \frac{1}{n^{1+\frac{1}{\sqrt{\log n}}}}=\sum_{k=1}^\infty \sum_{2^k\leq n<2^{k+1}}\frac{1}{n^{1+\frac{1}{\sqrt{\log n}}}}\leq \sum_{k=1}^\infty \frac{2^{k+1}-2^k}{(2^k)^{1+\frac{1}{\sqrt{\log 2^k}}}}=\sum_{k=1}^\infty \frac{2^k}{2^k\cdot 2^{\sqrt{k}/\sqrt{\log 2}}}=\sum_{k=1}^\infty \frac{1}{\left(2^{1/\sqrt{\log 2}}\right)^{\sqrt{k}}}.$$
The steps in this second example are the same as in the first, we just select exponential chunks instead of quadratic ones. We can check that
$$2^{\frac{1}{\sqrt{\log 2}}}\approx 3.537$$
and so by a simple generalization of the first example we can show that the series
$$\sum_{n=2}^\infty \frac{1}{n^{1+\frac{1}{\sqrt{\log n}}}}$$
converges. It seems like this technique is really good at understanding "edge cases" where the usual convergence tests from calculus are inconclusive. For instance, the ratio test is inconclusive in each of the above examples.
The above examples suggest the following general result, which is a generalization of the Cauchy condensation test (see
this post
): Suppose
$(a_n)$
is a nonincreasing sequence, and
$f:\mathbb{N}\to\mathbb{N}$
is a function such that
$f(1)=1$
and
$f(n)\geq n$
for all
$n$
. If
$$\sum_{n=1}^\infty \Delta[f](n)a_{f(n)}$$
converges, then
$\sum_{n=1}^\infty a_n$
converges, where
$\Delta[f](n)=f(n+1)-f(n)$
is the difference operator. Moreover, if
$$\sum_{n=1}^\infty \Delta[f](n)a_{f(n+1)}$$
diverges, then
$\sum_{n=1}^\infty a_n$
diverges. The proof of this statement is just a generalization of the arguments in the above examples, taking note of the fact that we can produce lower bounds on the series by taking the smallest term in each chunk times the number of terms in the chunk. The growth condition
$f(n)\geq n$
is just to ensure each chunk is nonempty. Additionally, if we know some more information about
$f$
, namely that
$C\Delta[f](n)\leq \Delta[f](n+1)$
for some constant
$C>0$
, then
$$C\sum_{n=1}^\infty \Delta[f](n)a_{f(n)}\leq \sum_{n=1}^\infty a_n\leq \sum_{n=1}^\infty \Delta[f](n)a_{f(n)},$$
and so in such a case we recover the same type of explicit bounds as in the Cauchy condensation test.
My main question is whether there is a source in which this result or a similar generalization of the Cauchy condensation test appears.
|
Eli Seamans
|
https://math.stackexchange.com/questions/5108014/reference-request-change-of-variables-for-infinite-series
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
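The question's first worked example is easy to test numerically (a sketch, not from the original thread; the truncation points are arbitrary, and the tails are negligible because $2^{-\sqrt{n}}$ decays quickly):

```python
from math import sqrt

s = sum(1 / 2**sqrt(n) for n in range(1, 20_001))       # ~3.79
bound = sum((2*k + 1) / 2**k for k in range(1, 200))    # approaches 5 exactly
print(s, bound)                                         # s sits below the chunk bound
```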
|
1,003,056
|
For continuous $f,g: [0,1] \to [0,1]$ with $f \circ g = g \circ f$ , there exists $x$ such that $f(x)=g(x)$
|
I have been stuck on this for hours now:
Let
$f,g: [0,1] \to [0,1]$
be continuous such that
$f \circ g = g \circ f$
. Show that there exists
$x \in [0,1]$
such that
$f(x)=g(x)$
My attempt
: It is easy to proof that
$f,g$
have both a fix point in the interval
$[0,1]$
. That means there exist
$a,b \in [0,1]$
such that
$f(a)=a$
and
$g(b)=b$
. Now we also know that
$$f(g(x))=g(f(x)), \text{ for all } x $$
So I can make use of that and say for example that:
$$f(g(b))=g(f(b))=f(b) $$
Which shows that
$f(b)$
is yet another fix point of
$g$
. Similarly, by the same argument I'd get:
$$f(g(a))=g(f(a))=g(a) $$
and therefore
$g(a)$
is yet another fix point of
$f$
. While this seems great and all but I am very unsure if my next steps are correct:
Define
$h: [0,1] \to [0,1]$
such that
$h(x)=g(x)-f(x)$
. Of course
$h$
is continuous, because
$f,g$
are. Then I'd obtain:
$$h(a)=g(a)-f(a)=g(a)-a \geq 0 \\h(b)=g(b)-f(b)=b-g(f(b))... $$
Where I am not quite sure if those two inequalities are right at all or just misleading me.
|
Spaced
|
https://math.stackexchange.com/questions/1003056/for-continuous-f-g-0-1-to-0-1-with-f-circ-g-g-circ-f-there-exist
|
{
"answer_id": 1003078,
"answer_link": null,
"answer_owner": "Hagen von Eitzen",
"answer_text": "Your idea is fine. A priori, $h(x)=g(x)-f(x)$ can take on any value $\\in[-1,1]$. But if we assume the claim of the problem statement is wrong, it never is $0$, hence by the IVT is either alsways $>0$ or always $<0$. Wlog. $h(x)>0$ for all $x$. Especially, whenever $a$ is a fixed point of $f$, we have $g(a)>f(a)=a$ as you showed. As you also showed, $g$ maps fixed point sof $f$ to fixed points of $f$. That is $a_0=a$, $a_{n+1}=g(a_n)$ givs us a strictly increasing sequence of fixed points of $f$. As the sequence is bounded, it must convereg to some $\\tilde a\\in[0,1]$. Then by continuity, $g(\\tilde a)=\\tilde a$ and $f(\\tilde a)=\\tilde a$, hence $h(\\tilde a)=0$, contradiction.",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
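A concrete instance of the theorem (a sketch, not from the original thread; the commuting pair $f(x)=x^2$, $g(x)=x^4$ is an arbitrary choice):

```python
import numpy as np

f = lambda x: x**2
g = lambda x: x**4
x = np.linspace(0, 1, 101)
print(np.allclose(f(g(x)), g(f(x))))               # True: f o g = g o f = x**8
print([t for t in x if abs(f(t) - g(t)) < 1e-12])  # coincidence points 0.0 and 1.0
```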
|
1,901,686
|
Doubt in step in the proof of Theorem 6.11 in Rudin's book
|
I want to understand a step in baby Rudin's theorem 6.11. The theorem says the following: Let
$f$
be Riemann-Stieljes integrable in
$[a,b]$
. Let
$m\leq f\leq M$
. Let
$\phi:[m,M]\to\mathbb{R}$
be continuous. Then
$h(x)=\phi(f(x))$
is Riemann-Stieljes integrable in
$[a,b]$
.
The proof goes like this. Since
$\phi$
is continuous in a compact set it is uniformly continuous in
$[m,M]$
. Let
$\epsilon>0$
. Then we can pick
$\delta<\epsilon$
such that
$|s-t|\leq\delta$
implies that
$|\phi(s)-\phi(t)|<\epsilon$
, where
$s,t\in[m,M]$
.
Let
$P$
be a partition of
$[a,b]$
.
$$
a=x_0\leq x_1\leq\ldots\leq x_n=b
$$
let
$\Delta x_i=x_i-x_{i-1}$
. Let
$M_i=\sup\{f(\Delta x_i)\}$
and
$m_i=\inf\{f(\Delta x_i)\}$
. Let
$M_i^*=\sup\{h(\Delta x_i)\}$
and
$m_i^*=\inf\{h(\Delta x_i)\}$
.
Let's divide the intervals in
$P$
in two categories. if
$M_i-m_i<\delta$
then
$i\in A$
. If not
$i\in B$
.
Ok, so far so good, no problems. The next thing he says is, for
$i\in A$
our choice of
$\delta$
shows that
$M_i^*-m_i^*\leq\epsilon$
. Can you prove this?
|
PhoenixPerson
|
https://math.stackexchange.com/questions/1901686/doubt-in-step-in-the-proof-of-theorem-6-11-in-rudins-book
|
{
"answer_id": 1901700,
"answer_link": null,
"answer_owner": "user66081",
"answer_text": "If $i \\in A$, i.e. $M_i - m_i < \\delta$, then $|f(x) - f(y)| \\leq M_i - m_i < \\delta$ for all $x,y$ from the $i$-th interval.\n\nThen $|h(x) - h(y)| = |\\phi(f(x)) - \\phi(f(y))| < \\epsilon$ for any $x,y$ from the $i$-th interval -- by the choice of $\\epsilon/\\delta$.\n\n$M_i^* - m_i^* \\stackrel{\\text{def}}{=} \\sup_{\\text{such } x} h(x) - \\inf_{\\text{such } y} h(y) \\leq \\sup_{\\text{such x,y}}|h(x) - h(y)| \\leq \\epsilon$, where \"such\" means \"from the interval $[x_{i-1}, x_i]$\".",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
5,107,848
|
Explanation on Landau $o$ little for $ \lim_{x\to 0} \frac{\log(\cos x) - \sinh(\alpha x)}{x^{6\alpha}} $
|
Calculate without Hopital
$$
\lim_{x\to 0} \frac{\log(\cos x) - \sinh(\alpha x)}{x^{6\alpha}}
$$
For
$\alpha>0$
we are in a
$\dfrac{0}{0}$
case; now
$\sinh(\alpha\,x) \sim \alpha\,x$
, while
$$
\log(\cos x) = \log(1+(\cos x-1)) \sim \cos x - 1 \sim -\frac{x^2}{2}
$$
is
$o(x)$
, then is little "
o
" than
$\alpha\,x$
; it follows that
$$
\lim_{x\to 0} \frac{\color{red}{\log(\cos x) - \sinh(\alpha x)}}{x^{6\alpha}}
= \lim_{x\to 0} \frac{\color{red}{-\sinh(\alpha\,x)}}{x^{6\alpha}}
= \lim_{x\to 0} \frac{-\alpha\,x}{x^{6\alpha}}
= -\alpha \, \lim_{x\to 0} x^{1-6\alpha}.
$$
This limit is zero for
$1-6\alpha>0$
, i.e., for
$\alpha<1/6$
; it is
$-1/6$
for
$\alpha=1/6$
, and
$-\infty$
for
$\alpha>1/6$
.
I have read that with the small “
o
” I can write
$$-\sinh(\alpha\,x)$$
instead of
$$\log(\cos x) -\sinh(\alpha\,x)$$
Why, is there a condition? Which?
I have done this, as alternative to small "
o
":
Consider
$$
\lim_{f(x)\to 0} \frac{\sinh(f(x))}{f(x)} = 1
$$
$$
\begin{align}
\lim_{x\to 0}\frac{\log(1-1+\cos x) - \sinh(\alpha x)}{x^{6\alpha}}
& =\lim_{x\to 0}\frac{\log(1-(1-\cos x)) - \sinh(\alpha x)}{x^{6\alpha}}\notag\\
&=\lim_{x\to 0} \frac{\log(1-(1-\cos x))}{x^{6\alpha}}-\lim_{x\to 0}\frac{\alpha x}{x^{6\alpha}}\cdot\frac{\sinh(\alpha x)}{\alpha x}\notag\\
&=\lim_{x\to 0} \frac{-(1-\cos x)-\alpha x}{x^{6\alpha}}
\end{align}
$$
If
$6\alpha=1$
:
$$
\lim_{x\to 0} \frac{-(1-\cos x)-\alpha x}{x^{6\alpha}}
=\lim_{x\to 0} \frac{-(1-\cos x)-\frac{1}{6}x}{x}
=\lim_{x\to 0}\frac{-(1-\cos x)}{x}-\frac{1}{6}
=-\frac{1}{6}
$$
Let
$\alpha<0$
:
$$
\lim_{x\to 0} \frac{-(1-\cos x)-\alpha x}{x^{6\alpha}}
$$
Then
$x^{6\alpha}\to \infty$
and the numerator tends to
$0$
, so the limit is
$0$
.
For
$\alpha>0$
:
$$
\lim_{x\to 0} \frac{-(1-\cos x)-\alpha x}{x^{6\alpha}}
=\lim_{x\to 0} \frac{-(1-\cos x)}{x^{6\alpha}}-\alpha \lim_{x\to 0}x^{1-6\alpha}
$$
This limit is zero for
$1-6\alpha>0$
, i.e., for
$\alpha<1/6$
; it equals
$-1/6$
for
$\alpha=1/6$
, and it is
$-\infty$
for
$\alpha>1/6$
.
Finally:
$$
\lim_{x\to 0} \frac{\log(\cos x) - \sinh(\alpha x)}{x^{6\alpha}} =
\begin{cases}
0 & \text{for } \alpha<1/6, \\[2mm]
-1/6 & \text{for } \alpha=1/6,\\[2mm]
-\infty & \text{for } \alpha>1/6.
\end{cases}
$$
|
Sebastiano
|
https://math.stackexchange.com/questions/5107848/explanation-on-landau-o-little-for-lim-x-to-0-frac-log-cos-x-sinh
|
{
"answer_id": 5107885,
"answer_link": null,
"answer_owner": "JC Q",
"answer_text": "The process you showed is very clear and complete. It is OK to try not to replace infinitesimals with others(me neither), but that does not mean to be unable to understand the validness of a replacement.\n\nIf you translate the original process with a none-\"\n\n$o$\n\n\" one, it should look like\n\n$$\n\n\\begin{aligned}\n\n\\lim_{x\\to0}\\frac{\\ln(\\cos x)}{\\sinh(\\alpha x)}&=-\\frac{1}{\\alpha}\\lim_{x\\to0}\\frac{\\ln(1+(\\cos x-1))}{\\cos x-1}\\frac{1-\\cos x}{x^2}\\frac{\\alpha x}{\\sinh(\\alpha x)}x\\\\\n\n&=-\\frac{1}{\\alpha}\\cdot1\\cdot\\frac{1}{2}\\cdot1\\cdot0\\\\\n\n&=0\n\n\\end{aligned}\n\n$$\n\nwhere\n\n$\\lim\\limits_{x\\to0}\\dfrac{\\ln(1+(\\cos x-1))}{\\cos x-1}=1$\n\n is shown by substitution and\n\n$\\ln(1+u)\\sim u$\n\n,\n\n$\\lim\\limits_{x\\to0}\\dfrac{1-\\cos x}{x^2}=\\dfrac{1}{2}$\n\n by\n\n$1-\\cos x\\sim\\dfrac{1}{2}x^2$\n\n,\n\n$\\lim\\limits_{x\\to0}\\dfrac{\\alpha x}{\\sinh(\\alpha x)}$\n\n by substitution and\n\n$\\sinh u\\sim u$\n\n.\n\nThe posted answer introduces\n\n$o(\\cdot)$\n\n symbol to make it easier. By\n\n$\\log(\\cos x)=o(x)$\n\n and\n\n$\\sinh(\\alpha x)\\sim\\alpha x$\n\n, it shows\n\n$$\n\n\\begin{aligned}\n\n\\lim_{x\\to0}\\frac{\\log(\\cos x)-\\sinh(\\alpha x)}{x^{6\\alpha}}&=\\lim_{x\\to0}\\frac{\\sinh(\\alpha x)}{x^{6\\alpha}}\\left(\\frac{\\log(\\cos x)}{\\sinh(\\alpha x)}-1\\right)\\\\\n\n&=\\lim_{x\\to0}\\frac{\\sinh(\\alpha x)}{x^{6\\alpha}}(0-1)\\\\\n\n&=\\lim_{x\\to0}\\frac{-\\sinh(\\alpha x)}{x^{6\\alpha}}\n\n\\end{aligned}\n\n$$\n\nand from then on discusses the behavior of the latter limit.",
"is_accepted": true,
"score": 4
}
|
CC BY-SA (Stack Exchange content)
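The three regimes in the final answer show up numerically (a sketch, not from the original thread; the sample points are arbitrary and stay above double-precision noise):

```python
import numpy as np

for a in (1/12, 1/6, 1/3):
    for x in (1e-2, 1e-4, 1e-6):
        val = (np.log(np.cos(x)) - np.sinh(a * x)) / x**(6 * a)
        print(f"a={a:.4f}  x={x:.0e}  {val:+.6e}")
# a < 1/6: values shrink to 0; a = 1/6: values settle near -1/6;
# a > 1/6: values blow up toward -infinity
```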
|
2,563,659
|
Fourier series of a piecewise continuous (constant) function. Is my solution correct?
|
Given the function
$$
\phi(x)
=\begin{cases}
1 & 0<x\leq 1 \\
2 & 1<x \leq 2 \\
3 & 2<x \leq 3 \\
4 & 3<x \leq 4
\end{cases}
$$
First, I extended
$\phi$
to a periodic function on
$[-4, 4]$
such that
$\phi(x+4) = \phi(x)$
for all
$x \in [-4, 0]$
.
(a) To what values does the Fourier series converge at
$x=0, 1, 4, 7.4, 40$
?
SOLUTION: At
$x=0, 4, 40$
, the Fourier series converges to
$\frac{\phi(0+) + \phi(0-)}{2} = \frac{1+4}{2} = 2.5$
.
At
$x=7.4$
, the Fourier series converges to
$\phi(7.4 \bmod{4}) = \phi(3.4) = 4$
. Finally, at
$x=1$
, the series converges to
$\frac{1+2}{2} = 1.5$
.
(b) Does the Fourier series converge uniformly to
$\phi$
?
SOLUTION: No, since it does not converge to
$\phi$
point-wise.
(c) Find
$a_0$
.
SOLUTION:
$$
a_0 = \frac{1}{4}\int_{-4}^4 \phi(x)\cos(0)\,\mathrm{d}{x} = \frac{1}{4}(20) = 5
$$
|
Quoka
|
https://math.stackexchange.com/questions/2563659/fourier-series-of-a-piecewise-continuous-constant-function-is-my-solution-cor
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
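Parts (a) and (c) can be verified by computing the coefficients numerically and summing a truncated series (a sketch, not from the original thread; the normalization $a_0=\frac1L\int_{-L}^{L}\phi\,dx$ with $L=4$ matches the poster's convention, and 60 terms is an arbitrary cutoff):

```python
import numpy as np
from scipy.integrate import quad

L, N = 4.0, 60

def phi(x):
    t = x % 4.0
    return 4.0 if t == 0.0 else float(np.ceil(t))

def coef(trig, n):
    val, _ = quad(lambda x: phi(x) * trig(n * np.pi * x / L),
                  -L, L, points=list(range(-3, 4)), limit=200)
    return val / L

a = [coef(np.cos, n) for n in range(N)]
b = [coef(np.sin, n) for n in range(N)]

def S(x):
    return a[0]/2 + sum(a[n]*np.cos(n*np.pi*x/L) + b[n]*np.sin(n*np.pi*x/L)
                        for n in range(1, N))

print(a[0])        # ~5, matching part (c)
print(S(0), S(1))  # ~2.5 and ~1.5, matching part (a)
```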
|
5,106,472
|
Expressing $\sum_{n=1}^{\infty}x^{n^a}$ as an infinite product of reciprocals
|
Taking inspiration from the power series
$\sum_{n=0}^{\infty} x^n=\frac1{1-x}$
and the identity
$\prod_{n=0}^{\infty}(1+x^{2^n})=\sum_{n=0}^{\infty}x^n$
, can we write similar infinite product identities for the sum
$\sum_{n=0}^{\infty}x^{n^a}$
, where
$a$
is not necessarily an integer? Could Gamma functions have a role in this?
|
vidyarthi
|
https://math.stackexchange.com/questions/5106472/expressing-sum-n-1-inftyxna-as-an-infinite-product-of-reciprocals
|
{
"answer_id": 5106475,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "This is not an infinite product\n\nIf you replace the summation by the integral\n\n$$\\int x^{n^a}\\,dn=-\\frac n a \\,\\, \\left(-n^a \\log (x)\\right)^{-1/a}\\,\\,\n\n \\Gamma \\left(\\frac{1}{a},-n^a \\log (x)\\right)$$\n\nIf\n\n$0<x<1$\n\n and\n\n$a>0$\n\n$$\\int_0^\\infty x^{n^a}\\,dn=\\Gamma \\left(1+\\frac{1}{a}\\right)\n\n (-\\log (x))^{-1/a} \\tag 1$$\n\nThe summation can be approximated using the simplest form of Euler-Maclaurin summation. Using\n\n$k=-\\log(x)$\n\n, it write\n\n$$S\\sim \\frac 1a k^{-1/a} \\,\\Gamma \\left(\\frac{1}{a},k\\right)+\\sum_{n=0}^7 \\beta_n\\, a^n \\tag 2$$\n\n where the coefficients are\n\n$$\\left(\n\n\\begin{array}{cc}\n\n n & b_n \\\\\n\n 0 & \\frac{e^{-k}}{2}+1 \\\\\n\n 1 & \\frac{-3+3 e-7 e^2+105 e^3}{1260 e^4} \\\\\n\n 2 & \\frac{-441+250 e-210 e^2}{25200 e^4} \\\\\n\n 3 & \\frac{-116+15 e+12 e^2}{4320 e^4} \\\\\n\n 4 & \\frac{7-4 e}{576 e^4} \\\\\n\n 5 & \\frac{7525-240 e}{302400 e^4} \\\\\n\n 6 & -\\frac{3}{1600 e^4} \\\\\n\n 7 & -\\frac{199}{100800 e^4} \\\\\n\n\\end{array}\n\n\\right)$$\n\nJust a few numbers for illustration\n\n$$\\left(\n\n\\begin{array}{ccccc}\n\n a & x & (1) & (2) & \\text{summation}\\\\\n\n \\frac{1}{5} & \\frac{1}{4} & 23.4371 & 24.2465 & 24.2462 \\\\\n\n \\frac{1}{5} & \\frac{1}{2} & 749.987 & 750.679 & 750.679 \\\\\n\n \\frac{1}{5} & \\frac{3}{4} & 60900.0 & 60900.6 & 60900.6 \\\\\n\n& & & & \\\\\n\n \\frac{1}{4} & \\frac{1}{4} & 6.49815 & 7.29143 & 7.29100 \\\\\n\n \\frac{1}{4} & \\frac{1}{2} & 103.970 & 104.650 & 104.649 \\\\\n\n \\frac{1}{4} & \\frac{3}{4} & 3503.97 & 3504.55 & 3504.55 \\\\\n\n& & & & \\\\\n\n \\frac{1}{3} & \\frac{1}{4} & 2.25209 & 3.01943 & 3.01888 \\\\\n\n \\frac{1}{3} & \\frac{1}{2} & 18.0167 & 18.6764 & 18.6759 \\\\\n\n \\frac{1}{3} & \\frac{3}{4} & 252.007 & 252.585 & 252.581 \\\\\n\n& & & & \\\\\n\n \\frac{1}{2} & \\frac{1}{4} & 1.04068 & 1.76061 & 1.75983 \\\\\n\n \\frac{1}{2} & \\frac{1}{2} & 4.16274 & 4.78883 & 4.78822 \\\\\n\n \\frac{1}{2} & \\frac{3}{4} & 24.1660 & 24.7283 & 24.7224 \\\\\n\n& & & & \\\\\n\n 1 & \\frac{1}{4} & 0.72135 & 1.33464 & 1.33333 \\\\\n\n 1 & \\frac{1}{2} & 1.44270 & 2.00065 & 2.00000 \\\\\n\n 1 & \\frac{3}{4} & 3.47606 & 4.01135 & 4.00000 \\\\\n\n& & & & \\\\\n\n 2 & \\frac{1}{4} & 0.75269 & 1.25830 & 1.25391 \\\\\n\n 2 & \\frac{1}{2} & 1.06447 & 1.56556 & 1.56447 \\\\\n\n 2 & \\frac{3}{4} & 1.65230 & 2.17657 & 2.15230 \\\\\n\n& & & & \\\\\n\n 3 & \\frac{1}{4} & 0.80086 & 1.23990 & 1.25002 \\\\\n\n 3 & \\frac{1}{2} & 1.00902 & 1.47282 & 1.50391 \\\\\n\n 3 & \\frac{3}{4} & 1.35271 & 1.86404 & 1.85054 \\\\\n\n& & & & \\\\\n\n\\end{array}\n\n\\right)$$",
"is_accepted": false,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
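For readers who want to reproduce the comparison, a small numerical spot-check is straightforward. This is a sketch assuming Python with the `mpmath` library, summing from $n=0$ as in the question; the helper names are illustrative.

```python
from mpmath import mp, mpf, nsum, gamma, log, inf

mp.dps = 25

def S(x, a):
    # direct summation of sum_{n>=0} x^(n^a)
    return nsum(lambda n: x**(n**a), [0, inf])

def integral_estimate(x, a):
    # estimate (1): Gamma(1 + 1/a) * (-log x)^(-1/a)
    return gamma(1 + 1/a) * (-log(x))**(-1/a)

for a in [mpf(1), mpf(2), mpf(3)]:
    x = mpf(1)/2
    print(a, S(x, a), integral_estimate(x, a))
```

For $a=1$, $x=\tfrac12$ this prints roughly $2.00000$ and $1.44270$, matching the corresponding rows of the table above.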
|
5,107,891
|
Closed form of $\Omega = \int\limits_3^\infty {\frac{{{x^4}}}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx} $
|
I found this problem from a FB page
$$\Omega = \int\limits_3^\infty {\frac{{{x^4}}}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx}$$
Here is what I tried to find the closed form:
$${\text{We have}}:\frac{d}{{dx}}\left( {\frac{x}{{\sqrt {{x^4} + 2{x^2} + 4} }}} \right) = \frac{{4 - {x^4}}}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}$$
$$\begin{gathered}
\Rightarrow \Omega = \int\limits_3^\infty {\frac{{{x^4}}}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx} = - \int\limits_3^\infty {\frac{{\left( {4 - {x^4}} \right) - 4}}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx} \hfill \\
= - \int\limits_3^\infty {\frac{{4 - {x^4}}}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx} + 4\underbrace {\int\limits_3^\infty {\frac{1}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx} }_I \hfill \\
= - \left[ {\frac{x}{{\sqrt {{x^4} + 2{x^2} + 4} }}} \right]_3^\infty + 4I = \frac{3}{{\sqrt {103} }} + 4I \hfill \\
\end{gathered}$$
$$I = \int\limits_3^\infty {\frac{1}{{\sqrt {{{\left( {{x^4} + 2{x^2} + 4} \right)}^3}} }}dx} \overbrace = ^{\sqrt 2 x \to x}\frac{1}{{4\sqrt 2 }}\int\limits_{\frac{3}{{\sqrt 2 }}}^\infty {\frac{1}{{\sqrt {{{\left( {{x^4} + {x^2} + 1} \right)}^3}} }}dx}$$
$I$
looks like an elliptic integral, but I can't find a good substitution to reduce this integral.
Your comments and alternatives are highly appreciated.
|
OnTheWay
|
https://math.stackexchange.com/questions/5107891/closed-form-of-omega-int-limits-3-infty-fracx4-sqrt-left
|
{
"answer_id": 5107904,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "The last integral\n\n$$ I=\\frac{dx}{{\\sqrt {{{\\left( {{x^4} + {x^2} + 1} \\right)}^3}} }}$$\n\n is not so bad if you write\n\n$$x^4+x^2+1=(x^2+a)(x^2+b)\\qquad \\text{where} \\qquad (a,b)=\\frac {1\\pm i\\sqrt 3 }2$$\n\nTake a look\n\nhere\n\n and simplify.\n\nIf no mistake on my side\n\n$$I=\\frac{x \\left( (a+b)x^2+(a^2+b^2)\\right)}{a b (a-b)^2\n\n \\sqrt{\\left(x^2+a\\right) \\left(x^2+b\\right)}}+$$\n\n$$\\frac{i}{a \\sqrt{b} (a-b)^2}\\Bigg((a-b) F\\left(i \\sinh\n\n ^{-1}\\left(\\frac{x}{\\sqrt{a}}\\right)|\\frac{a}{b}\\right)+(a+b) E\\left(i\n\n \\sinh ^{-1}\\left(\\frac{x}{\\sqrt{a}}\\right)|\\frac{a}{b}\\right) \\Bigg)$$",
"is_accepted": false,
"score": 4
}
|
CC BY-SA (Stack Exchange content)
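A numerical sanity check of the reduction $\Omega = \frac{3}{\sqrt{103}} + 4I$ is easy to script; the sketch below assumes Python with the `mpmath` library.

```python
from mpmath import mp, mpf, quad, sqrt, inf

mp.dps = 25
f = lambda x: x**4 / sqrt((x**4 + 2*x**2 + 4)**3)
g = lambda x: 1 / sqrt((x**4 + 2*x**2 + 4)**3)

Omega = quad(f, [3, inf])
I = quad(g, [3, inf])
print(Omega)                    # direct evaluation
print(3/sqrt(mpf(103)) + 4*I)   # should agree with Omega
```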
|
5,107,692
|
prove $\left(\sum_{n=-\infty}^{+\infty} q^{n^2}\right)^2 = 1 + 4\sum_{n=1}^{\infty} \frac{q^n}{1+q^{2n}}$
|
Let
$$
\theta(q) = \sum_{n=-\infty}^{+\infty} q^{n^2}.
$$
So
$$
\theta(q)^2
= \left(\sum_{m\in \mathbb{Z}} q^{m^2}\right)\left(\sum_{n\in \mathbb{Z}} q^{n^2}\right)
= \sum_{N=0}^{\infty} r_2(N)\, q^N,
$$
where
$r_2(N)$
counts ordered pairs
$(m,n)\in\mathbb{Z}^2$
with
$m^2+n^2=N$
, including signs and order, and
$r_2(0)=1$
.
Meaning the identity to prove is equivalent to showing that for all
$N\ge 1$
the coefficient of
$q^N$
on the right-hand side equals
$r_2(N)$
:
$$
1 + 4\sum_{n=1}^{\infty} \frac{q^n}{1+q^{2n}}
= 1 + \sum_{N=1}^{\infty} \Big(4\cdot \text{coeff}_{q^N}\Big[\sum_{n=1}^{\infty} \frac{q^n}{1+q^{2n}}\Big]\Big)\, q^N.
$$
How to move forward with this? Is there a more straightforward way with series or integral manipulations only?
|
Boyce.E
|
https://math.stackexchange.com/questions/5107692/prove-left-sum-n-infty-infty-qn2-right2-1-4-sum-n-1-inf
|
{
"answer_id": 5107844,
"answer_link": null,
"answer_owner": "CosmicOscillator",
"answer_text": "If we express a positive integer in the form\n\n$N=2^{a_0}p_1^{2a_1}\\cdots p_r^{2a_r}q_1^{b_1}\\cdots q_s^{b_s}$\n\n, where\n\n$p_i$\n\n are\n\n$3\\pmod 4$\n\n primes and\n\n$q_i$\n\n and\n\n$1\\pmod 4$\n\n primes, then we have:\n\n$$r_2(N)=\\begin{cases} 0 & \\text{if any $a_i$ is a half-integer}\\\\ 4(b_1+1)(b_2+1)\\cdots (b_r+1) & \\text{if all $a_i$ are integers}\\end{cases}$$\n\n(See\n\nhere\n\n).\n\nWe can write the RHS of the equation as\n\n$$1+4\\sum_{n=1}^\\infty\\frac{q^n}{1+q^{2n}}=1+4\\sum_{n=1}^\\infty\\sum_{i=0}^\\infty (-1)^{i}q^{(2i+1)n},$$\n\nwhich expresses\n\n$\\frac{q^n}{1+q^{2n}}$\n\n as an infinite geometric series. A\n\n$q^N$\n\n term is only reached when\n\n$(2i+1)n=N$\n\n. This means each odd factor of\n\n$N$\n\n gives us a unique contribution to the coefficient when it is reached by\n\n$2i+1$\n\n. If this odd factor is\n\n$1\\pmod 4$\n\n, then\n\n$i$\n\n is even, so the contribution is\n\n$+1$\n\n. Otherwise, it is\n\n$-1$\n\n.\n\nThus, the coefficient of\n\n$q^N$\n\n on the RHS for positive\n\n$N$\n\n is\n\n$$4\\cdot(\\#(\\text{1 mod 4 factors of $N$})-\\#(\\text{3 mod 4 factors of $N$})).$$\n\n(We don't need to worry about\n\n$N=0$\n\n since its clear both sides match with a coefficient of\n\n$1$\n\n.)\n\nAs above, write\n\n$N=2^{a_0}p_1^{2a_1}\\cdots p_r^{2a_r}q_1^{b_1}\\cdots q_s^{b_s}=2^{a_0}PQ$\n\n. The property of an odd factor being\n\n$1\\pmod 4$\n\n or\n\n$3\\pmod 4$\n\n is solely determined by its prime factors in\n\n$P$\n\n, since any factor from\n\n$Q$\n\n is\n\n$1\\pmod 4$\n\n. So we just need to compute the desired difference in\n\n$P$\n\n, then multiply by\n\n$(b_1+1)\\cdots (b_r+1)$\n\n, the number of factors of\n\n$Q$\n\n.\n\nIf at least one of the\n\n$a_i$\n\ns is a half-integer, then WLOG let\n\n$a_1$\n\n be a half-integer. We can pair up any factor\n\n$p_1^{e_1}p_2^{e_2}\\cdots p_r^{e_r}$\n\n with\n\n$p_1^{2a_1-e_1}p_2^{e_2}\\cdots p_r^{e_r}$\n\n. If one of them\n\n$1\\pmod 4$\n\n, then the other must be\n\n$3 \\pmod 4$\n\n since they differ by a factor of\n\n$p_1^{|2a_1-2e_1|}$\n\n, which is\n\n$3 \\pmod 4$\n\n due to\n\n$2a_1-2e_1$\n\n being odd. Thus, the number of\n\n$1\\pmod 4$\n\n and\n\n$3\\pmod 4$\n\n factors in\n\n$P$\n\n are the same, so the coefficient of\n\n$q^N$\n\n is\n\n$0$\n\n.\n\nIf all\n\n$a_i$\n\ns are integers, then consider\n\n$p_1^{2a_1-1}p_2^{2a_2}\\cdots p_r^{2a_r}$\n\n, which must have the same number of\n\n$1\\pmod 4$\n\n and\n\n$3\\pmod 4$\n\n factors from above. Thus, the desired difference is determined by all the factors divisible by\n\n$p_1^{2a_1}$\n\n, which is equivalent to the difference in\n\n$p_2^{2a_2}\\cdots p_r^{2a_r}$\n\n. Repeating this process, it suffices to find the difference in\n\n$p_r^{2a_r}$\n\n, which has\n\n$a_r+1$\n\n factors\n\n$1\\pmod 4$\n\n and\n\n$a_r$\n\n factors\n\n$3\\pmod 4$\n\n, so the difference is\n\n$1$\n\n. The coefficient of\n\n$q^N$\n\n is therefore\n\n$4(b_1+1)\\cdots (b_r+1)$\n\n.\n\nThis precisely matches with the terms\n\n$r_2(N)q^N$\n\n on the LHS, so we are done.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
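Before attempting a proof, the identity can be tested numerically at a sample point; a sketch assuming the `mpmath` library (the choice $q=0.3$ is arbitrary):

```python
from mpmath import mp, mpf, nsum, inf

mp.dps = 25
q = mpf('0.3')
lhs = nsum(lambda n: q**(n**2), [-inf, inf])**2
rhs = 1 + 4*nsum(lambda n: q**n/(1 + q**(2*n)), [1, inf])
print(lhs, rhs)   # the two values should agree
```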
|
4,326,247
|
This improper integral doesn't converge, does it?
|
I am interested in finding out whether my calculations correct.
I have to solve this exercise: it's an improper integral.
Before using integration by parts, I studied the bounds in order to check where the function is undefined, and I found that when evaluating the function at
$x = 3$
the denominator is
$0$
, therefore the function is undefined (i.e. division by zero).
Therefore, I've taken the limit as
$x$
approaches
$3$
. And then I've solved the integral with
$x$
as the upper bound.
As stated above, I've used integration by parts by choosing
$$ u = x $$
(because the derivative of the polynomial, hopefully, is going to become some smaller value), and
$$ dv = \frac{dx}{3 - x} $$
because the antiderivative is simply equal to
$\log$
(natural log, i.e. with base
$e$
). Of course, the argument of
$\log$
must be taken in absolute value.
Now, by integrating by parts and after having evaluated the limit of the antiderivative, I found that the limit doesn't exist, because the limit of the function evaluated in the upper bound is undefined (i.e. the natural log is undefined for
$x = 3$
). Is it true? And if it's true, the integral doesn't converge, right?
My attempt:
$$
\begin{align*}
\int_1^3 \frac{x}{3-x}dx&=\lim_{x \rightarrow 3} \int_1^x x \cdot \frac{1}{3-x}dx\\\\
u=x, v'&=\frac{1}{3-x}\\
u'=dx, v&=\log(3-x)\\\\
\therefore \int_1^3 \frac{x}{3-x}dx&=x\log(3-x)|^x_1 - \int_1^x 1 \cdot \log(3-x)dx\\
&= \lim_{x\rightarrow3} [ x \cdot \log(3-x)|_1^x - \log(3-x)|_1^x ] = \underline{\text{DNE}}
\end{align*}
$$
Therefore, the integral doesn't converge.
|
Gabriel Burzacchini
|
https://math.stackexchange.com/questions/4326247/this-improper-integral-doesnt-converge-does-it
|
{
"answer_id": 4326249,
"answer_link": null,
"answer_owner": "5xum",
"answer_text": "You are correct that the integral does not converge, but you made some mistakes and overcomplicated the solution in general.\n\nThe mistake\n\n: If\n\n$v=\\log(3-x)$\n\n, then\n\n$v'=-\\frac{1}{3-x}$\n\n. You missed a minus sign.\n\nThe overcomplication\n\n:\n\nInstead of using per partes, you can rewrite\n\n$$\\frac{x}{3-x} = \\frac{x-3+3}{3-x} = \\frac{-(3-x)}{3-x} + \\frac{3}{3-x} = \\frac{3}{3-x} - 1$$\n\nand only integrate after this rearrangement. No need for per partes, a simple introduction of a new variable\n\n$u=3-x$\n\n is sufficient and you get (since\n\n$du = -dx$\n\n):\n\n$$\\int_1^3\\frac{x}{3-x}dx = 3\\int_1^3 \\frac{1}{3-x}dx - \\int_1^3 1dx = 3\\int_2^0-\\frac{1}{u}du - 2 = 3\\int_0^2\\frac1udu - 2$$\n\nnow you can either remember that the integral of\n\n$\\frac{1}{u}$\n\n diverges around\n\n$0$\n\n, or you can write it out, since\n\n$$\\int_0^2\\frac1udu=\\lim_{x\\to 0}\\int_x^2\\frac1udu = \\lim_{x\\to 0} (\\ln(2)-\\ln(x))$$\n\n and the limit above does not exist.",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
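The divergence can also be seen numerically: the truncated integral $\int_1^{3-\varepsilon}\frac{x}{3-x}\,dx$ equals $3\ln\frac{2}{\varepsilon} - 2 + \varepsilon$, which blows up as $\varepsilon \to 0$. A sketch assuming the `mpmath` library:

```python
from mpmath import mp, mpf, quad, log

mp.dps = 20
for eps in [mpf(10)**-2, mpf(10)**-4, mpf(10)**-6]:
    val = quad(lambda x: x/(3 - x), [1, 3 - eps])
    print(eps, val, 3*log(2/eps) - 2 + eps)   # numeric value vs closed form
```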
|
5,103,934
|
On $\int_0^1\frac{a+2x}{\sqrt{x(1-x)(a+x)(1+a+x)}}dx=\pi$
|
Let
$a\ge0$
, show that:
$$ \int\limits_0^1\frac{a+2x}{\sqrt{x(1-x)(a+x)(1+a+x)}}dx = \pi \tag{1} $$
For
$a=0$
, the integral is:
$\displaystyle\int_0^1\frac{dx}{\sqrt{1-x^2}}=\frac{\pi}{2}$
Splitting the integral into two:
$$
\begin{align}
I &=\int\limits_0^1\frac{a+2x}{\sqrt{x(1-x)(a+x)(1+a+x)}}dx \\
&=2\int\limits_0^1\sqrt{\frac{a+x}{x(1-x)(1+a+x)}}\,dx \\
&-a\int\limits_0^1\frac{dx}{\sqrt{x(1-x)(a+x)(1+a+x)}} \\[5mm]
I &= 2 I_1 - a I_2
\end{align}
$$
$I_1$
&
$I_2$
are elliptic integrals of third & first kind, respectively.
How to cancel the effect of
$(a)$
to get a constant result of
$(\pi)$
?
The integral seems to hold for all
$a\in\mathbb{C}$
with
$\displaystyle I=\begin{cases} +\pi &\,:\,{\small\Re(a)\ge{\,\,0}} \\ -\pi &\,:\,{\small\Re(a)\le{-2}} \end{cases}$
|
Hazem Orabi
|
https://math.stackexchange.com/questions/5103934/on-int-01-fraca2x-sqrtx1-xax1axdx-pi
|
{
"answer_id": 5103954,
"answer_link": null,
"answer_owner": "Rishit Garg",
"answer_text": "The substitution\n\n$$ t= \\frac{x(a+x)}{1+a} $$\n\nsimplifies the integral to\n\n$$ \\int_0^1 \\frac{dt}{\\sqrt{t(1-t)}}$$\n\nwhich is equal to\n\n$\\pi$\n\n.",
"is_accepted": true,
"score": 9
}
|
CC BY-SA (Stack Exchange content)
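The claimed constancy in $a$ is easy to observe numerically before looking for the substitution; a sketch assuming the `mpmath` library, whose `quad` handles the integrable endpoint singularities:

```python
from mpmath import mp, mpf, quad, sqrt, pi

mp.dps = 20
def I(a):
    f = lambda x: (a + 2*x)/sqrt(x*(1 - x)*(a + x)*(1 + a + x))
    return quad(f, [0, 1])

for a in [mpf(0), mpf('0.5'), mpf(2), mpf(10)]:
    print(a, I(a), pi)   # I(a) should stay at pi
```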
|
1,526,107
|
Find the relative minimum, relative maximum, and point of inflection
|
I'm trying to find the relative minimum, Relative maximum, and point of inflection:
$$ f(x)= \frac{x^3}{x^2-64} $$
Please elaborate on the points of inflection and chart of signs, because I did that and I got two inflection points and the question asks for one. Thanks.
|
Jaime Aguilar
|
https://math.stackexchange.com/questions/1526107/find-the-relative-minimum-relative-maximum-and-point-of-inflection
|
{
"answer_id": 1526187,
"answer_link": null,
"answer_owner": "mzp",
"answer_text": "First calculate\n\n\\begin{align}\n\nf'(x) = \\; \\frac{x²(x²-192)}{(x²-64)²} , \\;\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;\\;\n\nf''(x) = \\;\\frac{128x(x²+192)}{(x²-64)³}.\n\n\\end{align}\n\nAlso, observe that the function is discontinuous at $-8$ and $8$.\n\nPoints of Inflection\n\nDefinition:\n\n A point of inflection is a point at which the curve is continuous and changes from being concave $f''(x)<0$ to convex $f''(x)>0$ or vice versa.\n\nNext, notice that\n\n\\begin{align}\n\nf''(x)>0,& \\;\\; \\text{for all }\\; x \\in (-\\infty,-8) \\cap (0,8) \\\\[2ex]\n\nf''(x)<0,& \\;\\; \\text{for all }\\; x \\in (-8,0) \\cap (8,\\infty)\n\n\\end{align}\n\nSo, there is one inflection point at $0$.\n\nMaxima and Minima\n\nA similar analysis for the first derivative yields\n\n\\begin{align}\n\nf'(x)>0,& \\;\\; \\text{for all }\\; x \\in (-\\infty,192^{\\frac{1}{2}}) \\cap (192^{\\frac{1}{2}},\\infty) \\\\[2ex]\n\nf'(x)<0,& \\;\\; \\text{for all }\\; x \\in (-192^{\\frac{1}{2}},-8) \\cap (-8,0) \\cap (0,8)\\cap (8,192^{\\frac{1}{2}})\n\n\\end{align}\n\nNotice that the function converges to $-\\infty$ from the left and $\\infty$ from the right as it approaches $-8$ or $8$. Moreover, it has one local (relative) maximum at $-192^{\\frac{1}{2}}$ and one local minimum at $192^{\\frac{1}{2}}$.",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
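The sign pattern of $f''$ quoted above can be confirmed at sample points; a two-line sketch assuming the `mpmath` library:

```python
from mpmath import mpf

fpp = lambda x: 128*x*(x**2 + 192)/(x**2 - 64)**3
for x in [-10, -4, 4, 10]:
    print(x, fpp(mpf(x)))   # signs come out -, +, -, +
```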
|
5,107,725
|
How can I derive a smooth, non-singular force formula from a uniformly dense rod in $\mathbb{R}^{1}$?
|
I am working in
$\mathbb{R}^{1}$
. Somewhere on this line, there is a line segment of uniform density; this line segment is bounded by the interval
$\lbrack a,b\rbrack$
. Additionally, somewhere on this line there is a single point
$d$
, which may or may not coincide with the line segment of uniform density.
I attempted to find an equation, which for any given real number
$d$
, will find the collective "pull" of the line segment
$\lbrack a,b\rbrack$
on
$d$
, as described by a force law that resembles the inverse square law in spirit, but avoids its singularity at
$x = d$
.
My theory is as follows:
The endpoints will have the strongest "pull" vectors (highest magnitude).
At the midpoint of
$\lbrack a,b\rbrack$
there will be a zero vector, as the opposing forces will cancel.
Interior points not coincident with the center will have an intermediate value, which is dependent on the collective effect of the rod (which varies over the distance it spans).
The magnitude will taper off as the value
$d$
goes from an endpoint of
$\lbrack a,b\rbrack$
in the direction away from its midpoint.
I derived the following equation, which I suspect can be simplified into an integral. Let
$n$
be the number of subintervals used to approximate the rod, and let
$o$
be the offset from
$d$
to the center of each subinterval:
$$
\begin{aligned}
o
&=
\begin{cases}
d-\bigl(a+c\cdot\frac{b-a}{n}\bigr)
& \mbox{ if }d-\bigl(a+c\cdot\frac{b-a}{n}\bigr)\neq0 \\
d-\Bigl(a+\bigl(c+\frac{1}{2}\bigr)\cdot\frac{b-a}{n}\Bigr)
& \mbox{ otherwise}
\end{cases} \\
F(d)
&=
\lim_{n\to\infty}
\sum_{c=0}^{n}
\frac{o}{(|o| + \epsilon)^2}
\cdot \frac{b-a}{n}
\end{aligned}
$$
This works numerically, but the presence of
$\epsilon$
is unsatisfying. I would like to replace this with a kernel that:
Is based on the actual interval structure of the rod (not just its endpoints),
Cancels at the midpoint of the rod,
Decays slowly outside the rod (e.g., like
$1/\delta$
or slower),
And avoids any singularities or artificial smoothing parameters like
$\epsilon$
.
Below is a numerical plot of the force function using a discretized rod and the kernel
$\frac{\delta}{(|\delta| + \epsilon)^2}$
:
Is there a way to derive a closed-form or series-based expression for
$F(d)$
that avoids the need for
$\epsilon$
, while preserving the physical behaviour described above?
I would very much like the resulting formula to be explicitly grounded in the mass and distance of each infinitesimal sub-interval that comprises the rod, not just a smoothed or symbolic approximation.
My intuition also seems to indicate that the derivative would be zero at the midpoint. It doesn't make sense to me that its absolute value would be non-differentiable at that point. I could be mistaken about this though, so I say it with hesitation.
|
Jasper
|
https://math.stackexchange.com/questions/5107725/how-can-i-derive-a-smooth-non-singular-force-formula-from-a-uniformly-dense-rod
|
{
"answer_id": 5107787,
"answer_link": null,
"answer_owner": "Christophe Boilley",
"answer_text": "$$F(d)=\\frac{2d-a-b}{2d^2-ad-bd+a^2+b^2}$$",
"is_accepted": true,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
|
5,107,563
|
Inequality involving a differentiable function and its derivative
|
I am working on the following problem and would appreciate some help or a hint.
Problem:
Let
$f$
be differentiable and
$f'$
be continuous,
$\alpha \geq 0$
. Assume that for all
$a<b$
, we have
$f'(a) \leq \frac{\alpha}{2}(a-b) \quad \text{or} \quad f'(b) \geq \frac{\alpha}{2}(b-a).$
Prove that: for all
$x,y \in \mathbb{R}$
and
$z \in [x,y]$
,
$f(z) \leq \max\{f(x), f(y)\} + \frac{\alpha}{2}(z-x)(z-y).$
My thoughts:
I tried to consider the function
$g(t) = f(t) - \frac{\alpha}{2}(t-x)(t-y)$
, but I am not sure how to use the "OR" condition for the derivatives.
Thank you in advance.
|
Thinh Pham Quoc
|
https://math.stackexchange.com/questions/5107563/inequality-involving-a-differentiable-function-and-its-derivative
|
{
"answer_id": 5107778,
"answer_link": null,
"answer_owner": "Lukas",
"answer_text": "Hint: The problem claims that your function\n\n$g(t):= f(t) - \\frac{\\alpha}{2}(t-x)(t-y)$\n\n attains its maximum on the boundary of the interval\n\n$[x,y]$\n\n (note that\n\n$g(x)= f(x)$\n\n and\n\n$g(y)= f(y)$\n\n).\n\nAssume by contradiction that this is not the case, i.e. that there is an interior point\n\n$t_0 \\in (x,y)$\n\n at which\n\n$g$\n\n attains its maximum. Then use what you know about the first derivative around an interior maximum point to construct a contradiction.\n\nYou might also want to consider the case\n\n$\\alpha=0$\n\n as a model first.\n\nThen you know that for all\n\n$x<y$\n\n, we have\n\n$f'(x) \\le 0$\n\n or\n\n$f'(y) \\ge 0$\n\n and the claim is that\n\n$f(t) \\le \\max\\{f(x), f(y)\\}$\n\n for all\n\n$t \\in [x,y]$\n\n. Now if\n\n$f(t_0) > \\max\\{f(x), f(y)\\}$\n\n for some\n\n$t_0 \\in (x,y)$\n\n, there will be a point in\n\n$(x, t_0)$\n\n with positive derivative and a point in\n\n$(t_0, y)$\n\n with negative derivative. This will give you the desired contradiction in this case.\n\nFor completeness, the solution for\n\n$\\alpha>0$\n\n:\n\n Since\n\n$t_0$\n\n is an interior maximum point and\n\n$g'$\n\n is continuous, we have\n\n$g'(t)\\ge 0$\n\n in\n\n$(t_0-\\epsilon, t_0)$\n\n and\n\n$g'(t)\\le 0$\n\n in\n\n$(t_0, t_0 +\\epsilon)$\n\n for some positive\n\n$\\epsilon>0$\n\n. However, if we take\n\n$t_1 \\in (t_0 - \\epsilon, t_0)$\n\n and\n\n$t_2 \\in (t_0, t_0 + \\epsilon)$\n\n we have by the condition given in the problem that\n\n$f'(t_1) \\le \\frac{\\alpha}{2} (t_1 - y)$\n\n or\n\n$f'(t_2) \\ge \\frac{\\alpha}{2} (t-x)$\n\n. If the first is the case, then\n\n$g'(t_1) \\le - \\frac{\\alpha}{2}(t_1-x) < 0$\n\n, which is not true. If the second is the case, then\n\n$g'(t_2) \\ge - \\frac{\\alpha}{2}(t_2-y) > 0$\n\n, which is not true. This is the desired contradiction.",
"is_accepted": false,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
4,857,242
|
Using Laplace Transform to solve non-linear ODE for pendulum motion and showing why it cannot be solved
|
After solving the linear version of the ODE for the Pendulum equation using Laplace transform, I tried to use LT to solve the non-linear ODE for pendulum motion.
$\frac{d^{2}\theta}{dt^{2}}+\frac{g}{l} \sin\theta=0$
However, I am not very familiar with the LT, so I do not really understand how to convert the non-linear term into the Laplace domain. Should I just use the sine identity, even though it doesn't look right? Once that is answered, I would also like to know why exactly it cannot be solved using the LT.
Thanks in advance
|
Alex
|
https://math.stackexchange.com/questions/4857242/using-laplace-transform-to-solve-non-linear-ode-for-pendulum-motion-and-showing
|
{
"answer_id": 4857367,
"answer_link": null,
"answer_owner": "whpowell96",
"answer_text": "i do not really understand how to convert the non-linear term into the Laplace domain.\n\nNobody else really does either. Computing the Laplace transform of even basic nonlinearities requires approximation via infinite series, see\n\nhere\n\n. The only nonlinear way to combine functions that plays nice with the Laplace transform is convolution to my knowledge. The relation is given by\n\n$$\n\n\\mathcal{L}[u*v](s) = \\mathcal{L}[u](s)\\cdot\\mathcal{L}[v](s),\n\n$$\n\nwhere\n\n$$\n\n[u*v](t) = \\int_{-\\infty}^\\infty u(s)v(t-s)~\\mathrm{d}s.\n\n$$",
"is_accepted": true,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
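The convolution property mentioned in the answer can be verified numerically for a simple pair of functions; a sketch assuming the `mpmath` library, using the one-sided convolution appropriate for the Laplace transform:

```python
from mpmath import mp, mpf, quad, exp, inf

mp.dps = 10
u = lambda t: exp(-t)
v = lambda t: exp(-2*t)
conv = lambda t: quad(lambda s: u(s)*v(t - s), [0, t])   # one-sided convolution

s0 = mpf('1.5')
laplace = lambda f: quad(lambda t: exp(-s0*t)*f(t), [0, inf])
print(laplace(conv), laplace(u)*laplace(v))   # both ~ 1/((s0+1)(s0+2))
```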
|
4,663,395
|
Does division by 0 really have anything to do with $\lim_{x \to 0} \frac{1}{x}$?
|
To my very limited knowledge, division by 0 is undefined precisely because it breaks the field axioms. No dividing by 0 if you want a field. However there do exist structures that are not fields which allow for division by 0. Like the Riemann sphere. Or wheel algebras.
However a very common argument/"proof" I often hear for why division by 0 is undefined, is that
$\lim_{x \to 0+} \frac{1}{x} = \infty$
whereas
$\lim_{x \to 0-} \frac{1}{x} = -\infty$
. And therefore since the limits go the opposite direction, this means we can't say
$\frac{1}{0} = \infty$
.
But I do not understand this line of reasoning. If the left and right limits differ, that would just mean the limit from both sides isn't defined. It wouldn't tell me anything about the function's value
at
0, only
near
0? And I don't see why a function's behaviour near 0 would have anything to do with it's value at 0.
Is there a missing step? Or is it just a fallacy?
Edit: A few people just agreeing with me haha, so let me try put it another way. Is there anything, anything at all, that we can conclude about division by 0, specifically from the differing left/right limits of
$\frac{1}{x}$
?
|
confusedscreaming
|
https://math.stackexchange.com/questions/4663395/does-division-by-0-really-have-anything-to-do-with-lim-x-to-0-frac1x
|
{
"answer_id": 5106901,
"answer_link": null,
"answer_owner": "jjagmath",
"answer_text": "Indeed, the reason for leaving\n\n$\\frac{1}{0}$\n\n undefined has nothing to do with continuity of the function\n\n$\\frac{1}{x}$\n\n. Otherwise we would need to leave\n\n$\\lfloor 1 \\rfloor$\n\n undefined, since we can't define the function\n\n$\\lfloor x \\rfloor$\n\n as a continuous function at\n\n$x=1$\n\n.",
"is_accepted": false,
"score": 5
}
|
CC BY-SA (Stack Exchange content)
|
5,107,710
|
Number of possibilities in the known universe?
|
this may be a rather simple one. I am not a math guy by any means, but you folks will have a good time with this one I think. I will posit what I think is a layman's answer and then you all have at it!
I got this question in my head when researching large, finite numbers. I was looking at Graham's Number at the time. This number seems to be a lot bigger than all the possibilities in the "observable" universe because expressing it in digits using only a Planck volume for each digit apparently takes more than all the Planck volumes in the observable universe. A lot more.
So the title is the question, and I think the answer (knowing nothing about sub-atomic particle physics) would be something like: the total of all sub-atomic (or just 'all') particles in the known universe times the number of Planck volumes in the observable universe, with that total then raised to the power of the number of Planck volumes in the observable universe. I think this would put every particle in every combination of every Planck volume. I know this is not exact (and I may be totally off), but you folks can fool with it how you like and tell us the results. And let me know if this is an expressible number, and if so, what that would (roughly) be, taking into account some assumptions about the size of the universe and the mass in the universe. I know these things cannot be fully known at this time, but assigning some values, even if they are wrong, would be kind of fun.
The reason I thought about this in the first place is I believe this is the largest number that matters to our existence. Any higher number seems like kind of a waste of time relative to our existence. And it looks like Graham's and others' large numbers like it are much larger than this number. Let me know! Thanks!
Got some info here on the site about the relative size of Graham's number to the universe, but Graham's Number appears to be incalculable in size.
universe sized cube and visualising really large numbers
|
Themblues
|
https://math.stackexchange.com/questions/5107710/number-of-possibilities-in-the-known-universe
|
{
"answer_id": null,
"answer_link": null,
"answer_owner": null,
"answer_text": "(Нет ответов)",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|
5,098,183
|
Can this be done without a calculator? Show that $\int_0^1(1-x^2)(1-x^4)(1-x^6)\dots dx<\frac{1}{\sqrt3}$
|
Let
$I=\displaystyle\int_0^1\prod_{k=1}^\infty\left(1-x^{2k}\right)dx$
.
Without a calculator, show that
$I<\frac{1}{\sqrt3}$
.
According to
Wolfram
,
$I\approx 0.999969\left(\frac{1}{\sqrt3}\right)$
.
Here is a graph of the integrand in blue, and
$y=\frac{1}{\sqrt3}$
in red.
Context
I was playing with integrals of power series and stumbled upon this curious numerical result.
My attempt
According to Euler's
Pentagonal Number Theorem
,
$$\prod_{k=1}^\infty\left(1-x^{2k}\right)=\sum_{k=-\infty}^\infty(-1)^kx^{k(3k+1)}$$
So
$\begin{align}
I&=\int_0^1\prod_{k=1}^\infty\left(1-x^{2k}\right)dx\\
&=\int_0^1\sum_{k=-\infty}^\infty(-1)^kx^{k(3k+1)}dx\\
&=\sum_{k=-\infty}^\infty\frac{(-1)^k}{3k^2+k+1}\\
&=1+\sum_{k=1}^\infty\left(\frac{(-1)^k}{3k^2+k+1}+\frac{(-1)^{-k}}{3k^2-k+1}\right)\\
&=1+\sum_{k=1}^\infty(-1)^k\frac{6k^2+2}{9k^4+5k^2+1}
\end{align}$
Then what? I'm not sure this is doable without a calculator.
Edit
From the OEIS:
A258408
is
$I=\displaystyle\int_0^1\prod_{k=1}^\infty\left(1-x^{2k}\right)dx=\frac{4\sqrt{\frac{3}{11}}\pi\sinh \left(\frac{\sqrt{11}\pi}{6}\right)}{2\cosh\left(\frac{\sqrt{11}\pi}{3}\right)-1}=0.577332\dots$
A258232
is
$\displaystyle\int_0^1\prod_{k=1}^\infty\left(1-x^k\right)dx=\frac{8\sqrt{\frac{3}{23}}\pi\sinh \left(\frac{\sqrt{23}\pi}{6}\right)}{2\cosh\left(\frac{\sqrt{23}\pi}{3}\right)-1}=0.368412\dots$
|
Dan
|
https://math.stackexchange.com/questions/5098183/can-this-be-done-without-a-calculator-show-that-int-011-x21-x41-x6
|
{
"answer_id": 5098188,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "This is just limited to the computation of the integral.\n\nUse\n\n$$3k^2+k+1=3(k-a)(k-b) \\qquad \\text{with} \\quad (a,b)=-\\frac{1}{6} \\left(1\\pm i \\sqrt{11}\\right)$$\n\n$$3k^2-k+1=3(k-c)(k-d) \\qquad \\text{with} \\quad (c,d)=+\\frac{1}{6} \\left(1\\pm i \\sqrt{11}\\right)$$\n\nThen partial fraction decomposition. Sum up to\n\n$n$\n\n to face generalized harmonic numbers and use their asymptotics to obtain\n\n$$S=\\sum_{k=1}^\\infty(-1)^k\\frac{6k^2+2}{9k^4+5k^2+1}$$\n\n$$S=-1-\\frac{i \\pi \\left(\\tan \\left(\\frac{1}{12} \\left(5+i \\sqrt{11}\\right) \\pi \\right)+\\cot\n\n \\left(\\frac{1}{12} \\left(5+i \\sqrt{11}\\right) \\pi \\right)-2 \\csc \\left(\\frac{1}{6}\n\n \\left(\\pi +i \\sqrt{11} \\pi \\right)\\right)\\right)}{2 \\sqrt{11}}$$\n\nExpand the trigonometric functions to get\n\n$$\\small S=-1+4 \\pi \\sqrt{\\frac{3}{11}}\\,\\frac{\\sinh \\left(\\frac{\\sqrt{11} \\pi }{6}\\right)}{2 \\cosh \\left(\\frac{\\sqrt{11} \\pi }{3}\\right)-1}$$\n\n$$I=\\displaystyle\\int_0^1\\prod_{k=1}^\\infty\\left(1-x^{2k}\\right)\\,dx=4 \\pi \\sqrt{\\frac{3}{11}}\\,\\frac{\\sinh \\left(\\frac{\\sqrt{11} \\pi }{6}\\right)}{2 \\cosh \\left(\\frac{\\sqrt{11} \\pi }{3}\\right)-1}$$",
"is_accepted": false,
"score": 15
}
|
CC BY-SA (Stack Exchange content)
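The bilateral series derived in the attempt, together with the OEIS closed form, can be cross-checked numerically; a sketch assuming the `mpmath` library:

```python
from mpmath import mp, mpf, nsum, sqrt, sinh, cosh, pi, inf

mp.dps = 25
I = nsum(lambda k: (-1)**int(k)/(3*k**2 + k + 1), [-inf, inf])
closed = 4*sqrt(mpf(3)/11)*pi*sinh(sqrt(11)*pi/6)/(2*cosh(sqrt(11)*pi/3) - 1)
print(I, closed, 1/sqrt(mpf(3)))   # I and closed agree; 1/sqrt(3) is slightly larger
```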
|
5,107,572
|
Solving $x^{x^3}=729$
|
I want to solve
$$x^{x^3}=729$$
so I tried like below:
$$\log_9{x^{x^3}}=\log_9{729}\\x^3\log_9x=3\\x^3=\frac{3}{\log_9x}\\x^3=3\log_x{9}\\\cdots$$
but I got stumped. Then I tried this:
$$x^{x^3}=729\\(x^{x^3}=729)^3\\x^{3x^3}=(9^3)^3$$
It can be rewritten as:
$$(x^3)^{x^3}=9^9\\x^3=9\\x=\sqrt[3]{9} $$
Then I checked it with Desmos:
It seems that I was correct. Anyway, my second try was somewhat heuristic, so I'm asking for an analytic solution to this type of equation, if one exists, or for another point of view or idea.
|
Khosrotash
|
https://math.stackexchange.com/questions/5107572/solving-xx3-729
|
{
"answer_id": 5107582,
"answer_link": null,
"answer_owner": "Claude Leibovici",
"answer_text": "More amusing would be\n\n$$x^{x^3}=k$$\n\n Take logarithms\n\n$$x^3\\log(x)=\\log(k) \\implies x^3\\log(x^3)=3\\log(k)$$\n\n which gives\n\n$$x^3=\\frac{3 \\log (k)}{W(3 \\log (k))}$$\n\nEdit\n\nEven more general, using the same gymnastic,\n\n$$\\left(x^a\\right)^{x^b}=k \\quad \\implies \\quad x=\\Bigg(\\frac {\\frac b a \\log(k) } {W\\left(\\frac{b }{a}\\log (k)\\right) }\\Bigg)^{\\frac 1b}$$\n\n where\n\n$(a,b)$\n\n could be any real numbers",
"is_accepted": true,
"score": 6
}
|
CC BY-SA (Stack Exchange content)
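The accepted answer's closed form is easy to evaluate; a sketch assuming the `mpmath` library, applied to the original $k=729$:

```python
from mpmath import mp, mpf, lambertw, log

mp.dps = 25
k = mpf(729)
t = 3*log(k)
x = (t/lambertw(t))**(mpf(1)/3)
print(x, x**(x**3))   # ~ 2.0801 (= 9^(1/3)) and 729
```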
|
5,107,573
|
Study of a series $ \sum_{n=1}^{\infty} \log \left[1+\left(n^{x}-n^{x} \cos \frac{1}{n^{2}}\right)\right] $
|
Study, as the real parameter
$x$
varies, the numerical series
$$
\sum_{n=1}^{\infty} \log \left[1+\left(n^{x}-n^{x} \cos \frac{1}{n^{2}}\right)\right]
$$
My attempt:
The same quantity
$n^x$
in the difference suggests that I can factor it out.
$$
a_n=\log\left[ 1 + n^x \left(1 - \cos\frac{1}{n^2}\right)\right]
=\log\left[ 1 + \frac{\left(1 - \cos\frac{1}{n^2}\right)}{\frac1{n^x}}\right].
$$
Now if
$x\leq 0$
, for
$n \to \infty$
,
$1/n^x \xrightarrow{n\to \infty} +\infty$
and since
$1 - \cos t \sim_0 t^2/2$
for
$t \to 0$
, we obtain
$$
1 - \cos \frac{1}{n^2} \sim_\infty \frac{1}{2 n^4}.
$$
thus
$$
a_n=\log \left(1+\frac{\frac{1}{2 n^4}}{\frac{1}{n^x}}\right)
=\log \left(1+\frac{n^x}{2n^4}\right)
=\log \left(1+\frac{1}{2}n^{x-4}\right)
$$
Since by assumption
$x\leq 0$
then
$x-4\leq -4<0$
, therefore the term
$n^{x-4}\to 0$
when
$n\to\infty$
.
Hence, since for
$u\to 0$
we have
$\log(1+u) \sim_0 u$
,
$$
\log \left(1+\frac{1}{2}n^{x-4}\right) \sim_\infty \frac{1}{2} n^{x-4}.
$$
Since $p = x-4 \leq -4 < -1$, the comparison series $\sum n^{x-4}$ converges, and therefore the original series converges for $x \leq 0$.
If
$x>0$
then
$$
a_n=\log\left[ 1 + n^x \left(1 - \cos\frac{1}{n^2}\right)\right]
$$
gives an indeterminate form, for
$n\to \infty$
, of type
$\infty\cdot 0$
: what to do?
|
Sebastiano
|
https://math.stackexchange.com/questions/5107573/study-of-a-series-sum-n-1-infty-log-left1-leftnx-nx-cos-fra
|
{
"answer_id": 5107576,
"answer_link": null,
"answer_owner": "Sine of the Time",
"answer_text": "You have\n\n\\begin{align}\n\na_n:=\\log\\left( 1 + n^x \\left(1 - \\cos\\frac{1}{n^2}\\right)\\right) \\sim n^x\\cdot \\frac{1}{2n^4}=\\frac12 n^{x-4};\n\n\\end{align}\n\nthus\n\n$\\sum a_n$\n\n behaves like\n\n$\\sum n^{x-4}$\n\n, which converges iff\n\n$4-x>1$\n\n.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
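The equivalence $a_n \sim \frac12 n^{x-4}$ in the answer can be seen numerically; a sketch assuming the `mpmath` library (high precision is needed because $1-\cos\frac{1}{n^2}$ is tiny):

```python
from mpmath import mp, mpf, log, cos

mp.dps = 40
a = lambda n, x: log(1 + n**x*(1 - cos(1/n**2)))
for x in [mpf(-1), mpf(0), mpf(2), mpf('3.5')]:
    n = mpf(10)**4
    print(x, a(n, x)/(n**(x - 4)/2))   # ratio close to 1
```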
|
5,107,602
|
Why the general solution of $2U_x-U_y=0$ is equivalent to a series and can be simplified to $f(x+2y)$?
|
The following is an example from my lecture notes:
$$2U_x-U_y=0$$
To solve this PDE, we use
$U=e^{\alpha x+\beta y}$
. Substituting it in the equation gives:
$$(2\alpha-\beta)e^{\alpha x+\beta y}=0 \quad\Rightarrow\quad \beta=2\alpha$$
So,
$$U(x,y)= e^{\alpha x+2\alpha y}= e^{\alpha(x+2y)}\;;\ \forall\alpha$$
Hence, the general solution of this equation is the linear combination of these solutions set:
$$U(x,y)=\sum_{i=0}^{\infty}A_ie^{i(x+2y)}=f(x+2y)$$
where,
$f$
is an arbitrary function.
I have trouble understanding how, from
$e^{\alpha(x+2y)}\;;\ \forall\alpha$
we conclude
$U(x,y)= \sum_{i=0}^{\infty}A_ie^{i(x+2y)}$
and why it can be generalized to
$f(x+2y)$
?
For instance, comparing to ODE equations, I know that
$y_1=e^{x}$
and
$y_2=e^{2x}$
are solutions to
$y''-3y'+2y=0$
, and the general solution is linear combination of these two answers which is
$y=c_1e^x+c_2e^{2x}$
.
However, for the above PDE equation, I don't understand how the linear combination of the solutions is the given series. I mean, we got
$e^{\alpha(x+2y)}$
for all values
$\alpha$
as the solution, but the series incorporates only the non-negative integer values of
$\alpha$
(renamed to
$i$
later). Additionally, I don't see how the series can be simplified to
$f(x+2y)$
(Sure, it satisfies
$2U_x-U_y=0$
, but how is it derived from the above-mentioned series?).
|
User
|
https://math.stackexchange.com/questions/5107602/why-the-general-solution-of-2u-x-u-y-0-is-equivalent-to-a-series-and-can-be-si
|
{
"answer_id": 5107611,
"answer_link": null,
"answer_owner": "JJacquelin",
"answer_text": "$$2U_x-U_y=0$$\n\nYou wrote:\n\n$\\color{red}{\\text{To solve this PDE, we use } U=e^{\\alpha x+\\beta y}}$\n\n.\n\nWhy using the exponential function? This seems to come out of nowhere.\n\nInstead of why not writing:\n\nTo solve this PDE, we use\n\n$U=f\\left(\\alpha x+\\beta y\\right)$\n\n.\n\nThis is coming out of nowhere as well but no more than above.\n\n$$U_x=\\alpha f'(X)\\quad ; \\quad U_y=\\beta f'(X)\\quad \\text{with} \\quad X=\\left(\\alpha x+\\beta y\\right)$$\n\n$$2U_x-U_y=2\\alpha f'(X)-\\beta f'(X)=(2\\alpha-\\beta)f'(X)=0$$\n\nFor example this is satisfied with\n\n$\\alpha=1$\n\n and\n\n$\\beta=2$\n\n leading to\n\n$X=x+2y$\n\n$$U=f\\left(x+ 2y\\right)\\quad \\text{is the general solution.}$$\n\nIsn't it simpler than a sum of exponential?\n\nNote that using the Method of Characteristics avoid coming of nowhere and is much more general to solve more complicated linear PDE.",
"is_accepted": false,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
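That any differentiable $f(x+2y)$ solves the PDE can be checked directly without series; a numerical sketch assuming the `mpmath` library, with an arbitrarily chosen $f$:

```python
from mpmath import mp, mpf, diff, sin, exp

mp.dps = 20
U = lambda x, y: sin(x + 2*y) + exp(-(x + 2*y)**2)   # some f(x + 2y)
x0, y0 = mpf('0.7'), mpf('-0.3')
Ux = diff(lambda x: U(x, y0), x0)
Uy = diff(lambda y: U(x0, y), y0)
print(2*Ux - Uy)   # ~ 0
```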
|
1,405,782
|
integral inequality involving $\sup|f'|$
|
Let $f:[0,1]\rightarrow \mathbb R$ be continuous function differentiable on $(0,1)$ with property that there exists $a \in (0,1]$ such that
$$\int_{0}^a f(x)dx=0$$
Prove that
$$\left|\int_{0}^1 f(x)dx \right|\le \dfrac {1-a} 2 \cdot \sup_{x\in (0,1)} |f'(x)|$$
Find the case of equality.
We have one solution using the Mean Value Theorem. Is there another one?
|
Booldy
|
https://math.stackexchange.com/questions/1405782/integral-inequality-involving-supf
|
{
"answer_id": 5107613,
"answer_link": null,
"answer_owner": "T﹏T",
"answer_text": "We can also use Taylor's Mean Value Theorem to prove this\n\nLet\n\n$F(x)=\\int_0^x f(t)\\,dt$\n\n. Then\n\n$F(0)=0$\n\n,\n\n$F'(x)=f(x)$\n\n and\n\n$F''(x)=f'(x)$\n\n.\n\nBy assumption,\n\n$\\int_0^a f(x)\\,dx=0$\n\n, hence\n\n$F(a)=0$\n\n also\n\n$\\sup_{x\\in (0,1)} |f'(x)|=M$\n\n .\n\nApplying Taylor’s theorem with the Lagrange remainder around\n\n$x=a$\n\n, there exist\n\n$\\theta,\\epsilon\\in(0,1)$\n\n such that\n\n$F(1)=F(a)+(1-a)F'(a)+\\frac{(1-a)^2}{2}F''(\\theta)$\n\nand\n\n$F(0)=F(a)-aF'(a)+\\frac{a^2}{2}F''(\\epsilon)$\n\n.\n\nSince\n\n$F(a)=0$\n\n and\n\n$F(0)=0$\n\n, we get\n\n$0=-aF'(a)+\\frac{a^2}{2}F''(\\epsilon)$\n\n,\n\nhence\n\n$F'(a)=\\frac{a}{2}F''(\\epsilon)$\n\n.\n\nSubstituting this into the expression for\n\n$F(1)$\n\n gives\n\n$F(1)=\\frac{a(1-a)}{2}F''(\\epsilon)+\\frac{(1-a)^2}{2}F''(\\theta)$\n\n.\n\nTaking absolute values and using the triangle inequality,\n\n$|F(1)|\\le \\frac{a(1-a)}{2}|F''(\\epsilon)|+\\frac{(1-a)^2}{2}|F''(\\theta)|$\n\n$ \\leq \\frac{a(1-a)}{2}M+\\frac{(1-a)^2}{2}M = \\frac{1-a}{2}M$\n\nFinally, noting that\n\n$F''(x)=f'(x)$\n\n, we obtain\n\n$\\left|\\int_0^1 f(x)\\,dx\\right|\\le \\frac{1-a}{2}\\sup_{0<x<1}|f'(x)|$\n\n,\n\nwhich is the desired inequality.\n\nalso equality is achieved when\n\n$f''(x)=F'''(x)=0$\n\n i.e. for linear functions.\n\n$\\rlap \\smile {\\dot{}\\dot{}}$",
"is_accepted": false,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
|
1,107,759
|
Understanding the arc length integral formula
|
I believe the proof in my book is slightly more informal than the proof that uses the Mean Value Theorem. Could someone tell me what exactly the difference is, and if there are any mistakes in the proof below? Thanks.
Proof of the arc length integral formula
Divide your interval $[a,b]$ into $n$ pieces of width $\Delta x$, then zoom into the subinterval $[x_{i-1},x_i]$. The arc length in this interval is approximately $$\sqrt{\Delta x^2+\Delta y^2}=\sqrt{1+\left(\frac{\Delta y}{\Delta x}\right)^2}\Delta x$$
As $\Delta x$ goes to zero, $\frac{\Delta y}{\Delta x}$ is equal to the slope at $x=x_{i-1}$, that is $f'(x_{i-1})$.
The Riemann sum becomes $$\sum_{i=1}^n \sqrt{1+[f'(x_{i-1})]^2}\Delta x$$
As $n\to\infty$, the arc length is $$\int_a^b \sqrt{1+[f'(x)]^2}\,dx$$
|
integral-guest
|
https://math.stackexchange.com/questions/1107759/understanding-the-arc-length-integral-formula
|
{
"answer_id": 1108336,
"answer_link": null,
"answer_owner": "slo",
"answer_text": "Elaboration of my comment:\n\nYou want to convert an infinite sum to an integral. As you probably understand that can be interpreted as summing infinitely many rectangles and deciding what the area converges to. But this is not enough! The rectangles have to be infinitely small as well. Take a look at the picture. There are infinitely rectangles, but since they are not infinitely small, the area does not converge to the area under the curve.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
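The passage from the Riemann sum to the integral can be watched numerically for a concrete curve; a sketch assuming the `mpmath` library, with $f(x)=x^2$ on $[0,1]$ and left endpoints $x_{i-1}$ as in the proof:

```python
from mpmath import mp, mpf, sqrt, quad

mp.dps = 20
f_prime = lambda x: 2*x   # f(x) = x^2
exact = quad(lambda x: sqrt(1 + f_prime(x)**2), [0, 1])
for n in [10, 100, 1000]:
    h = mpf(1)/n
    riemann = sum(sqrt(1 + f_prime((i - 1)*h)**2)*h for i in range(1, n + 1))
    print(n, riemann, exact)   # the sums approach the integral
```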
|
5,102,179
|
find $\lim_{x\to 0} \frac{\cos(mx)-\cos(nx)}{x^{2}}=\frac{n^{2}-m^{2}}{2} $
|
I want to find the solution to this limit.
Here they give a clue:
https://artofproblemsolving.com/community/c7h500368p2811532
something like this:
$$ \color{green}{y=2x} $$
$$ \color{green}{\lim_{x\to 0} \frac{2\sin(by)^{2}-2\sin(ay)^{2}}{4y^{2}}=L} $$
$$ \color{green}{L=\frac{b^{2}-a^{2}}{2}} $$
I solved it using the clue from the link, but I am not sure whether my procedure is correct, and I have two doubts. I could only make sense of it like this:
$$ x=2y$$
$$\frac{x}{2}=y$$
when
$$ x\to 0$$
$$ y=\frac{0}{2}=0 $$
Is this the right way to use the substitution?
$$ \color{violet}{\cos(2x)}=1-2\sin^{2}(x) $$
$$ \cos(2x)-1=-2\sin^{2}(x) $$
$$ 1-\cos(2x)=\color{violet}{2\sin^{2}(x)} $$
$$ \lim_{y\to 0} \frac{\cos(m2y)-\cos(n2y)}{4y^{2}} $$
$$ \lim_{y\to 0} \big(\frac{-\cos(2ny)+1}{4y^{2}}-\frac{1-\cos(2my)}{4y^{2}}\big) $$
$$ \lim_{y\to 0} \big(\frac{2\sin^{2}(ny)}{4y^{2}}-\frac{2\sin^{2}(my)}{4y^{2}}\big) $$
$$ \lim_{y\to 0} \frac{2}{4}\frac{\sin^{2}(ny)}{y^{2}}-\frac{2}{4}\frac{\sin^{2}(my)}{y^{2}} $$
Here the arguments of the sines involve only $n$ or $m$, not their squares, but since the sines themselves are squared it makes sense to introduce $n^2$ and $m^2$ as below. Is that right?
$$ \lim_{y\to 0} \frac{2}{4}\frac{\color{red}{n^{2}}\sin^{2}(ny)}{\color{red}{n^{2}}y^{2}}-\frac{2}{4}\frac{\color{red}{m^{2}}\sin^{2}(my)}{\color{red}{m^{2}}y^{2}} $$
$$ \frac{1}{2} \lim_{y\to 0}n^{2}\cdot\frac{\sin^{2}(ny)}{n^{2}y^{2}}-\frac{1}{2} \lim_{y\to 0}m^{2}\cdot\frac{\sin^{2}(my)}{m^{2}y^{2}} $$
$$ \frac{1}{2} n^{2}\cdot 1-\frac{1}{2} m^{2}\cdot 1 $$
$$ \frac{n^{2}-m^{2}}{2} $$
|
Abraham Carrasquel
|
https://math.stackexchange.com/questions/5102179/find-lim-x-to-0-frac-cosmx-cosnxx2-fracn2-m22
|
{
"answer_id": 5102265,
"answer_link": null,
"answer_owner": "Ryszard Szwarc",
"answer_text": "The properties\n\n$\\sin^2t+\\cos^2t=1$\n\n and\n\n$\\lim_{t\\to 0}{\\sin at\\over t}=a$\n\n suffice.\n\n$${\\cos (mx)-\\cos (nx)\\over x^2}={\\cos^2(mx)-\\cos^2(nx)\\over [\\cos(mx)+\\cos(nx)]}\\,{1\\over x^2}\\\\ ={1\\over \\cos(mx)+\\cos(nx)}{\\sin^2(nx)-\\sin^2(mx)\\over x^2}\\\\ =\n\n{1\\over \\cos(mx)+\\cos(nx)}\\left [\\left ({\\sin (nx)\\over x}\\right)^2 -\\left ({\\sin (mx)\\over x}\\right)^2\\right ]$$\n\n Thus the limit, when\n\n$x\\to 0,$\n\n is equal\n\n${1\\over 2}(n^2-m^2).$",
"is_accepted": true,
"score": 5
}
|
CC BY-SA (Stack Exchange content)
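The procedure can be sanity-checked numerically before worrying about rigor; a sketch assuming the `mpmath` library (high precision guards against cancellation for small $t$):

```python
from mpmath import mp, mpf, cos

mp.dps = 40
m, n = mpf(2), mpf(5)
for t in [mpf(10)**-2, mpf(10)**-4, mpf(10)**-6]:
    print(t, (cos(m*t) - cos(n*t))/t**2)   # -> (n^2 - m^2)/2 = 10.5
```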
|
3,472,203
|
$\lim_{(x→\pi/6)}\frac{2\log(Γ(\sin x))-\logπ}{Γ(\sec 2x)-1}$
|
Find:
$$\lim_{x\to\frac{\pi}{6}}\frac{2\log(\Gamma(\sin x))-\log\pi}{\Gamma(\sec 2x)-1}$$
I can find this limit using L'Hôpital's rule. I don't know how to solve this without using L'Hôpital. The question is given by Jalil Hajimir.
Solution on Wolfram Alpha
.
|
Kian
|
https://math.stackexchange.com/questions/3472203/lim-x%e2%86%92-pi-6-frac2-log%ce%93-sin-x-log%cf%80%ce%93-sec-2x-1
|
{
"answer_id": 3472943,
"answer_link": null,
"answer_owner": "Nanayajitzuki",
"answer_text": "Basically, if what did you mean the alternative approach is through the Taylor series or anything analogous, there is no different for a continuous function dealt with L'Hopital or series expansion. the proof of L'Hopital already showed that the essential this how to find the derivative for the biggest non-trivial order in both numerator and denominator, no matter you take the derivative regularly or find it by Taylor expansion. so actually I don't think the method I put here can be a so-called 'new' approach, for neither the form of your Gamma function become more complicated nor their expansion taken at other non-trivial points will fundamentally increase the difficulty of the problem which is still on a 'good' continuous function.\n\nbegin with Legendre duplication formula\n\n$$\\Gamma(z)\\Gamma\\left(z+\\tfrac1{2}\\right)=2^{1-2z}\\sqrt{\\pi}\\Gamma(2z)$$\n\nwhich is\n\n$$\\Gamma(\\sin x)\\Gamma\\left(\\sin x+\\tfrac1{2}\\right)=2^{1-2\\sin x}\\sqrt{\\pi}\\Gamma(2\\sin x)$$\n\nor\n\n$$\\begin{aligned}\n\n2\\ln\\Gamma(\\sin x)&=\\ln\\pi+2\\ln2\\cdot(1-2\\sin x)+2\\ln\\Gamma(1+(2\\sin x-1))-2\\ln\\Gamma\\left(1+\\tfrac{2\\sin x-1}{2}\\right)\\\\\n\n&=\\ln\\pi-2\\ln2\\cdot z_1+2\\ln\\Gamma(1+z_1)-2\\ln\\Gamma\\left(1+\\tfrac{z_1}{2}\\right)\n\n\\end{aligned}$$\n\nby\n\n$\\Gamma(1+z)=z\\Gamma(z)$\n\n we also have\n\n$$\\begin{aligned}\n\n\\Gamma(\\sec(2x))&=(\\sec(2x)-1)\\Gamma(\\sec(2x)-1)=(1+(\\sec(2x)-2))\\Gamma(1+(\\sec(2x)-2))\\\\\n\n&=(1+z_2)\\Gamma(1+z_2)\n\n\\end{aligned}$$\n\nwe have these two Taylor series at\n\n$x=\\tfrac{\\pi}{6}$\n\n$$z_1=2\\sin x-1=\\sqrt{3}\\left(x-\\tfrac{\\pi}{6}\\right)+o(x-\\tfrac{\\pi}{6})\\\\\n\nz_2=\\sec(2x)-2=4\\sqrt{3}\\left(x-\\tfrac{\\pi}{6}\\right)+o(x-\\tfrac{\\pi}{6})$$\n\nwhich means\n\n$z_1$\n\n and\n\n$z_2$\n\n have same highest order in expansion, next, by Weierstrass product\n\n$$\\Gamma(1+z)=z\\Gamma(z)=e^{-\\gamma z}\\prod_{n=1}^{\\infty}\\left(1+\\frac{z}{n}\\right)^{-1}e^{z/n}$$\n\nwhere as\n\n$z\\to0$\n\n$$\\prod_{n=1}^{\\infty}\\left(1+\\frac{z}{n}\\right)^{-1}e^{z/n}=\\prod_{n=1}^{\\infty}\\left(1-\\frac{z^2}{n^2}+o(z^2)\\right)$$\n\nhence\n\n$$\\Gamma(1+z)=1-\\gamma z+o(z)$$\n\nthat is\n\n$$2\\ln\\Gamma(\\sin x)-\\ln\\pi=-2\\ln2\\cdot z_1+2(1-\\gamma z_1)-2\\left(1-\\tfrac{\\gamma z_1}{2}\\right)+o(z_1)=(-2\\ln2-\\gamma)z_1+o(z_1)$$\n\nand\n\n$$\\Gamma(\\sec(2x))=(1+z_2)(1-\\gamma z_2+o(z_2))=1+(1-\\gamma)z_2+o(z_2)$$\n\ntherefore your answer is\n\n$$\\lim_{x\\to\\pi/6}\\frac{(-2\\ln2-\\gamma)z_1(x)+o(z_1)}{(1-\\gamma)z_2(x)+o(z_2)}=\\lim_{x\\to\\pi/6}\\frac{2\\ln2+\\gamma}{(\\gamma-1)\\cdot\\tfrac{z_2(x)}{z_1(x)}}=\\frac{2\\ln2+\\gamma}{4(\\gamma-1)}$$",
"is_accepted": true,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
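The value $\frac{2\ln 2+\gamma}{4(\gamma-1)}$ can be confirmed numerically; a sketch assuming the `mpmath` library, approaching $\pi/6$ from the right:

```python
from mpmath import mp, mpf, loggamma, gamma, log, sin, sec, pi, euler

mp.dps = 30
F = lambda x: (2*loggamma(sin(x)) - log(pi))/(gamma(sec(2*x)) - 1)
for h in [mpf(10)**-3, mpf(10)**-5, mpf(10)**-7]:
    print(F(pi/6 + h))
print((2*log(2) + euler)/(4*(euler - 1)))   # claimed limit, ~ -1.1611
```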
|
2,220,765
|
Prob. 15, Chap. 4, in Baby Rudin: Every continuous open mapping of $\mathbb{R}$ into $\mathbb{R}$ is monotonic
|
Here is Prob. 15, Chap. 4 in the book
Principles of Mathematical Analysis
by Walter Rudin, 3rd edition:
Call a mapping of
$X$
into
$Y$
open
if
$f(V)$
is an open set in
$Y$
whenever
$V$
is an open set in
$X$
.
Prove that every continuous open mapping of
$\mathbb{R}^1$
into
$\mathbb{R}^1$
is monotonic.
My effort:
Suppose
$f$
is a continuous open mapping of
$\mathbb{R}^1$
into
$\mathbb{R}^1$
. If
$f$
is not monotonic, then there exist real numbers
$x$
,
$y$
, and
$z$
such that
$$x < y < z,$$
and
$$
\begin{align}
& \mbox{ either } \qquad f(x) < f(y) \ \mbox{ and } \ f(y) > f(z), \\
& \mbox{ or } \qquad f(x) > f(y) \ \mbox{ and } \ f(y) < f(z).
\end{align}
$$
Now as
$f$
is continuous on the closed interval
$[x, z]$
and as
$[x, z]$
is a compact subset of
$\mathbb{R}$
, so the image
$f\left( [x, z] \right)$
is also a compact --- and hence closed and bounded --- subset of
$\mathbb{R}$
.
Thus the set
$f\left( [x, z] \right)$
has a maximum element
$M$
and a minimum element
$m$
.
First, assume that
$$x < y < z, \ \ f(x) < f(y), \ \mbox{ and } \ f(y) > f(z). $$
Then
$$ f(y) > \max \left\{ f(x), f(z) \right\}. \ \tag{1}$$
But in view of (1) above, we can conclude that the maximum
$M$
of
$f\left( [x, z] \right)$
is attained at some interior point
$p$
, say, of
$[x, z]$
.
Then we can conclude that the image set
$f\left( (x, z ) \right)$
of the open set
$(x, z)$
in the domain space
$\mathbb{R}^1$
has a maximum element
$M$
and therefore cannot be open in the codomain space
$\mathbb{R}^1$
; for no
$\delta > 0$
can the open interval
$( M-\delta, M+\delta)$
be contained in
$f\left( (x, z ) \right)$
, which gives rise to a contradiction.
So we assume that
$$ x < y < z, \ \ f(x) > f(y), \ \mbox{ and } \ f(y) < f(z). $$
Then
$$f(y) < \min \left\{ f(x), f(z) \right\}. \ \tag{2}$$
Then the minimum
$m$
of the set
$f \left( [x, z] \right)$
is attained at some interior point
$q$
, say, of
$[x, z]$
, which implies that the image under
$f$
of the open set
$(x, z)$
fails to be open because this image set contains
$m$
but fails to contain the open interval
$(m-\delta, m+\delta)$
for any real number
$\delta > 0$
, which is a contradiction.
Hence every continuous open mapping
$f$
of
$\mathbb{R}^1$
into
$\mathbb{R}^1$
is monotonic.
Is this proof correct? If so, then what about the presentation? Is the presentation lucid enough too? If not, then where does the problem lie?
|
Saaqib Mahmood
|
https://math.stackexchange.com/questions/2220765/prob-15-chap-4-in-baby-rudin-every-continuous-open-mapping-of-mathbbr
|
{
"answer_id": 2221425,
"answer_link": null,
"answer_owner": "xpaul",
"answer_text": "Hint: Let $a<b$. Since $f(x)$ is continuous and open, then\n\n$$ f((a,b))=(c,d) $$\n\nfor some $c<d$. Try to show $f({a,b})={c,d}$. From this, either $f(a)=c,f(b)=d$ or $f(a)=d,f(b)=c$. Namely $f(a)<f(b)$ or $f(a)>f(b)$.",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
|
2,148,777
|
Limit of the Riemann sum $ \sum_{i=1}^n \sin({i\pi \over n}){\pi \over n}$
|
Show that $f: [0, \pi] \to \mathbb R$, $f(x) = \sin(x)$ is Riemann-integrable and determine $\int_0^\pi f$ (e.g. by Riemann sums)
I showed it is Riemann-integrable because it is continuous.
So, I made an equal partition $P_n$, s. t. $|x_i - x_{i-1}|= {\pi \over n}$
and $$S(f, P, Z) = \sum_{i=1}^n \sin({i\pi \over n}){\pi \over n}$$
I've been given a formula $\sin(a)+\sin(a+t)+\sin(a+2t)+...+\sin(a+(n-1)t) = {\sin(nt/2) \over \sin(t/2)} \sin(a+{n-1 \over 2}t)$
whis is good for all $a, t \in \mathbb R$, $t \neq 0$.
I took out $\pi \over n$ from the sum and put it in front of it, ran the formula and got $${{\sin({n({\pi \over n}) \over 2})} \over \sin({{\pi \over n} \over 2})} \sin({\pi \over n} + {n-1 \over 2}{\pi \over n})$$
I rolled it around a bit, but could not get $2$ out of it. Any help?
|
repulsive23
|
https://math.stackexchange.com/questions/2148777/limit-of-the-riemann-sum-sum-i-1n-sini-pi-over-n-pi-over-n
|
{
"answer_id": 2148800,
"answer_link": null,
"answer_owner": "Guy Fsone",
"answer_text": "Since $$\\sum_{k= 1}^{n} \\sin kx= \\frac{\\sin\\left({nx\\over2} \\right)}{\\sin\\left(\\frac x2\\right)}\\sin\\left( \\frac{n+1}{2}x\\right)$$\n\ntaking $x= \\frac \\pi n$ we get,\n\n$$S = {\\pi \\over n}\\sum_{i=1}^n \\sin({i\\pi \\over n}) = 2\n\n{{{\\pi \\over 2n}} \\over \\sin({{\\pi \\over 2n} })} \\sin\\left( {n+1 \\over n}{\\pi \\over 2}\\right)\\to 2 $$\n\nsince $$ \\lim_{n\\to \\infty }{{{\\pi \\over 2n}} \\over \\sin({{\\pi \\over 2n} })} = \\lim_{h\\to 0} \\frac{h}{\\sin h}= 1$$",
"is_accepted": false,
"score": 1
}
|
CC BY-SA (Stack Exchange content)
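A direct numerical check that the sums tend to $2$; a sketch assuming the `mpmath` library:

```python
from mpmath import mp, sin, pi

mp.dps = 20
for n in [10, 100, 1000]:
    s = sum(sin(i*pi/n) for i in range(1, n + 1))*pi/n
    print(n, s)   # -> 2
```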
|
5,107,505
|
Determine the nature of the series $ \sum_{n=1}^{\infty} \left( e^{\frac{1}{n}} - 1 - \frac{1}{n^4} \right) $
|
We have this series
$$
\sum_{n=1}^{\infty} \left( e^{\frac{1}{n}} - 1 - \frac{1}{n^4} \right)
$$
Preface: can the sum be split into three terms?
Let
$a_n = e^{1/n} - 1 - \frac{1}{n^4}$
:
$$
\lim_{n \to \infty} a_n = \lim_{n \to \infty} \left( e^{1/n} - 1 - \frac{1}{n^4} \right).
$$
Since
$e^{1/n} \to 1$
and
$\frac{1}{n^4} \to 0$
, it follows that
$$
\lim_{n \to \infty} a_n = 1 - 1 - 0 = 0.
$$
The limit of the general term is zero, so the necessary condition for the convergence of the series is satisfied. Let us see if there exists a convergent majorant series. Observe that
$e^{1/n} - 1$
behaves like
$\frac{1}{n}$
for large
$n$
, so
$$
a_n \sim_{\infty} \frac{1}{n} - \frac{1}{n^4} \sim_{\infty} \frac{1}{n}.
$$
The harmonic series diverges, but I do not think this alone lets me conclude that
$\sum a_n$
diverges.
Must I use a different criterion?
|
Sebastiano
|
https://math.stackexchange.com/questions/5107505/determine-the-nature-of-the-series-sum-n-1-infty-left-e-frac1n
|
{
"answer_id": 5107512,
"answer_link": null,
"answer_owner": "xpaul",
"answer_text": "Using\n\n$$ e^x\\ge 1+x $$\n\none has\n\n$$ e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4}\\ge \\frac1n-\\frac1{n^4} $$\n\nand hence\n\n$$ \\sum_{n=1}^\\infty\\bigg(e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4}\\bigg)\\ge \\sum_{n=1}^\\infty\\frac1n-\\sum_{n=1}^\\infty\\frac1{n^4}=\\infty. $$\n\nSo\n\n$\\sum_{n=1}^\\infty\\bigg(e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4}\\bigg)$\n\n diverges.",
"is_accepted": true,
"score": 3
}
|
CC BY-SA (Stack Exchange content)
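The divergence shows up clearly in the partial sums, which grow like $\ln N$; a sketch assuming the `mpmath` library:

```python
from mpmath import mpf, exp, log

def partial(N):
    return sum(exp(1/mpf(n)) - 1 - 1/mpf(n)**4 for n in range(1, N + 1))

for N in [100, 1000, 10000]:
    print(N, partial(N), log(N))   # grows together with log N
```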
|
484,160
|
Separate incomplete elliptic integral into real and imaginary parts
|
I am working in a problem that involves Incomplete Elliptic Integrals of the First and Second kind of the form $F(\sin^{-1}x~|~m)$ and $E(\sin^{-1}x~|~m)$ where the parameters $m$, $x$ are real numbers in the range $m>1$ and $1/\sqrt{m} \le x \le 1$.
($x$ and $m$ are related to the commonly used argument $\phi$ and modulus $m$ by $x \equiv \sin \phi$ and $m \equiv k^2$)
As can be seen by plotting them, in this range the real part of the integrals is independent of $x$ while the imaginary part isn't. As an example, see
this plot for F
and
this other for E
for a value $m=5$.
What I would like to do is to separate the real and imaginary part of this integrals, at least in this particular range. In other words, finding the real valued functions $f_{re} (m)$, $g_{re} (m)$, $f_{im} (x,m)$ and $g_{im} (x,m)$ that satisfy:
$$
F(\sin^{-1}x~|~m) \equiv f_{re} (m) + \text{i} f_{im} (x,m)
$$
$$
E(\sin^{-1}x~|~m) \equiv g_{re} (m) + \text{i} g_{im} (x,m)
$$
in the range $m>1$ and $1/\sqrt{m} \le x \le 1$.
By using the Reciprocal Modulus Transformations (see DLMF section 19.7) and taking the limit $x\rightarrow 1/\sqrt{m}$, I have found the real parts to be:
$$
f_{re}(m) \equiv \frac{1}{\sqrt{m}} K\left(\frac{1}{m}\right)
$$
$$
g_{re}(m) \equiv \sqrt{m} \left[ E \left( \frac{1}{m} \right) - K \left( \frac{1}{m} \right) \right] + \frac{1}{\sqrt{m}} K\left(\frac{1}{m}\right)
$$
However, the imaginary parts $f_{im} (x,m)$, $g_{im} (x,m)$ escape me. I reckon there should be a way of expressing them in terms of incomplete elliptic integrals with parameters in the real valued range.
If I use the reciprocal modulus transformations I will bring the parameter inside the range $0<m<1$ but the argument will now be complex as I will have $x>1$. I have looked everywhere in the literature but I can't seem to find any identity that solves the problem. I could perhaps do something if there was a way of expressing elliptic integrals of complex argument as a combination of elliptic integrals of real argument and imaginary pure argument, but I don't know how it can be done.
Does someone have any insight on how those imaginary parts could be found?
|
m3tro
|
https://math.stackexchange.com/questions/484160/separate-incomplete-elliptic-integral-into-real-and-imaginary-parts
|
{
"answer_id": 3917107,
"answer_link": null,
"answer_owner": "Parcly Taxel",
"answer_text": "The reciprocal modulus transformation is Byrd and Friedman 114.01:\n\n$$F(\\sin^{-1}x,m)=\\frac1{\\sqrt m}F(\\sin^{-1}x\\sqrt m,1/m)$$\n\n$$E(\\sin^{-1}x,m)=\\frac1{\\sqrt m}(mE(\\sin^{-1}x\\sqrt m,1/m)+(1-m)F(\\sin^{-1}x\\sqrt m,1/m))$$\n\nNow the parameter is in\n\n$[0,1]$\n\n but the sine-amplitude is a real number in\n\n$[1,\\sqrt m]$\n\n. Thus B&F 115.02 applies:\n\n$$F(\\sin^{-1}x\\sqrt m,1/m)=K(1/m)-iF(A,1-1/m)$$\n\n$$E(\\sin^{-1}x\\sqrt m,1/m)=E(1/m)-i\\left(F(A,1-1/m)-E(A,1-1/m)+\\frac{(1-1/m)\\sin A\\cos A}{\\sqrt{1-(1-1/m)\\sin^2A}}\\right)$$\n\nwhere\n\n$$A=\\sin^{-1}\\frac{\\sqrt{mx^2-1}}{x\\sqrt{m-1}}$$\n\nNote that I have flipped the signs of the imaginary parts from B&F to match the values of\n\n$E(\\cdot)$\n\n and\n\n$F(\\cdot)$\n\n as calculated by Mathematica and mpmath. Finally we get\n\n$$F(\\sin^{-1}x,m)=\\frac1{\\sqrt m}(K(1/m)-iF(A,1-1/m))$$\n\n$$E(\\sin^{-1}x,m)=\\frac1{\\sqrt m}\\left[mE(1/m)+(1-m)K(1/m)\\\\\n\n-i\\left(F(A,1-1/m)-mE(A,1-1/m)+\\frac{(m-1)\\sin A\\cos A}{\\sqrt{1-(1-1/m)\\sin^2A}}\\right)\\right]$$",
"is_accepted": true,
"score": 8
}
|
CC BY-SA (Stack Exchange content)
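The accepted answer's splitting can be reproduced numerically; a sketch assuming the `mpmath` library, at one sample point with $m>1$ and $1/\sqrt{m} \le x \le 1$ (signs may be sensitive to branch conventions):

```python
from mpmath import mp, mpf, asin, sqrt, ellipf, ellipk

mp.dps = 25
m, x = mpf(5), mpf('0.8')
A = asin(sqrt(m*x**2 - 1)/(x*sqrt(m - 1)))
lhs = ellipf(asin(x), m)
rhs = (ellipk(1/m) - 1j*ellipf(A, 1 - 1/m))/sqrt(m)
print(lhs)
print(rhs)   # should match lhs
```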
|
5,107,506
|
calculating an integral
|
I need to compute the integral
$$\int \frac{1 - \ln u}{(u - \ln u)^2} \, du$$
i tried applying the substitution
$w = u - \ln u$
$$\frac{dw}{du} = 1 - \frac{1}{u}, \quad \text{so} \quad dw = \left(1 - \frac{1}{u}\right) du$$
this means that the integral becomes
$$\int \frac{1 - u + w}{w^2} \cdot \frac{dw}{1 - \frac{1}{u}}$$
but this seems complicated and I don't know how to reduce this expression further. Can someone help me find a better way?
|
Demir
|
https://math.stackexchange.com/questions/5107506/calculating-an-integral
|
{
"answer_id": 5107520,
"answer_link": null,
"answer_owner": "Mike",
"answer_text": "I think I may see something, but I think I may have lucked my way into it. Divide numerator and denominator by\n\n$u^2$\n\n to get\n\n$$\\int\\dfrac{1-\\ln u}{(u-\\ln u)^2}du=\\int\\dfrac{\\frac1{u^2}-\\frac{\\ln u}{u^2}}{(1-\\frac{\\ln u}u)^2}du$$\n\n$$v=1-\\dfrac{\\ln u}u,dv=-(\\frac1{u^2}-\\dfrac{\\ln u}{u^2})du$$\n\nThe integral then becomes\n\n$$-\\int\\dfrac{dv}{v^2}=\\frac1v+C=\\dfrac1{1-\\frac{\\ln u}u}+C=\\frac{u}{u-\\ln u}+C$$",
"is_accepted": false,
"score": 2
}
|
CC BY-SA (Stack Exchange content)
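The answer can be verified by differentiating the result: indeed $\frac{d}{du}\frac{u}{u-\ln u} = \frac{(u-\ln u) - u\left(1-\frac{1}{u}\right)}{(u-\ln u)^2} = \frac{1-\ln u}{(u-\ln u)^2}$. A numerical sketch assuming the `mpmath` library:

```python
from mpmath import mp, mpf, diff, log

mp.dps = 20
F = lambda u: u/(u - log(u))                        # proposed antiderivative
integrand = lambda u: (1 - log(u))/(u - log(u))**2
for u in [mpf('0.5'), mpf(2), mpf(10)]:
    print(u, diff(F, u) - integrand(u))   # ~ 0
```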
|
4,282,084
|
How do I solve for x when given the derivative equation and the slope of the tangent line?
|
The derivative of a function
$f$
is given by
$f′(x)=0.1x+e^{0.25x}$
. At what value of
$x$
for
$x>0$
does the line tangent to the graph of
$f$
at
$x$
have slope
$2$
?
This provides the derivative and the slope of the tangent line, but I am not sure how to solve for $x$.
|
Maus
|
https://math.stackexchange.com/questions/4282084/how-do-i-solve-for-x-when-given-the-derivative-equation-and-the-slope-of-the-tan
|
{
"answer_id": 4282102,
"answer_link": null,
"answer_owner": "SV-97",
"answer_text": "Finding the exact value is hard and even if you have a \"closed form expression\" it'll probably not be \"just some number\" (see for example the result here\n\nhttps://www.wolframalpha.com/input/?i=solve+2+%3D+0.1x%2Be%5E%280.25x%29\n\n), so we'll have to resort to numerics. If this sounds daunting note that you can get a super simple and good approximation of the actual solution in this case:\n\nfirst note that as\n\n$x$\n\n grows, you can basically ignore the linear part of\n\n$f'$\n\n as it'll become neglible in contrast to the exponential one. So lets assume that\n\n$x$\n\n is sufficiently large and solve\n\n$2=e^{x/4} \\iff x = 4 \\ln 2$\n\n. Since\n\n$f'$\n\n is monotonous we expect our actual answer in the domain\n\n$(0, 4 \\ln2)$\n\n. Taylorexpand\n\n$f'$\n\n around\n\n$4 \\ln 2 = \\ln 16$\n\n to get\n\n$f'(x) \\approx 2.27726+0.6(x-\\ln 16) + 0.0625 (x - \\ln 16)^2$\n\n (note also that the coeffiecients in the expansion become quite small as we go to higher degrees). We now set this equal to\n\n$2$\n\n and solve again to find\n\n$x \\approx 2.2858$\n\n or\n\n$x \\approx -6.3406$\n\n. We're clearly after the\n\n$x \\approx 2.2858$\n\n solution. If you want more precision you can repeat this process:\n\ntaylorexpand around the\n\n$x$\n\n value we just found\n\nsolve the resulting polynomial equation\n\nbut the first estimate is already quite a good estimate, if we consider that the actual value (numerically computed) is at around\n\n$x \\approx 2.28688$\n\n. So via a simple approximation we've managed to find an approximate solution with a relative error of\n\n$\\sim 0.047\\%$\n\n.",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
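Since the equation $0.1x + e^{0.25x} = 2$ is transcendental, a numerical root-finder is the practical route; a sketch assuming the `mpmath` library:

```python
from mpmath import mp, mpf, exp, findroot

mp.dps = 20
fp = lambda x: mpf('0.1')*x + exp(mpf('0.25')*x)   # the given derivative
root = findroot(lambda x: fp(x) - 2, 2.3)
print(root, fp(root))   # root ~ 2.28688
```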
|
5,107,430
|
Can a function have a cusp at a point without being twice differentiable?
|
A function
$f:[a,b]\to \mathbb{R}$
is said to have a
cusp
at a point
$c$
if:
$f$
is continuous at
$c$
;
The one-sided derivatives satisfy
$$\lim_{x \to c^-} f'(x) = -\infty \quad \text{and} \quad \lim_{x \to c^+} f'(x) = +\infty.$$
(or the reverse signs)
This definition doesn’t require
$f$
to be twice differentiable.
However around
$c$
, in many examples, the second derivative can help describe the shape of the cusp — for instance, if the second derivatives on both sides have the same sign, the graph forms a sharp point with a vertical tangent.
My question is:
Can there exist a function that has a cusp at a point in this sense, but is not twice differentiable at that point (or perhaps not even twice differentiable in any neighborhood of it)?
I couldn't come up with an example, so I tried to prove that it is impossible, but I failed.
|
pie
|
https://math.stackexchange.com/questions/5107430/can-a-function-have-a-cusp-at-a-point-without-being-twice-differentiable
|
{
"answer_id": 5107437,
"answer_link": null,
"answer_owner": "dfnu",
"answer_text": "If we interpret your question as:\n\nIs it true that a function with a cusp in\n\n$c$\n\n is twice differentiable in a punctured neighborhood of\n\n$c$\n\n?\n\nthe answer is no.\n\nYou could start from a piecewise constant function\n\n$g(x)$\n\n, whose envelopes are the functions\n\n$f_1(x)=1/\\sqrt[3]x$\n\n, and\n\n$f_2(x) = 2/\\sqrt[3]x$\n\n (dashed red lines in the picture below), and then define\n\n$$f(x) = \\int_0^x g(t) dt = \\lim_{\\varepsilon\\to 0}\\int_{\\varepsilon}^x g(t)dt.$$\n\nThe function\n\n$f(x)$\n\n thus defined is continuous in\n\n$0$\n\n, and\n\n$$\\lim_{x\\to 0^+} f'(x) = \\lim_{x\\to 0^+} g(x) = +\\infty,$$\n\nand\n\n$$\\lim_{x\\to 0^-} f'(x) = \\lim_{x\\to 0^-} g(x) = -\\infty,$$\n\nas you require, and yet there are points where the first (and of course the second) derivative of\n\n$f$\n\n is not defined, in\n\nevery\n\n (punctured) neighborhood of\n\n$0$\n\n.",
"is_accepted": false,
"score": 5
}
|
CC BY-SA (Stack Exchange content)
|
1,030,860
|
Finding radius when performing shell method
|
Find the volume of the region generated by revolving $y = -x^3$ and $y = -\sqrt x$ around the $x$-axis.
I don't understand how the radius component is $-y$; why not $+y$?
|
Jermiah
|
https://math.stackexchange.com/questions/1030860/finding-radius-when-performing-shell-method
|
{
"answer_id": 1030895,
"answer_link": null,
"answer_owner": "Rory Daulton",
"answer_text": "The region defined by those equations is in the fourth quadrant, where $y$ is negative. The radius must be positive, so it is written as $|y|$ or $-y$. When $y$ is negative, $-y$ is positive, despite the presence of the minus sign.",
"is_accepted": false,
"score": 0
}
|
CC BY-SA (Stack Exchange content)
|